Mar 16 2017

I'm excited to be presenting at DrupalCon Baltimore next week on a topic near and dear to my heart: Just Keep Swimming: Don't drown in your open source project!

On a basic level, I'll outline how I deal with rage-inducingly-vague bug reports, hundreds of GitHub notifications per day, and angry and entitled users, as well as how I keep a positive attitude that allows me to continue contributing on a daily basis.

This will be my first time presenting in the Being Human track, but I'm excited because in a lot of ways, it's the DrupalCon track that I get the most out of—what good is the work we do if we can't tie it back to our own identity, our community, and ultimately a sense of relationship with other people? There are many other great sessions in the Being Human track; if you're a DrupalCon veteran and have spent most of your time in the more technical sessions (and even the 'hallway track'), you're missing out on a wealth of knowledge gained through the community's collective experience!

The Birds of a Feather schedule was also just posted, so please add a BoF if you're interested in discussing some aspect of Drupal, Open Source, community, etc. (I just added a BoF to discuss Managing Drupal sites with Composer, as that's a topic that I deal with on a daily basis, and there's surely room for improvement!)

Mar 14 2017

Last week, the proverbial floodgates were opened when Drupal.org finally opened access to any registered user to create a 'full' Drupal.org project (theme, module, or profile). See the Project Applications Process Revamp issue on Drupal.org for more details.

Drupal.org modules page
You can now contribute full Drupal projects even if you're new to the community!

Some people weren't comfortable with this change, but it makes onboarding new Drupal contributors a more painless and rewarding experience. In the past, when projects got stuck in Project Application Review hell, it could take months or even years before a new Drupal contrib project was granted access to the exclusive 'full project' club! One sad byproduct of that process was that so few contrib themes seem to have been created in the past year or so (compared to the Drupal 6 and 7 release cycles, parts of which still allowed anyone access to full projects).

Whether it's a net benefit for the community or whether there are dragons lurking in this new process, only time will tell. I think it will be a huge boon for contrib health, especially since full projects are much easier to download, evaluate, test, fix, and work with than sandbox projects—and security coverage is still possible through the existing PAR process. I know that if I had hit tons of roadblocks the first time I tried submitting a theme I'd just spent many hours building, I would've given up on submitting any Drupal contrib at all. Other ecosystems (WordPress plugins, NPM modules, Packagist libraries, etc.) don't have such a high bar for entry, but they don't offer automatic security support either!

In any case, one of the best things to come out of the entire PAR situation is a neat script/project, PAReview.sh, that runs any Drupal module through a gauntlet of tests, checking for code quality, typos, etc. And you can run it on your own modules, easily—whether they're on Drupal.org, GitHub, or locally, on your filesystem!

Traditionally, you had two options for running PAReview.sh on your module:

  1. You could use the extremely handy online pareview.sh website to test any publicly-accessible Git project repository. (But this didn't work for local or private projects, or for really quick iterative fixes.)
  2. You could spend some time installing all the script's dependencies as outlined in the installation docs. (But this can take a while, and assumes you're on Ubuntu — some of the instructions take some time to get right on a Mac or other Linux distros.)

At Pieter De Clercq's suggestion, though, I added out-of-the-box support for PAReview.sh to Drupal VM (see the issue: Feature Idea: PAReview.sh as installed extra).

Enabling PAReview.sh on Drupal VM

The setup is very simple, and works the same on macOS, Linux, or Windows (anywhere Drupal VM runs!):

  1. Download Drupal VM.
  2. Install Vagrant and VirtualBox.
  3. Create a config.yml file inside the Drupal VM directory, and put in the following contents:

    post_provision_scripts:
      - "../examples/scripts/pareview.sh"
    
    composer_global_packages:
      - { name: hirak/prestissimo, release: '^0.3' }
      - { name: drupal/coder, release: '^8.2' }
    
    nodejs_version: "6.x"
    nodejs_npm_global_packages:
      - eslint
    
    
  4. Open a Terminal and cd into the Drupal VM directory, and run vagrant up.

Note: For an even speedier build (if you're just using Drupal VM to test out a module or three), you can add the following to config.yml to make sure only the required dependencies are installed, and to prevent a full Drupal 8 site from being built (if you don't need it):

installed_extras: []
extra_packages: []
drupal_build_composer_project: false
drupal_install_site: false
configure_drush_aliases: false

A new Drupal VM instance should be built within 5-10 minutes (depending on your computer's speed), with PAReview.sh configured inside.

Using PAReview.sh on Drupal VM

Drupal VM pareview.sh script after vagrant ssh

In the Drupal VM directory (still in the Terminal), run vagrant ssh to log into the VM. Then run the pareview.sh script on a Drupal.org module just to ensure it's working:

pareview.sh http://git.drupal.org/project/honeypot.git

This should output some messages, and maybe even a few errors (hey, no maintainer is perfect!), and then drop you back to the command line.

You can also run the script against any local project. If you drop a module named my_custom_module into the same directory as the Vagrantfile, you can see it inside the VM in the /vagrant folder, so you can run the script against your custom module like so:

pareview.sh /vagrant/my_custom_module

See the PAReview.sh documentation for more usage examples and notes, and go make your modules perfect!

Conclusion

Special thanks to Pieter De Clercq (PieterDC) for the inspiration for this new feature, to Klaus Purer (klausi) for the amazing work on PAReview.sh, and also to Patrick Drotleff (patrickd) for maintaining the free online version.

Feb 24 2017

Another day, another Acquia Developer Certification exam review (see the previous one: Certified Back End Specialist – Drupal 8). I recently took the Front End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on the exam below.

Acquia Certified Front End Specialist - Drupal 8 Exam Badge

Now that I've completed all the D8-specific Certifications, I think the only Acquia Certification I haven't completed is the 'Acquia Cloud Site Factory' Exam—one for which I'm definitely not qualified, as I haven't worked on a project that uses Acquia's 'ACSF' multisite setup (though I do a lot of other multisite and install profile/distribution work, just nothing specific to Site Factory!). Full disclosure: Since I work for Acquia, I am able to take these Exams free of charge, though many of them are worth the price depending on what you want to get out of them. I paid for the first two that I took (prior to Acquia employment) out of pocket!

Some old, some new

This exam feels very much in the style of the Drupal 7 Front End Specialist exam—there are questions on theme hook suggestions, template inheritance, basic HTML5 and CSS usage, basic PHP usage (e.g. how do you combine two arrays, in what order are PHP statements evaluated... really simple things), etc.

The main difference with this exam centers on the little differences in doing all the same things. For example, instead of PHPTemplate, Drupal 8 uses Twig, so there are questions relating to Twig syntax (e.g. how to chain a filter to a variable, how to print a string from a variable that has multiple array elements, how to do basic if/else statements, etc.). The question content is the same, but the syntax is what you'd use in Drupal 8. Another example is theme hook suggestions—the general functionality is identical, but there were a couple questions centered on how you add or use suggestions specifically in Drupal 8.
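
For reference, here's a minimal sketch of adding a custom theme hook suggestion in Drupal 8 (the theme name mytheme and the suggestion logic are illustrative; this would live in mytheme.theme):

<?php
/**
 * Implements hook_theme_suggestions_HOOK_alter() for node templates.
 */
function mytheme_theme_suggestions_node_alter(array &$suggestions, array $variables) {
  // Suggest a template per bundle and view mode, e.g.
  // node--article--teaser.html.twig.
  $node = $variables['elements']['#node'];
  $view_mode = $variables['elements']['#view_mode'];
  $suggestions[] = 'node__' . $node->bundle() . '__' . $view_mode;
}
?>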

The main thing that tripped me up a bit (mostly because I haven't used it much) is the new Javascript functionality and theme libraries in Drupal 8. You should definitely practice adding JS and CSS files via libraries, and learn the differences in Drupal 8's Javascript layer (things like using 'use strict';, how to make sure Drupal.behaviors are available to your JS library, and the like).
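
If you haven't used it in a while, here's a minimal sketch of the Drupal 8 behavior pattern (the behavior name is hypothetical, and this assumes your library declares core/jquery, core/drupal, and core/jquery.once as dependencies):

(function ($, Drupal) {
  'use strict';

  Drupal.behaviors.myThemeExample = {
    attach: function (context, settings) {
      // Use .once() so the work happens only one time per element,
      // even when behaviors are re-run after AJAX requests.
      $(context).find('.my-element').once('myThemeExample').each(function () {
        $(this).addClass('is-enhanced');
      });
    }
  };
})(jQuery, Drupal);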

I think if you've built at least one custom theme with a few Javascript and CSS files, and a few custom templates, you'll do decently on this exam. Bonus points if you've added a JS file that shouldn't be aggregated, added translatable strings in both Twig files and in JS, and worked out the differences in Drupal's stable and classy themes in Drupal 8 core.

For myself, the only preparation for this exam was:

  • I've helped build two Drupal 8 sites with rather complex themes, with many libraries, dozens of templates, use of Twig extends and include syntax, etc. Note that I was probably only involved in theming work 20-30% of the time.
  • I built one really simple Drupal 8 custom theme for a photo sharing website (closed to the public): https://jeffgpix.com/
  • I read through the excellent Drupal 8 Theming Guide by Sander Tirez (sqndr)

My Results

I scored an 83.33% (10% better than the Back End test... maybe I should stick to theming :P), with the following section-by-section breakdown:

  • Fundamental Web Development Concepts: 92.85%
  • Theming concepts: 73.33%
  • Templates and Pre-process Functions: 87.50%
  • Layout Configuration: 66.66%
  • Performance: 100.00%
  • Security: 100.00%

I'm not surprised I scored worst in Layout Configuration, as there were some questions about defining custom regions, overriding region-specific markup, and configuring certain things using the Breakpoints and Responsive Images module. I've done all these things, but only rarely, since you generally set up breakpoints only when you initially build the theme (and I only did this once), and I only deal with Responsive Images for a few specific image field display styles, so I don't use it enough to remember certain settings, etc.

It's good to know I keep hitting 90%+ on performance and security-related sections—maybe I should just give up site building and theming and become a security and performance consultant! (Heck, I do a lot more infrastructure-related work than site-building outside of my day job nowadays...)

This exam was not as difficult as the Back End Specialist exam, because Twig syntax and general principles are very consistent from Drupal 7 to Drupal 8 (and dare I say better and more comprehensible than in the Drupal 7 era!). I'm also at a slight advantage because almost all my Ansible work touches on Jinja2, which is the templating system that inspired Twig—in most cases, syntax, functions, and functionality are identical... you just use .j2 instead of .twig for the file extension!

Feb 22 2017

Continuing along with my series of reviews of Acquia Developer Certification exams (see the previous one: Drupal 8 Site Builder Exam), I recently took the Back End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on the exam below.

Acquia Certified Drupal Site Builder - Drupal 8 2016
I didn't get a badge with this exam, just a cert... so here's the previous exam's badge!

Acquia finally updated the full suite of Certifications—Back/Front End Specialist, Site Builder, and Developer—for Drupal 8, and the toughest exams to pass continue to be the Specialist exams. This exam, like the Drupal 7 version of the exam, requires a deeper knowledge of Drupal's core APIs, layout techniques, Plugin system, debugging, security, and even some esoteric things like basic webserver configuration!

A lot of new content makes for a difficult exam

Unlike the other exams, this exam sets a bit of a higher bar—if you don't do a significant amount of Drupal development and haven't built at least one or two custom Drupal modules (nothing crazy, but at least some block plugins, maybe a service or two, and some other integrations), then it's likely you won't pass.

There are a number of questions that require at least working knowledge of OOP, Composer, and Drupal's configuration system—things that an old-time Drupal developer might know absolutely nothing about! I didn't study for this exam at all, but would likely have scored higher if I'd spent more time going through some of the awesome Drupal ladders or other study materials. The only reason I passed is that I've worked on Drupal 8 sites in my day job for at least six months, and in my work I'm exposed to probably 30-50% of Drupal's APIs.

Unlike in Drupal 7, there are no CSS-related questions and few UI-related questions whatsoever. This is a completely new and more difficult exam that covers a lot of corners of Drupal 8 that you won't touch if you're mostly a site builder or themer.

My Results

I scored a 73%, with the following section-by-section breakdown:

  • Fundamental Web Concepts: 80.00%
  • Drupal core API: 55.00%
  • Debug code and troubleshooting: 75.00%
  • Theme Integration: 66.66%
  • Performance: 87.50%
  • Security: 87.50%
  • Leveraging Community: 100.00%

I am definitely least familiar with Drupal 8's core APIs, as I tend to stick to solutions that can be built with pre-existing modules, and have as yet avoided diving too deeply into custom code for the projects I work on. Drupal 8 is really streamlined in that sense—I can do a lot more just using Core and a few Contrib modules than I could've done in Drupal 7 with thousands of lines of custom code!

Also, I'm still trying to wrap my head around the much more formal OOP structure of Drupal (especially around caching, plugins, services, and theme-related components), and I bet that I could score 10% or more higher in another 6 months, just due to familiarity.

I also scored fairly low on the 'debug code and troubleshooting' section, because it dealt with some lower-level debugging tools than what I prefer to use day-to-day. I use Xdebug from time to time, and it really is necessary for some things in Drupal 8 (where it wasn't so in Drupal 7), but I stick to Devel's dpm() and Devel Kint's kint() as much as I can, so I can debug in the browser where I'm more comfortable.

In summary, this exam was by far the toughest one I've taken, and the first one where I'd consider studying a bit before attempting to pass it again. I've scheduled the D8 Front End Specialist exam for next week, and I'll hopefully have time to write a 'Thoughts on it' review on this blog after that—I want to see if it's as difficult (especially regarding Twig debugging and the render system changes) as the D8 Back End Specialist exam was!

Feb 15 2017

I've been looking at a ton of different solutions to using Drupal 8's Configuration Management in a way that meets the following criteria:

  1. As easy (or almost as easy) as plain old drush cex -y to export and drush cim -y to import.
  2. Allows a full config export/import (so you don't have to use update hooks to do things like enable modules, delete fields, etc.).
  3. Allows environment-specific configuration and modules (so you don't have to have some sort of build system to tweak things post-config-import—Drupal should manage its own config).
  4. Allows certain configurations to be ignored/not overwritten on production (so content admins could, for example, manage Webforms or Contact Forms on prod, but not have to have a developer pull the database back and re-export config just to account for a new form).

The Configuration Split module checks off the first three of those four requirements, so I've been using it on a couple Drupal 8 sites that I'm building using Acquia's BLT and hosting on Acquia Cloud. The initial setup poses a bit of a challenge due to the 'chicken-and-egg' problem of needing to configure Config Split before being able to use Config Split... therefore this blog post!

Installing Config Split

Configuration Split setup - Drupal 8

The first time you get things set up, you might already be using core CMI, or you might not yet. In my case, I'm not set up with config management at all, and BLT is currently configured out of the box to do a --partial config import, so I need to do a couple specific things to get started with Config Split:

  1. Add the module to your project with composer require drupal/config_split:^1.0.
  2. Deploy the codebase to production with the module in it (push a build to prod).
  3. On production, install the module either through the UI or via Drush (assuming you're not already using core CMI to manage extensions).
  4. On production, create one config split per Acquia Cloud environment, plus one each for local development and CI (so I created local, ci, dev, test, and prod).
    • For each split, make sure the machine name matches the Acquia Cloud environment name, and for the path, use ../config/[environment-machine-name].
    • For local development, use the machine name local; for CI (Travis, Pipelines, etc.), use ci.
  5. Pull the production database back to all your other Acquia Cloud environments so Config Split will be enabled and configured identically in all of them.
  6. On your local, run blt local:refresh to get prod's database, which has the module enabled.
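
As an aside, Config Split also lets you force a split on or off per environment with a configuration override in settings.php. A minimal sketch, assuming the split machine names above (not required for the workflow below, which passes --split flags explicitly):

// In settings.php (or an environment-specific settings include).
$config['config_split.config_split.local']['status'] = TRUE;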

Note that there may be more efficient (and definitely more 'correct') ways of getting Config Split installed and configured initially—but this way works, and is quick for new projects that don't necessarily have a custom install profile or module where you can toss in an update hook to do everything automated.

Configuring the Splits

Now that you have your local environment set up with the database version that has Config Split installed—and now that Config Split is installed in all the other environments using the same configuration, it's time to manage your first split—the local environment!

  1. Enable a module on your local environment that you only use for local dev (e.g. Devel).
  2. Configure the 'Local' config split (on http://local.example.com/admin/config/development/configuration/config-s...)
  3. Select the module for the Local split (e.g. select Devel in the 'Modules' listing).
  4. Select all the module's config items in the 'Blacklist' (use Command on Mac, or Ctrl on Windows to multi-select, e.g. select devel.settings, devel.toolbar.settings, and system.menu.devel).
  5. Click 'Save' to save the config split.

Now comes the important part—instead of using Drush's config-export command (cex), you want to make it a little... spicier:

drush @project.local csex -y --split=local

This command (configuration-split-export, csex for short) will dump all the configuration just like cex... but it splits out all the blacklisted config into the separate config/local directory in your repository!

Note: If you get Command csex needs the following extension(s) enabled to run: config_split., you might need to run drush @project.local cc drush. Weird drush bug.

Next up, you need to create a blank folder for each of the other splits—so create one folder each for ci, dev, test, and prod, then copy the .htaccess file that Config Split added to the config/local folder into each of the other folders.

We're not ready to start deploying config yet—we need to modify BLT to make sure it knows to run csim (short for config-split-import) instead of cim --partial when importing configuration on the other environments. It also needs to know which --split to use for each environment.
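
For example, a deploy to the dev environment would import configuration with something like the following (a sketch, using the split names above):

drush @project.dev csim -y --split=dev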

Modifying BLT

For starters, see the following BLT issue for more information about trying to standardize the support for Configuration Split in BLT: Support Config Split for environment-specific Core CMI.

  1. You need to override some BLT Phing tasks, so first things first, replace the import: null line in blt/project.yml with import: '${repo.root}/blt/build.xml'.
  2. Add a file in the blt/ directory named build.xml, and paste in the contents of this gist: https://gist.github.com/geerlingguy/1499e9e260652447c8b5a936b95440fa
  3. Since you'll be managing all the modules via Config Split, you don't want or need BLT messing with modules during deployment, so clear out all the settings in blt/project.yml as is shown in this gist: https://gist.github.com/geerlingguy/52789b6489d338cb3867e325e2e0a792

Once you've made those two changes to BLT's project.yml and added a blt/build.xml file with your custom Phing tasks, it's time to test if this stuff is all working correctly! Go ahead and run:

blt local:refresh

And see if the local environment is set up as it should be, with Devel enabled at the end of the process. If it is, congratulations! Time to commit this stuff and deploy it to the Cloud!

Once you deploy some code to a Cloud environment, in the build log, you should see something like:

The following directories will be used to merge configuration to import:
/mnt/www/html/project/docroot/../config/default
../config/dev
Import the configuration? (y/n):
y
Configuration successfully imported from:                              [success]
/mnt/www/html/project/docroot/../config/default
../config/dev.

This means it's importing the default config, merged with everything in the dev config split directory. And that means it worked.

Start deploying with impunity!

The great thing about using Drupal 8's core CMI the way it is meant to be used (instead of using it with --partial) is that configuration management becomes a total afterthought!

Remember in Drupal 7 when you had to remember to export certain features? And when features-revert-all would sometimes bring with it a six hour debugging session as to what happened to your configuration?

Remember in Drupal 7 when you had to write hundreds of update hooks to do things like add a field, delete a field, remove a content type, or enable or disable a module?

With CMI, all of that is a distant memory. You do whatever you need to do—delete a field, add a view, enable a dozen modules, etc.—then you export configuration with drush csex --split=local. Commit the code, push it up to prod, et voilà, it's magic! The same changes you made locally are on prod!

The one major drawback to this approach (either with Config Split or just using core CMI alone without --partial) is that, at least at this time, it's an all-or-nothing approach. You can't, for example, allow admins on prod to create new Contact forms, Webforms, blocks, or menus without also pulling the database back, exporting the configuration, then pushing the exported config back to prod. If you forget to do that, CMI will happily delete all the new configuration that was added on prod, since it doesn't exist in the exported configuration!

If I can find a way to get that working with Config Split (e.g. a way to say "ignore configuration in the webform.* config namespace" without using --partial), I think I'll have found configuration nirvana in Drupal 8!

Feb 13 2017

It's been over a year since Drupal 8.0.0 was released, and the entire ecosystem has improved vastly between that version's release and the start of the 8.3.0-alpha releases (which just happened a couple weeks ago).

One area that's seen a vast improvement in documentation and best practices—yet still has a ways to go—is Composer-based project management.

Along with a thousand other 'get off the island' initiatives, the Drupal community has started to take dependency management more seriously, by integrating with the wider PHP ecosystem and maintaining a separate Drupal.org packagist for Drupal modules, themes, and other projects.

At a basic level, Drupal ships with a starter composer.json file that you can use if you're building simpler Drupal sites to manage modules and other dependencies. Then there are projects like the Composer template for Drupal projects (which Drupal VM uses by default to build new D8 sites) and Acquia's BLT which integrate much more deeply with Composer-based tools and libraries to allow easier patching, custom pathing, and extra library support.

One thing I've found lacking in my journey towards dependency management nirvana is a list of all the little tips and tricks that make managing a Drupal 8 project entirely via Composer easier. Therefore I'm going to post some of the common (and uncommon) things I do below, and keep this list updated over time as best practices evolve.

Adding a new module

In the days of old, you would either download a module from Drupal.org directly, and drag it into your codebase. Or, if you were command line savvy, you'd fire up Drush and do a drush dl modulename. Then came Drush Makefiles, which allowed you to specify module version constraints and didn't require the entire module codebase to exist inside your codebase (yay for smaller repositories and repeatable deployments and site rebuilds!).

But with Composer, and especially with the way many (if not most) Drupal 8 modules integrate with required external libraries (e.g. the Search API Solr module requires the Solarium library), it's easier and more correct to use composer require to add a new module.

Drupal.org modules don't quite follow semantic versioning, but the way release versioning works out with the Drupal.org packagist endpoint, you should generally be able to specify a version like "give me any version 8.x-1.0 or later, and I'll be happy".

Therefore, the proper syntax for requiring a module this way (so that when you run composer update drupal/modulename later, it will update to the latest stable 8.x-1.x release) is:

composer require drupal/modulename:^1.0

This says "add modulename to my codebase, and download version 1.0 or whatever is the latest release in the 8.x-1.x release series (including alpha/beta releases, if there hasn't been a stable release yet).

Note on version constraints: Two of the most-commonly-used version constraints I see are ~ (tilde) and ^ (caret). Both are similar in that they tell Composer: 'use this version but update to a newer version in the series', but the tilde is a bit more strict in keeping to the same minor release, while the caret allows for any new version up to the next major release. See this article for more details: Tilde and caret version constraints in Composer. See this Drupal core issue for discussion on why the caret is often preferred in Drupal projects: Prefer carat over tilde in composer.json.
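
As a quick illustration of the difference (standard Composer constraint semantics):

~1.2.3   allows >=1.2.3 <1.3.0   (stays within the 1.2.x minor series)
^1.2.3   allows >=1.2.3 <2.0.0   (anything below the next major release)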

Updating modules

Early on in my Composer adventures, I did the reasonable thing to update my site—I ran composer update, waited a while for everything to be updated, then I committed the updated composer.json and composer.lock files and was on my merry way. Unfortunately, doing this is kind of like cleaning a dirty blue dress shirt by washing it in a bucket of bleach—sure, the stain will be removed, but you'll also affect the rest of your shirt!

If you are meticulous about your dependencies, and lock in certain ones that are finicky at specific versions (e.g. composer require drupal/modulename:1.2) or at a specific git commit hash (composer require drupal/modulename:dev-1.x#dfa710e), then composer update is manageable.

But if you're managing a project with many moving parts using more than a dozen contributed modules... be cautious when considering running composer update without specifying specific modules to update!

Instead, what I recommend is a more cautious approach:

  1. See what modules need updating.
  2. Update those modules specifically using composer update drupal/modulename --with-dependencies.

If you had required the module using a release series like drupal/modulename:^1.0, then Composer will update that module—and only that module—to the latest tagged release in the 8.x-1.x branch. And adding --with-dependencies will ensure that any libraries the module depends on are updated as well (e.g. if you update the Search API Solr module, the Solarium dependency will also be updated).

Another quick tip: In addition to Drupal core's update module functionality and drush pm-updatestatus, you can use Composer's built-in mechanism to quickly scan for outdated dependencies. Just use composer outdated. This will show you if Drupal core, contrib modules, or any other dependencies are outdated.

Removing modules

This one is pretty easy. To remove a module you're no longer using (be sure it's uninstalled first!):

composer remove drupal/modulename

Older versions of Composer required a flag to also remove module dependencies that aren't otherwise required, but modern versions will remove the module and all its dependencies from your composer.json, composer.lock, and the local filesystem.

Requiring -dev releases at specific commits

From time to time (especially before modules are stable or have a 1.0 final release), it's necessary to grab a module at a specific Git commit. You can do this pretty simply by specifying the dev-[branch]#[commit-hash] version constraint. For example, to get the Honeypot module at its latest Git commit (as of the time of this writing):

composer require drupal/honeypot:dev-1.x#dfa710e

Be careful doing this, though—if at all possible, try to require a stable version, then if necessary, add a patch or two from the Drupal.org issue queues to get the functionality or fixes you need. Relying on specific dev releases is one way your project's technical debt increases over time, since you can no longer cleanly composer update that module.

Regenerating your .lock file

Raise your hand if you've ever seen the following after resolving merge conflicts from two branches that both added a module or otherwise modified the composer.lock file:

$ composer validate
./composer.json is valid, but with a few warnings
See https://getcomposer.org/doc/04-schema.md for details on the schema
The lock file is not up to date with the latest changes in composer.json, it is recommended that you run `composer update`.

Since I work on a few projects with multiple developers, I run into this on almost a daily basis. Until recently, I would pick one of the updated modules and run composer update drupal/modulename. But I just found that I can quickly regenerate the lock file without updating or requiring anything, by running:

composer update nothing

Note that some people on Twitter mentioned there's a composer update --lock command that does a similar thing. The docs say "Only updates the lock file hash to suppress warning about the lock file being out of date." — but I've had success with nothing, so I'm sticking with it for now until someone proves --lock is better.

Development dependencies

There are often components of your project that you need when doing development work, but you don't need on production. For example, Devel, XHProf, and Stage File Proxy are helpful to have on your local environment, but if you don't need them in production, you should exclude them from your codebase entirely (not only for minor performance reasons and keeping your build artifacts smaller—non-installed modules can still be a security risk if they have vulnerabilities).

Composer lets you track 'dev dependencies' (using require-dev instead of require) that are installed by default, but can be excluded when building the final deployable codebase (by passing --no-dev when running composer install or composer update).
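
So a production build step might look something like this (the exact flags are illustrative):

composer install --no-dev --optimize-autoloader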

One concrete example is the inclusion of the Drupal VM codebase in a Drupal project. This VM configuration is intended only for local development, and shouldn't be deployed to production servers. When adding Drupal VM to a project, you should run:

composer require --dev geerlingguy/drupal-vm:^4.0

This will add geerlingguy/drupal-vm to a require-dev section in your composer.json file, and then you can easily choose to not include that project in the deployed codebase.

Committing your .lock file

The Composer documentation on the lock file bolds the line:

Commit your application's composer.lock (along with composer.json) into version control.

For good reason—one of the best features of any package manager is the ability to 'lock in' a set of dependencies at a particular version or commit hash, so every copy of the codebase can be completely identical (assuming people haven't gone around git --force pushing changes to the libraries you use!), even if you don't include any of the code in your project.

Ideally, a project would just include a composer.json file, a composer.lock file, and any custom code (and config files). Everything else would be downloaded and 'filled in' by Composer. The lock file makes this possible.

Patching modules

Acquia's BLT includes the composer-patches project, which does what it says on the tin: "Simple patches plugin for Composer."

Use is fairly simple: first, composer require cweagans/composer-patches:^1.0, then add a patches section to the extra section of your composer.json file:

    "extra": {
        "patches": {
            "acquia/lightning": {
                "New LightningExtension subcontexts do not autoload": "https://www.drupal.org/files/issues/2836258-3-lightning-extension-autoload.patch"
            },
            "drupal/core": {
                "Exposed date view filter fix": "https://www.drupal.org/files/issues/2369119-145_0.patch"
            }
    }

Once you've added a patch, you might wonder how to get Composer to apply the patch and update composer.lock while still maintaining the same version you currently have (instead of running composer update drupal/module which may or may not update/change versions of the module).

The safest way is to run composer update nothing (that handy trick yet again!), which allows Composer to delete the module in question (or core), then download the same version anew and apply the specified patch.

Other Tips and Tricks?

Do you know any other helpful Composer tricks or things to watch out for? Please post them in the comments below!

Feb 11 2017

XHProf, a PHP extension originally created and maintained by Facebook, has for many years been the de-facto standard for profiling Drupal's PHP code and performance issues. Unfortunately, as Facebook matured and shifted resources, maintenance of the XHProf extension tailed off around the PHP 7.0 era, and now that we're hitting PHP 7.1, even some sparsely-maintained forks are difficult (if not impossible) to get running with newer versions of PHP.

Enter Tideways.

Tideways has basically taken over the XHProf extension, updated it for modern PHP versions, and re-branded it as 'Tideways' instead of 'XHProf'. This has created a little confusion, since Tideways also offers a branded and proprietary service for aggregating and displaying profiling information through Tideways.io. But you can use Tideways completely independently of Tideways.io, as a drop-in replacement for XHProf. And you can even browse profiling results using the same old XHProf UI!

So in this blog post, I want to show you how you can use Drupal VM (version 4.2 or later) to quickly and easily profile Drupal 8 pages using Tideways (the PHP extension), the XHProf UI, and the XHProf Drupal module (all running locally—no cloud connection or paid service required!). You can even get fancy callgraph images!

Here's a video walkthrough for the more visually-inclined:

[embedded content]

Configure Drupal VM to install Tideways

The only thing you need to do to a stock Drupal VM configuration is make sure tideways is in your list of installed_extras. So, for my VM instance, I created a config.yml file and put the following inside:

---
installed_extras:
  - drush
  - mailhog
  - tideways

You can add whatever other installed_extras you need, but for this testing and benchmarking, I'm only including the essentials for my site.

If you want to have Drupal VM build a Drupal 8 site for you, and also automatically composer require the XHProf module for Drupal 8, you can also add:

drupal_composer_dependencies:
  - "drupal/xhprof:1.x-dev"

This will ensure that, after a Drupal 8 codebase is generated, the appropriate composer require command will be run to add the Drupal XHProf module to the codebase and the composer.json file. You could even add xhprof to the array of drupal_enable_modules in config.yml if you want the module installed automatically during provisioning!
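
That would look something like this in config.yml (a minimal sketch; the variable is a plain list of module machine names):

drupal_enable_modules:
  - xhprof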

Run vagrant up to start Drupal VM and provision it with Tideways, or run vagrant provision if you already have Drupal VM set up and are just adding Tideways to it.

Install Drupal's XHProf module

After Vagrant finishes provisioning Drupal VM, you can enable the XHProf module with drush @drupalvm.drupalvm.dev en -y xhprof (or do it via the 'Extend' page in Drupal's UI). Then, to configure the module to collect profiles for page loads, do the following:

  1. Visit the XHProf configuration page: /admin/config/development/xhprof
  2. Check the 'Enable profiling of page views' checkbox.
  3. Make sure the 'Tideways' extension is selected (it should be, by default).
  4. Check the 'Cpu' and 'Memory' options under 'Profile'
  5. Click 'Save' to save the settings.

Profile a page request!

XHProf Profile link from Drupal module

  1. Visit any page on the site (outside of the admin area, or any of the other paths excluded in the XHProf 'Exclude' configuration).
  2. Find the 'XHProf output' link near the bottom of the page.
  3. Click the link, and you'll see the XHProf module's rendering of the profile for that page.

For more basic profiling, that's all you need to do. But Drupal VM's Tideways integration also automatically sets up the XHProf GUI so you can browse the results in a much more efficient and powerful way. To use the more powerful XHProf GUI:

  1. Visit http://xhprof.drupalvm.dev/ (or xhprof.[yoursiteurl]).
  2. Click on a profile result in the listing.

Drupal 8 home page XHProf profile GUI

In here, you have access to much more granular data, including a full 'callgraph', which is a graphical representation of the entire request flow. Note that it can take a minute or longer to render callgraphs for more complex page loads!

Here's a small snippet of what Drupal 8's home page looks like with empty caches:

Drupal 8 home page callgraph rendered by XHProf GUI

Alternatives

If you're still running PHP 5.6 or 7.0, you can still use XHProf, but it seems like XHProf's maintenance is now in a perpetually fuzzy state—nobody's really picked up the ball consistently after Facebook's maintenance of the extension dropped off.

Another service which has a freemium model but requires the use of a web UI rather than a locally-hosted UI is Blackfire, which is also supported by Drupal VM out of the box!

Feb 10 2017

As someone who loves YAML syntax (so much more pleasant to work with than JSON!), I wanted to jot down a few notes about syntax formatting for the benefit of Drupal 8 developers everywhere.

I often see copy/pasted YAML examples like the following:

object:
  child-object: {key: value, key2: {key: value}}

This is perfectly valid YAML. And technically any JSON is valid YAML too. That's part of what makes YAML so powerful—it's easy to translate between JSON and YAML, but YAML is way more readable!

So instead of using YAML like that, you can make the structure and relationships so much more apparent by formatting it like so:

object:
  child-object:
    key: value
    key2:
      key: value

This format makes it much more apparent that both key and key2 are part of child-object, and the last key: value is part of key2.

In terms of Drupal, I see the confusing { } syntax used quite often in themes and library declarations. Take, for instance, a library declaration that adds in some attributes to an included JS file:

https://some-api.com/?key=APIKEY&signed_in=true: {type: external, attributes: { defer: true, async: true} }

That's difficult to read at a glance—and if you have longer key or value names, it gets even worse!

Instead, use the structured syntax for a more pleasant experience (and easier git diff ability):

https://some-api.com/?key=APIKEY&signed_in=true:
  type: external
  attributes:
    defer: true
    async: true

You really only need to use the { } syntax for objects if you're defining an empty object (one without any keys or subelements):

# Objects.
normal-object:
  key: value
empty-object: { }

# Arrays.
normal-array:
  - item
empty-array: [ ]

I've worked with a lot of YAML in the past few years, especially in my work writing Ansible for DevOps. It's a great structured language, and the primary purpose is to make structured data easy to read and edit (way, way simpler than JSON, especially considering you won't need to worry about commas and such!)—so go ahead and use that structure to your advantage!

Feb 09 2017

...aka, avoid the annoying Javascript error below:

drupal.js:67
TypeError: undefined is not an object (evaluating 'entityElement
      .get(0)
      .getAttribute')

Many themers working on Drupal 8 sites have Contextual menus and Quick Edit enabled (they're present in the Standard Drupal install profile, as well as popular profiles like Acquia's Lightning), and at some point during theme development, they notice that there are random and unhelpful fatal javascript errors—but they only appear for logged in administrators.

Eventually, they may realize that disabling the Contextual links module fixes the issue, so they do so and move along. Unfortunately, this means that content admins (who tend to love things like contextual links—at least when they work) end up not being able to hover over content to edit it.

There are two ways you can make things better without entirely disabling these handy modules:

  1. Apply the patch from this Drupal.org issue: contextual.js and quickedit.js should fail gracefully, with useful error messages, when Twig templates forget to print attributes.
  2. Make sure you always print {{ attributes }} somewhere on the wrapping element in all node templates, along with {{ title_prefix }} and {{ title_suffix }}. If you don't, the contextual links module won't be able to inject the classes it needs to add the contextual link. And until the above patch hits Drupal core, Javascript will break on any page where your template is used and a visiting user has permission to use contextual links!

As one quick example, I was working on a node template for a bootstrap theme, and it looked something like:

  <div class="col-md-6">
    <h2 class="lead">{{ label }}</h2>
    <p class="small">{{ node.getCreatedTime() | date("F d, Y") }}</p>
    <div class="body">
      {{ content.body }}
    </div>
    {{ content|without('body') }}
  </div>

To fix this so the contextual link displays to the right of the title, I modified the template as in number 2 above, to look like:

<div{{ attributes }}>
  <div class="col-md-6">
    {{ title_prefix }}
    <h2 class="lead">{{ label }}</h2>
    {{ title_suffix }}
    <p class="small">{{ node.getCreatedTime() | date("F d, Y") }}</p>
    <div class="body">
      {{ content.body }}
    </div>
    {{ content|without('body') }}
  </div>
</div>

Now, I get the handy little contextual link widget, and I can happily go about editing nodes within the context of the page I'm on (instead of digging through the admin content listings for the node!):

Quick edit contextual link in Drupal 8

Note also the {{ content|without('body') }}—it's always important to render the entire {{ content }} element somewhere (even if you exclude all the other fields with without()) so that cache tags bubble correctly—see a related core issue I opened a week or so ago: Bubbling cache tag metadata when rendering nodes in preprocess functions is difficult.

Jan 31 2017

BLT - Setup complete on Windows 10

Quite often, I get inquiries from developers about how to get Drupal VM working on Windows 10—and this is often after encountering error after error due to many different factors. Just for starters, I'll give a few tips for success when using Drupal VM (or most any Linux-centric dev tooling or programming languages) on Windows 10:

  • If at all possible, run as much as possible in the Windows Subsystem for Linux with Ubuntu Bash. Some things don't work here yet (like calling Windows binaries from WSL), but hopefully a lot will be improved by the next Windows 10 update (slated for Q2 2017). It's basically an Ubuntu 14.04 CLI running inside Windows.
  • When using Vagrant, run vagrant plugin install vagrant-vbguest and vagrant plugin install vagrant-hostsupdater. These two plugins make a lot of little issues go away.
  • If you need to use SSH keys for anything, remember that the private key (id_rsa) is secret. Don't share it out! The public key (e.g. id_rsa.pub) can be shared freely and should be added to your GitHub, Acquia, etc. accounts. And you can take one public/private key pair and put it anywhere (e.g. copy it from your Mac to PC, inside a VM, wherever). Just be careful to protect the private key from prying eyes.
  • If you're getting errors with any step along the way, copy out parts of the error message that seem relevant and search Google and/or the Drupal VM issue queue. Chances are 20 other people have run into the exact problem before. There's a reason I ask everyone to submit issues to the GitHub tracker and not to my email!

Now, down to the nitty-gritty. One group of developers had a requirement that everyone only use Windows 10 to do everything. On most projects I'm involved with, at least one or two developers will have a Linux or macOS environment, and that person would be the one to set up BLT.

But if you need to set up BLT and Drupal VM entirely within Windows, there are a few things you need to do unique to the Windows environment, due to the fact that Windows handles CLIs, line endings, and symlinks differently than other OSes.

I created a video/screencast of the entire process (just to prove to myself it was reliably able to be rebuilt), which I've embedded below, and I'll also post the detailed step-by-step instructions (with extra notes and cautionary asides) below.

Video / Screencast

[embedded content]

Step-by-step instructions

  1. Install the Windows Subsystem for Linux with Ubuntu Bash.
  2. Install Vagrant.
    1. You should also install the following: vagrant plugin install vagrant-vbguest and vagrant plugin install vagrant-hostsupdater. This helps make things go more smoothly.
  3. Install VirtualBox.
  4. Open Ubuntu Bash.
  5. Install PHP and Composer (no need for Node.js at this time) following these BLT Windows install directions.
  6. Set up your Git username and password following these BLT directions.
  7. Run the commands inside the BLT - creating a new project guide.
    1. Note that the composer create-project command could take a while (5-10 minutes usually, but could be slower).
    2. If it looks like it's not really doing anything, try pressing a key (like down arrow); the Ubuntu Bash environment can get temporarily locked up if you accidentally scroll down.
  8. When you get to the blt vm step, run that command, but expect it to fail with a red error message. (As of the Windows 10 Anniversary Update, the WSL can't easily call out to Windows executables from the Ubuntu Bash environment, so it fails to see that VirtualBox is installed in Windows; it can only see executables in the Ubuntu virtual environment.)
  9. Install Cmder (preferred), Cygwin, or Git for Windows.
  10. Open Cmder.
    1. You need to run Cmder as an administrator (otherwise BLT's Composer-based symlinks go nuts). In Cmder, right-click on the toolbar, click 'New Console...', then check the 'Run as administrator' checkbox and click Start.
    2. You can use other Bash emulators for this (e.g. Cygwin, Git Bash, etc.) as long as they support SSH and are run as Administrator.
    3. When Microsoft releases the Windows 10 update post-Anniversary-update, the WSL might be able to do everything. But right now it's close to impossible to reliably call Windows native exe's from Ubuntu Bash, so don't even try it.
  11. cd into the directory created by the composer create-project command (e.g. projectname).
    1. Note that Ubuntu Bash's home directory is located in your Windows user's home directory, in a path like C:\Users\[windows-username]\AppData\Local\lxss\home\[ubuntu-username].
  12. Run vagrant up to build Drupal VM.
    1. Note that it will take anywhere from 5-25 minutes to bring up Drupal VM, depending on your PC's speed and Internet connection.
  13. Once vagrant up completes, run vagrant ssh to log into the VM.
  14. From this point on, all or most of your local environment management will take place inside the VM!
  15. Make sure you add your SSH private key to the Vagrant user account inside Drupal VM (so you can perform actions on the codebase wherever you host it (e.g. BitBucket, GitHub, GitLab, etc.) and through Acquia Cloud).
    1. You can create a new key pair with ssh-keygen -t rsa -b 4096 -C "[email protected]" if you don't have one already.
    2. Make sure your SSH public key (id_rsa.pub contents) is also in your Acquia Cloud account (under 'Credentials'), and GitHub (under 'SSH Keys') or whatever source repository your team uses.
  16. Inside the VM, cd into the project directory (cd /var/www/[projectname]).
  17. Delete Composer's vendor/bin directory so Composer can set it up correctly inside the VM: sudo rm -rf vendor/bin.
  18. Run composer install.
    1. If this fails the first time, you may be running a version of BLT that requires this patch. If so, run the command sudo apt-get install -y php5.6-bz2 (using the php_version you have configured in box/config.yml in place of 5.6).
    2. If this has weird failures about paths to blt or phing, you might not be running Cmder as an administrator. Restart the entire process from #12 above.
  19. Run blt local:setup to install the project locally inside Drupal VM.
    1. If this fails with a warning about insecure_private_key or something along those lines, you need to edit your blt/project.local.yml file and update the drush.aliases.local key to self (instead of projectname.local). BLT presumes you'll run blt commands outside the VM, but when you run them inside, you need to override this behavior.
  20. On your host machine, open up a browser and navigate to http://local.projectname.com/ (where the URL is the one you have configured in blt/project.yml under project.local.hostname).

CONGRATULATIONS! If all goes well, you should have a BLT-generated project running inside Drupal VM on your Windows 10 PC! You win the Internet for the day.

Next Steps

If you want to push this new BLT-generated project to a Git repository, make sure you have a public/private key pair set up inside Drupal VM, then in the project root, add the remote Git repository as a new remote (e.g. git remote add origin [email protected]:path/to/repo.git), then push your code to the new remote (e.g. git push -u origin master).

Now, other developers can pull down the codebase, follow a similar setup routine to run composer install, then bring up the VM and start working inside the VM environment as well!

Jan 16 2017

I recently needed to re-save all the nodes of a particular content type (after I had added some fields and default configuration) as part of a Drupal 8 site update and deployment. I could go in after deploying the new code and configuration, and manually re-save all content using the built-in bulk operation available on the /admin/content page, but that would not be ideal, because there would be a period of time where the content isn't updated on the live site—plus, manual processes are fragile and prone to failure, so I avoid them at all costs.

In my Drupal 8 module, called custom, I added the following update hook, inside custom.install:

<?php
// Add this line at the top of the .install file.
use Drupal\node\Entity\Node;

/**
 * Re-save all Article content.
 */
function custom_update_8002() {
  // Get an array of all 'article' node ids.
  $article_nids = \Drupal::entityQuery('node')
    ->condition('type', 'article')
    ->execute();

  // Load all the articles.
  $articles = Node::loadMultiple($article_nids);
  foreach ($articles as $article) {
    $article->save();
  }
}
?>
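
After you deploy the new code, the update hook runs along with any other pending database updates, e.g. via Drush (assuming a hypothetical @site alias):

drush @site updb -y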

Though Drupal 8's configuration management system allows almost any config changes to be made without update hooks nowadays... I find I still need to use update hooks on many sites to deploy updates that affect the way a theme or a view displays content on the site (especially when adding new fields to existing content types).

Jan 14 2017

After migrating an older Drupal 6 site with 20,000 media items to Drupal 8, I found a strange problem with image uploads. On the old Drupal 6 site, using Image FUpload and Adobe Flash, I could upload up to 99 images in one go. On the new Drupal 8 site, I was only able to upload 20 images at a time, even though I didn't see an error message or any other indication that the rest of the images I had selected through the Media Image upload form were not successfully added.

I could choose 21, 40, or 500 images, but only 20 were ever added to an album at any time.

There were no apparent warnings on the screen, so I just assumed there was some random bug in the Media Image Entity or Media module suite that limited uploads to 20 files at a time.

But due to an unrelated error, I glanced at the PHP logs one day, and noticed the following error message:

[Fri Dec 23 22:05:53.403709 2016] [:error] [pid 29341] [client ip.address.here:41316] PHP Warning:  Maximum number of allowable file uploads has been exceeded in Unknown on line 0, referer: https://example.com/entity-browser/modal/image_browser?uuid=b6e0c064758fc25f517d276e265585959f18361a&original_path=/node/add/gallery

Quite enlightening!

So looking at the PHP docs for file uploads, it seems the default limit is 20 files. That's a bit low for a photo sharing site, so I decided to lift that limit to 250 for the benefit of family members who are a bit trigger-happy when taking pictures!

I edited php.ini and set the following directive:

max_file_uploads = 250
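
To confirm the new limit is active after restarting the web server (note that the CLI can read a different php.ini than Apache or PHP-FPM, so make sure you're checking the right one):

php -i | grep max_file_uploads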

And now I can upload to my heart's content, without manually batching uploads in groups of 20!

Jan 13 2017

One project I'm working on needed a Behat test added to test whether a particular redirection works properly. Basically, I wanted to test for the following:

  1. An anonymous user accesses a file at a URL like http://www.example.com/pictures/test.jpg
  2. The anonymous user is redirected to the path http://www.example.com/sites/default/files/pictures/test.jpg

Since the site uses Apache, I added the actual redirect to the site's .htaccess file in the docroot, using the following Rewrite configuration:

<IfModule mod_rewrite.c>
  RewriteEngine on

  # Rewrite requests for /profile_images to public files directory location.
  RewriteRule ^pictures/(.*)$ /sites/default/files/pictures/$1 [L,NC,R=301]
</IfModule>

Testing with curl --head, I could see that the proper headers were set—Location was set to the correct redirected URL, and the response gave a 301. So now I had to add the Behat test.
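
The check looked something like this (illustrative output for the example URLs above):

$ curl --head http://www.example.com/pictures/test.jpg
HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/sites/default/files/pictures/test.jpg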

First, I created a .feature file to contain Redirection tests for my project (since it uses Acquia's BLT, I placed the file in tests/behat/features so it would automatically get picked up when running blt tests:behat):

Feature: Redirects
  In order to verify that redirects are working
  As a user
  I should be able to load a URL
  And I should be redirected to another URL

  # Note: Can't use "When I visit 'path'" because it expects a 200.
  @mink:goutte
  Scenario: Test a Picture redirect.
    Given I am not logged in
    When I do not follow redirects
      And I am on "/pictures/xyz.jpg"
    Then I am redirected to "/sites/default/files/pictures/xyz.jpg"

There are a couple unique aspects of this feature file which are worth highlighting:

  1. Testing redirects requires the GoutteDriver, so I've tagged the scenario (@mink:goutte) to indicate this requirement—see notes on the page behat: intercepting the redirection with behat and mink.
  2. I had to add the line When I do not follow redirects to make sure I can intercept the browser and tell it to not follow redirects automatically. By default, it will follow redirects on the And I am on [path] line, and that would make my ability to test the actual redirection impossible.
  3. The Then I am redirected to [path] line is where the magic happens. I had to write a custom Behat step definition to teach Behat how to test the redirection.

If I ran the test at this point, it would fail on the When I do not follow redirects line, because Behat doesn't yet know how to not follow redirects. So I need to teach it by adding two step definitions to my FeatureContext class. Here's the class in its entirety:

<?php

namespace Drupal;

use Drupal\DrupalExtension\Context\RawDrupalContext;
use Behat\Behat\Context\SnippetAcceptingContext;

// Needed for assert* functions from PHPUnit.
require_once '../../../../vendor/phpunit/phpunit/src/Framework/Assert/Functions.php';

/**
 * FeatureContext class defines custom step definitions for Behat.
 */
class FeatureContext extends RawDrupalContext implements SnippetAcceptingContext {

  /**
   * @When /^I do not follow redirects$/
   */
  public function iDoNotFollowRedirects() {
    $this->getSession()->getDriver()->getClient()->followRedirects(false);
  }

  /**
   * @Then /^I (?:am|should be) redirected to "([^"]*)"$/
   */
  public function iAmRedirectedTo($actualPath) {
    $headers = $this->getSession()->getResponseHeaders();
    assertTrue(isset($headers['Location'][0]));
    $redirectComponents = parse_url($headers['Location'][0]);
    assertEquals($redirectComponents['path'], $actualPath);
  }

}

Notes for the customized FeatureContext:

  1. I had to require PHPUnit's functions to be able to assert certain things in the redirect test step definition, so I've manually loaded that file with require_once.
  2. iDoNotFollowRedirects() allows me to disable Goutte's automatic redirect handling.
  3. iAmRedirectedTo() is where the magic happens:
    1. The response headers for the original request are retrieved.
    2. The Location URL is extracted from those headers.
    3. The path (sans protocol/domain/port) is extracted.
    4. I assert that the path from the Location header matches the path that is being tested.

Now, when I run the test on my local environment (which runs Apache, so the .htaccess redirect is used), I get the following result:

1 scenario (1 passed)
4 steps (4 passed)
0m11.24s (31.16Mb)

Unfortunately, I then realized that the tests in our CI environment are currently using Drush's embedded PHP webserver, which doesn't support/use Apache .htaccess files. Therefore I'll either have to set up the CI environment using Apache instead of Drush, or use some other means of testing for the proper redirection (e.g. using PHPUnit to verify the right syntax appears inside the .htaccess file directly).
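
If I go the PHPUnit route, a minimal sketch might look like this (the test class name, file path, and assertion string are all illustrative—adjust to your project's layout):

<?php

use PHPUnit\Framework\TestCase;

class HtaccessRedirectTest extends TestCase {

  public function testPicturesRedirectRuleExists() {
    // Path to the docroot .htaccess, relative to this test file.
    $htaccess = file_get_contents(__DIR__ . '/../../docroot/.htaccess');
    $this->assertContains('/sites/default/files/pictures/$1', $htaccess);
  }

}
?>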

Jan 11 2017
Jan 11

Yesterday I presented Drupal VM Tips & Tricks at the DrupalDC meetup, remotely. I didn't have a lot of time to prepare anything for the presentation, but I thought it would be valuable to walk through some of the neat features of Drupal VM people might not know about.

Here's the video from the presentation:

[embedded content]

Some relevant links mentioned during the presentation:

Dec 30 2016
Dec 30

Note: Extra special thanks to Doug Vann for providing motivation to finally post this blog post!

[embedded content]

Early in 2016, when the Search API and Solr-related modules for Drupal 8 were in early alpha status, I wrote the blog post Set up a faceted Apache Solr search page on Drupal 8 with Search API Solr and Facets.

That post was helpful during the painful months when Solr search in Drupal 8 was still in a very rough state, but a lot has changed since then, and Solr-based search in Drupal 8 is in a much more stable (and easy-to-configure) place today, so I thought I'd finally write a new post to show how simple it is to build faceted Solr search in Drupal 8, almost a year later.

Build a local development environment with Apache Solr

These days I always build and maintain Drupal sites locally using Drupal VM; doing this allows me to set up a development environment exactly how I like it, and doing things like adding Apache Solr is trivial. So for this walkthrough, I'll start from square one, and show you how to start with absolutely nothing, and build faceted search on a new Drupal 8 site, using Drupal VM and a Composer file:

Download Drupal VM and follow the Quick Start Guide, then add the following config.yml inside the Drupal VM folder to ensure Apache Solr is installed:

---
# Make sure Apache Solr is installed inside the VM.
installed_extras:
  - drush
  - mailhog
  - solr

# Configure Solr for the Search API module.
post_provision_scripts:
  - "../examples/scripts/configure-solr.sh"

# Use custom drupal.composer.json to build and install site.
build_composer: true
build_composer_project: false
drupal_core_path: "{{ drupal_composer_install_dir }}/docroot"
drupal_composer_dependencies:
  - "drupal/devel:1.x-dev"
  - "drupal/search_api:^1.0"
  - "drupal/search_api_solr:^1.0"
  - "drupal/facets:^1.0"

We're going to use Drupal VM's Composer integration to generate a Drupal site that has a Composer-based Drupal install in the synced folder path (by default, in a drupal subdirectory inside the Drupal VM folder).

The drupal_composer_dependencies variable tells Drupal VM to composer require the modules necessary to get Search API Solr and Facets. The post_provision_scripts script is included with Drupal VM, and will configure the version of Apache Solr installed with Drupal VM appropriately for use with the Search API Solr module.

Copy the example.drupal.composer.json to drupal.composer.json, then run vagrant up to build the local development environment and download all the Drupal code required to get started with search.

Note: If you are setting search up on an existing site or don't want to use Drupal VM, download and install the Search API, Search API Solr, and Facets modules manually, and make sure you have Apache Solr running and a search core configured with the latest Search API Solr module version's configuration.

Install the modules

If you want to install the modules via Drupal's UI:

  1. Go to http://drupalvm.dev/, then log in as the administrator (default username and password is admin/admin).
  2. Go to the Extend page (/admin/modules), and enable "Search API", "Facets", "Solr search", and "Solr Search Defaults".

Enable Search API Solr and Facet modules in Drupal 8 UI

If you want to install the modules via Drush:

  1. Run drush @drupalvm.drupalvm.dev en -y facets search_api search_api_solr search_api_solr_defaults

You should also uninstall the core 'Search' module if it's installed—it is not required for Search API or Facets, and will continue to store extra junk in your site's database if it is installed.
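
Assuming the same Drush alias as above (and the core module's machine name, search), that's a one-liner:

drush @drupalvm.drupalvm.dev pmu -y search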

Configure the Solr server

Visit the Search API configuration page, and edit the default Solr Server, making the following changes:

  • Change 'Solr core' to collection1 (default is d8).

Search API Solr search core configuration

At this point, on the server's status page (/admin/config/search/search-api/server/default_solr_server), you should see a message "The Solr server could be reached" and "The Solr core could be accessed":

Apache Solr server connection details in Search API configuration

Once this is done, uninstall the Solr Search Defaults module (drush @drupalvm.drupalvm.dev pmu -y search_api_solr_defaults); this module is no longer required after initial install, as the configuration for your Solr server is now part of the site's active configuration store and won't be deleted.

Configure the Solr index

The Solr Search Defaults module creates a default content index containing all published nodes on the website. In our case, that means all Basic pages and Articles would be included.

You don't need to change anything in the index configuration, but if you want to have a poke around and see how it's set up and what options you have in terms of the data source, fields, or processors, visit /admin/config/search/search-api/index/default_solr_index/edit.

Add default content (if you don't have any)

Assuming you built this site using Drupal VM, it's likely the site is barren, with no content whatsoever to be indexed. To fix that, you can use the Devel module's handy companion, Devel generate:

  1. Enable Devel generate: drush @drupalvm.drupalvm.dev en -y devel_generate
  2. Generate dummy content: drush @drupalvm.drupalvm.dev generate-content 100
    • Note: At the time of this writing, the Drush command didn't result in generated content. Use the UI at /admin/config/development/generate/content if the Drush command isn't generating content.

Now you have a bunch of nodes you can index and search!

Confirm Search indexing is working

It's best to let your production Solr servers wait a couple minutes before freshly-indexed content is made available to search; this way searches are a little more performant, as Solr can batch its update operations. But for local development, it's nice to have the index up-to-date as quickly as possible for testing purposes, so Drupal VM's configuration tells Solr to update its search index immediately after Drupal sends any content.

So, if you generated content with Devel generate, then visit the Index status page for the default search index (/admin/config/search/search-api/index/default_solr_index), you should see all the content on the site indexed:

100 percent of content indexed in Search API

If you're working on an existing site, or if all the content isn't yet indexed for some reason, you can manually index all content by clicking the 'Index now' button and waiting for the operation to complete.

Note that indexing speed can vary depending on the complexity of your site. If you have a site with many complex node types and hundreds of thousands or millions of nodes, you'll need to use more efficient methods for indexing, or else you'll be waiting months for all your content to be searchable!
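
If you'd rather kick off indexing from the command line instead of the UI, Search API's Drush integration includes an indexing command—a hedged example, since the exact command name may vary by module version (check drush help):

drush @drupalvm.drupalvm.dev sapi-i default_solr_index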

Make a Faceted Solr Search View

The Solr Search Defaults module creates an example Views-based search page, which you can access at /solr-search/content. It should already be functional, since your content is indexed in Solr (try it out!):

Function Search Content page with Search API Solr in Drupal 8

For many sites, this kind of general keyword-based site search is all that you'd ever need. But we'll spruce it up a bit and make it more functional by changing the path and adding a Content Type Facet.

First, modify the view by visiting /admin/structure/views/view/solr_search_content:

  1. Change the Title to 'Search' (instead of 'Search Content').
  2. Change the Path to '/search' (instead of '/solr-search/content').
  3. Click 'Save'.

Second, create a Content Type facet by visiting /admin/config/search/facets:

  1. Click 'Add facet'.
  2. Choose the 'View Solr search content, display Page' Facet source (this is the view you just edited).
  3. Select 'Content type (type)' for the Facet's Field.
  4. Name the facet 'Search Facet - Content type' (this will help with placing a block later).
  5. Click 'Save'.
  6. On the Facet edit page:
    1. Check the box to 'Show the amount of results'.
    2. Check the 'List item label' checkbox (this will make the facet show 'Basic page' instead of 'page'—the label instead of the machine name for each item).
    3. Click 'Save'.

The facet is now ready to be placed in your theme so it will appear when the Search view is rendered. Visit the Block layout configuration page (/admin/structure/block), and click 'Place block' in the region where you want the facet to appear. In my theme, I chose the 'Sidebar first' region.

Find 'Search Facet - Content type' (the Facet you just created) and click 'Place block'. Then set the block title to something like 'Filter by Type', and click 'Save block'. You don't need to set specific visibility constraints for the block because the Facet is set to not display at all if there aren't search results on the page that require it to be shown.

Click 'Save blocks' on the Block layout page, and then visit your sitewide search page at /search:

Solr Search page with a basic preconfigured Facet in Drupal 8

If you perform a search, then you'll notice the facet's result counts will adjust accordingly:

Facets with result count for Drupal 8 Search API keyword search page

At this point, you should have a fully operational Faceted Solr search on your Drupal 8 site. From here, you can customize the search page further, work on outputting different results (maybe a content teaser instead of the full rendered content?), and add more facets (date, author, taxonomy term, etc.) to make the search work exactly as you'd like!

Next steps

If your hosting provider doesn't provide an Apache Solr search core for your site to use, you might want to consider using Hosted Apache Solr to host your site's Solr search core; it uses a similar setup to what's used in Drupal VM, and I can vouch for it, since I run the service :)

Note that the Search API modules are still in beta as of this blog post; minor details may result in differences from the screenshots and instructions above.

Dec 20 2016
Dec 20

Limiting the amount of surprises you get when developing a large-scale Drupal project is always a good thing. And to that end, Acquia's BLT (Build and Launch Tools) wisely chooses to leave Drupal VM alone when updating BLT itself. Updates to Drupal VM can and should be done independently of install profile and development and deployment tooling.

In a BLT-based project, Drupal VM is typically pulled in as a Composer dependency, e.g.:

composer require geerlingguy/drupal-vm:~4.0

But this creates a conundrum: how do you upgrade Drupal VM within a project that uses BLT and has Drupal VM as one of its composer dependencies? It's actually quite simple—and since I just did it for one of my projects, I thought I'd document the process here for future reference:

  1. In your project's root, require the newer version of Drupal VM: composer require --dev geerlingguy/drupal-vm:~4.0 (in my case, I was updating from the latest 3.x release to 4.x).
  2. Edit your box/config.yml file—it's best to either use BLT's current config.yml template as a guide for updating yours, or read through the Drupal VM release notes to find out what config variables need to be added, changed, or removed.
  3. Commit the updates to your code repository.
  4. (If updating major versions) Instruct all developers to run vagrant destroy -f, then vagrant up to rebuild their local environments fresh, on the new version. (If updating minor versions) Instruct all developers to run vagrant provision to update their environments.

There are a lot of great new features in Drupal VM 4, like the ability to switch PHP versions in the VM on-the-fly. This is great for those testing migrations from PHP 5.6 to 7.0 or even 7.1! There's never been an easier and quicker way to update your projects to the latest VM version.

Dec 15 2016
Dec 15
For a recent project, I needed to use the popular Bootstrap theme, using the Sass starterkit for easy CSS management.
Dec 10 2016
Dec 10

Drupal VM 4.0.0 Release Tag - We've Got Company on GitHub

Seven months after Drupal VM 3 introduced PHP 7.0 and Ubuntu 16.04 as the default, as well as more stable team-based development environment tooling, Drupal VM 4 is here!

Thanks especially to the efforts of Oskar Schöldström and Thom Toogood, who helped push through some of the more tricky fixes for this release!

If you're not familiar with Drupal VM, it's a tool built with Ansible and Vagrant that helps build Drupal development environments. The fourth release brings with it even more flexibility than before. Not only can you choose between Ubuntu and CentOS, Apache and Nginx, MySQL and PostgreSQL, Memcached and Redis... you can now seamlessly switch among PHP 5.6, 7.0, and 7.1—without having to recreate your entire development environment!

See the 4.0.0 release notes for all the details—here are the highlights:

  • Drush is now optional (you can use the version included with your project, or not use it at all!)
  • PHP 5.6, 7.0 and 7.1 are supported—and switching between them is easier than ever. Just update php_version and run vagrant provision to switch!
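
For example, here's a minimal config.yml change to hop to PHP 7.1 (a sketch; the variable name comes straight from Drupal VM's config):

---
php_version: "7.1"

Run vagrant provision afterwards, and the environment switches PHP versions in place—no vagrant destroy required.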

Download Drupal VM and try out one of the most popular Vagrant-based development environments.

Oct 04 2016
Oct 04

tl;dr: If you want to skip the 'how-to' part and explanation, check out the pix_migrate example Drupal 8 migration module on GitHub.

For a couple years, I wanted to work on my first personal site migration into Drupal 8, for the last Drupal 6 site I had running on my servers. I've run a family photo/audio/video sharing website since 2009, and through the years it has accumulated hundreds of galleries, and over 20,000 media items.

Family Photos and Events website display - desktop and mobile
The home page of the Drupal 8 photo sharing website.

I did a lot of work to get the Drupal 6 site as optimized and efficient as possible, but after official support for Drupal 6 was over, the writing was on the wall. I was fortunate to have a few days of time when I could rebuild the site in Drupal 8, work on a migration, and build a new theme for the site (fully responsive, with retina/responsive images, modern HTML5 media embeds, etc.!). I already wrote a couple other posts detailing parts of the site build and launch:

And this will be the third post with some takeaways/lessons from the site build. I mostly write these as retrospectives for myself, since I'll likely be building another five or ten migrations for personal Drupal projects, and it's easier to refer back to a detailed blog post than to some old source code. Hopefully it helps some other people as well!

Why I Didn't Use Core's 'Migrate Drupal'

Migrate Drupal is a built-in migration path for Drupal 8 that allows some Drupal 6 or 7 sites to be migrated/upgraded to Drupal 8 entirely via the UI. You just tell the migration where your old database is located, click 'Next' a few times, then wait for the migration to complete. Easy, right?

As it turns out, there are still a lot of things that aren't upgraded, like most file attachments, Views, and of course any module data for modules that haven't been ported to Drupal 8 yet (even for those that are, many don't have migration paths yet).

The module is still in 'experimental' status, and for good reason—for now, unless you have a very simple blog or brochure-style Drupal 6 or 7 site, it's a good idea to spin up a local Drupal 8 site (might I suggest Drupal VM?) and test your migration to see how things go before you commit fully to the Drupal 8 upgrade process.

Building Custom Migrations

Since Migrate Drupal was ruled out, I decided to build my own individual migrations, migrating all the core entities out of Drupal 6 and into Drupal 8. Because this was an SQL/database-based migration, almost everything I needed was baked into Drupal core. I also added a few contributed modules to assist with migrations:

  • Migrate Plus - Adds a few conveniences to the migration pipeline, and also includes the best (and most up-to-date) migration example modules.
  • Migrate Tools - Provides all the drush commands to make working with Migrate through a CLI possible.

Other than that, before working on any migration, I had to rebuild the structure of the website in Drupal 8 anew. For me, this is actually a fun process, as I can rebuild the content architecture from scratch, and ditch or tweak a lot of the little things (like extra fields or features) that I found were not needed from the Drupal 6 site.

One of my favorite features of Drupal 8 is the fact that almost everything you need for any given content type is baked into core. Besides adding a number of media-related modules (see the official Drupal 8 Media Guide for more) to support image, video, and audio entities (which are related to nodes), I only added a few contrib modules: Honeypot for user registration spam prevention, a nicer Admin Toolbar, and a helpful bundle of Twig extensions via Twig Tweak that made theming (especially embedding views) easier.

Here's a high-level overview of the content architecture that drives the site:

Drupal image gallery website content architecture diagram

Or in text form (with a chain of dependencies):

  • Users (since all content belongs to a particular User).
  • Files (in Drupal, all files should be imported first, so they can be attached to entities later).
  • Names (this is a taxonomy, with a bunch of names that can be attached to photos, either via keyword match or by manually tagging images).
  • Images (a Media Entity, which can have one User reference, one File reference, and one or more Name references).
  • Galleries (a Content Type, which can have one User reference, and one or more Image references).

Once the site's structure was built out, and the basic administrative and end-user UX was working (this is important to make it easier to evaluate how the migration's working), I started building the custom migrations to get all the Drupal 6 data into Drupal 8.

Getting Things in Order

Because Images depend on Users, Files, and Names, I had to set up the migrations so they would always run in the right order. It's hard to import an Image when you don't have the File that goes along with it, or the User that created the Image! Also, Galleries depend on Images, so I had to make sure they were imported last.
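
When it's time to actually run everything, Migrate Tools respects that ordering if you run the migrations one at a time—a quick sketch, using the migration IDs defined later in this post:

drush migrate-import pix_user
drush migrate-import pix_file
drush migrate-import pix_name
drush migrate-import pix_image
drush migrate-import pix_gallery

# Or run the whole 'pix' group and let the declared
# migration_dependencies enforce the order:
drush migrate-import --group=pix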

Knowing the content structure of the new site, and having the database of the old site already set up, I added both databases in the site's settings.php file like so:

<?php
/**
 * Database settings.
 */
$databases['default']['default'] = array(
  'database' => 'pix_site',
  'username' => 'pix_site',
  'password' => 'supersecurepassword',
  'prefix' => '',
  'host' => '127.0.0.1',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);
$databases['migrate']['default'] = array(
  'database' => 'pix_site_old',
  'username' => 'pix_site_old',
  'password' => 'supersecurepassword',
  'prefix' => '',
  'host' => '127.0.0.1',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);
?>

I decided to name my custom migration module pix_migrate, so I created a directory with that name, with an info file pix_migrate.info.yml that describes the module and its dependencies:

name: Pix Migrate
type: module
description: 'Migrations for media gallery site Drupal 6 to Drupal 8 upgrade.'
package: custom
version: '8.0'
core: '8.x'
dependencies:
  - migrate_plus

Once I had that structure set up, I could go to the Extend page or use Drush to enable the module: drush en -y pix_migrate; this would also enable all the other required Migrate modules, and at this point, I was ready to start developing the individual migrations!

Aside: Developing and Debugging Migrations

During the course of building a migration, it's important to be able to quickly test mappings, custom import plugins, performance etc. To that end, here are a few of the techniques you can use to make migration development easier:

  • First things first: for every module that generates configuration, the module should also clean up after itself. For a Migration-related module, that means adding a hook_uninstall() implementation in the module's .install file—see this hook_uninstall() example for details, and the sketch after this list.
  • When you test something that relies on configuration in your module's 'install' directory (as we'll see Migrations do), you'll find it necessary to reload that configuration frequently. While you could uninstall and reinstall your module to reload the configuration (drush pmu -y pix_migrate && drush en -y pix_migrate), this is a little inefficient. Instead, you can:
  • Use one of the following modules to allow the quick refresh/reload/re-sync of configuration from your module's install directory (note that there are currently a number of solutions to this generic problem of 'how do I update config from my modules?'; it will be some time before the community gravitates towards one particular solution):
  • When running migrations, there are some really helpful Drush commands provided by the Migrate Tools module, along with some flags that can help make the process go more smoothly:
    • When running a migration with drush migrate-import (drush mi), use --limit=X (where X is an integer) to import just X items (instead of the full set). (Note that there's currently a bug that makes multiple runs with --limit not work like you'd expect.)
    • When running a migration with drush migrate-import, use --feedback=X (where X is an integer) to show a status message after every X items has been imported.
    • If a migration errors out or seems to get stuck, use drush migrate-stop, then drush migrate-status to see if the migration kicks back to 'Idle'. If it gets stuck on 'Stopping', and you're sure the Migration's not doing anything anymore (check CPU status), then you can use drush migrate-reset-status to reset the status to 'Idle'.
  • If a migration seems to be very slow, you can use XHProf to profile Migrations via Drush, and hopefully figure out what's causing the slowness.
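
As promised above, here's a minimal hook_uninstall() sketch for pix_migrate.install that removes this module's migration configuration on uninstall (the migration IDs match the ones defined throughout this post):

<?php

/**
 * Implements hook_uninstall().
 */
function pix_migrate_uninstall() {
  // Delete each migration's configuration so a reinstall imports
  // fresh copies from config/install.
  $migrations = ['pix_user', 'pix_file', 'pix_name', 'pix_image', 'pix_gallery'];
  foreach ($migrations as $id) {
    \Drupal::configFactory()
      ->getEditable('migrate_plus.migration.' . $id)
      ->delete();
  }
}
?>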

Migration module file structure

As stated earlier, we have a folder with an .info.yml file, but here's the entire structure we'll be setting up in the course of adding migrations via configuration:

pix_migrate/
  config/
    install/
      [migrations go here, e.g. migrate_plus.migration.[migration-name].yml]
  src/
    Plugin/
      migrate/
        source/
          [Source plugins go here]
  pix_migrate.info.yml
  pix_migrate.install

Our first migration: Users

Note: ALL the code mentioned in this post is located in the pix_migrate repository on GitHub. This is the module that I used to do all my imports, using Drupal 8.1.x.

Most of the time, the first migration you'll need to work on is the User migration—getting all the users who authored content into the new system. Since most other migrations depend on the authors being imported already, I usually choose to build this migration first. It also tends to be one of the easiest migrations, since Users usually don't have to be related to other kinds of content.

I'll name all the migrations using the pattern pix_[migration_name] for the machine name, and 'Pix [Migration Name]' for the label. Starting with the User migration, here's how I defined the migration itself, in the file pix_migrate/config/install/migrate_plus.migration.pix_user.yml:

id: pix_user
migration_group: pix
migration_tags: {}
label: 'Pix User'

source:
  plugin: pix_user

destination:
  plugin: 'entity:user'

process:
  name: name
  pass: pass
  mail: mail
  init: init
  status: status
  created: created
  access: access
  login: login
  timezone: timezone_name
  roles:
    plugin: default_value
    default_value: 1

migration_dependencies: {}

dependencies:
  module:
    - pix_migrate

Note that all migration configurations follow the basic pattern:

# Meta info, like id, group, label, tags.

source:
  # Source definition or plugin.

destination:
  # Destination definition or plugin.

process:
  # Field mappings.

# migration_dependencies and dependencies.

For more documentation on the structure of the migration itself, please read through the example modules included with the Migrate Plus module.

For the User migration, the configuration is telling Drupal, basically:

  1. Use the pix_user Migration source plugin (we'll create that later).
  2. Save user entities (using the entity:user Migration destination plugin).
  3. Map fields to each other (e.g. name to name, mail to mail, etc.).
  4. This migration requires the pix_migrate module, but doesn't require any other migrations to be run before it.

We could've used a different source plugin (e.g. the csv or xml plugin) if we were importing from different data sources, but since we are importing from an SQL database, we need to define our own plugin (don't worry, it's just like Drupal 7 Migrations, but even simpler!), so we'll create that inside pix_migrate/src/Plugin/migrate/source/PixUser.php:

<?php

namespace Drupal\pix_migrate\Plugin\migrate\source;

use Drupal\migrate\Plugin\migrate\source\SqlBase;
use Drupal\migrate\Row;

/**
 * Source plugin for Pix site user accounts.
 *
 * @MigrateSource(
 *   id = "pix_user"
 * )
 */
class PixUser extends SqlBase {

  /**
   * {@inheritdoc}
   */
  public function query() {
    return $this->select('users')
      ->fields('users', array_keys($this->fields()))
      ->condition('uid', 0, '>')
      ->condition('uid', 1, '<>');
  }

  /**
   * {@inheritdoc}
   */
  public function fields() {
    $fields = [
      'uid' => $this->t('User ID'),
      'name' => $this->t('Username'),
      'pass' => $this->t('Password'),
      'mail' => $this->t('Email address'),
      'created' => $this->t('Account created date UNIX timestamp'),
      'access' => $this->t('Last access UNIX timestamp'),
      'login' => $this->t('Last login UNIX timestamp'),
      'status' => $this->t('Blocked/Allowed'),
      'timezone' => $this->t('Timezone offset'),
      'init' => $this->t('Initial email address used at registration'),
      'timezone_name' => $this->t('Timezone name'),
    ];
    return $fields;
  }

  /**
   * {@inheritdoc}
   */
  public function getIds() {
    return [
      'uid' => [
        'type' => 'integer',
      ],
    ];
  }

}

In the plugin, we extend the SqlBase class, and only have to implement three methods:

  1. query(): The query that is used to get rows for this migration.
  2. fields(): The available fields from the source data (you should explicitly list every field from which you need to get data).
  3. getIds(): Which source field (or fields) are the unique identifiers per row of source data.

There are other methods you can add to do extra work, or to preprocess data (e.g. prepareRow()), but you only have to implement these three for a database-based migration.

The User migration is pretty simple, and I'm not going to go into details here—check out the examples in the Migrate Plus module for that—but I will step through a couple other bits of the other migrations that warrant further explanation.

Second migration: Files

The File migration needs to migrate managed files from Drupal 6 to core file entities in Drupal 8; these file entities can then be referenced by 'Image' Media Entities later. The file migration is defined thusly, in pix_migrate/config/install/migrate_plus.migration.pix_file.yml:

id: pix_file
[...]
source:
  plugin: pix_file

destination:
  plugin: 'entity:file'
  source_base_path: https://www.example.com/
  source_path_property: filepath
  urlencode: true
  destination_path_property: uri

process:
  fid: fid
  filename: filename
  uri: filepath
  uid:
    -
      plugin: migration
      migration: pix_user
      source: uid
      no_stub: true
    -
      plugin: default_value
      default_value: 1
[...]

The important and unique parts are in the destination and process section, and I'll go through them here:

First, for the entity:file destination plugin, you need to define a few more properties to make sure the migration succeeds. source_base_path allows you to set a base path for the files—in my case, I needed to add the URL to the site so Migrate would fetch the files over HTTP. You could also set a local filesystem path or any other base path that's accessible to the server running the migration. I used urlencode: true to make sure special characters and spaces were encoded (otherwise some file paths would fail). Then I told the plugin to use the filepath from the source to migrate files into the uri in the destination (this is a change from Drupal 6 to Drupal 8 in the way Drupal refers to file locations).

Then, for the process, some of the fields were easy/straight mappings (File ID, name, and path—which is morphed using the rules I set in the destination settings mentioned previously). But for the uid, I had to do a little special formatting. Instead of just a straight mapping, I defined two processors—one the migration plugin, which allows me to define a migration from which a mapping should be used (the pix_user migration from earlier), and the other the default_value plugin.

In my case, I didn't import the user with uid of 1 during the pix_user migration, so I have to tell Migrate first, "don't stub out a user for missing users", then "for any files that don't have a user mapped from the old site, set a default_value of uid 1."

For the PixFile.php plugin definition (which defines the pix_file source plugin), I needed to do a tiny bit of extra work to get the filepath working with references in Drupal 8:

<?php
/**
 * {@inheritdoc}
 */
public function prepareRow(Row $row) {
  // Update filepath to remove public:// directory portion.
  $original_path = $row->getSourceProperty('filepath');
  $new_path = str_replace('sites/default/files/', 'public://', $original_path);
  $row->setSourceProperty('filepath', $new_path);

  return parent::prepareRow($row);
}
?>

If I didn't do this, the files would show up and get referenced properly for Image media entities, but image thumbnails and other image styles wouldn't be generated (they'd show a 404 error). Note that if your old site's files directory is in a site-specific folder (e.g. sites/example.com/files/), you would need to replace the path using that pattern instead of sites/default/files.
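
For example, a hedged variant of that replacement (the domain-specific path is illustrative):

<?php
// Hypothetical: the Drupal 6 site stored files in sites/example.com/files/.
$new_path = str_replace('sites/example.com/files/', 'public://', $original_path);
?>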

Third migration: Names

This is perhaps the simplest of all the migrations—please see the code in the pix_migrate repository on GitHub. It's self-explanatory, and doesn't depend on any other migrations.

Fourth migration: Image entities referencing files and names

The Image media entity migration is where the rubber meets the road; it has to migrate all the image node content from Drupal 6 into Drupal 8, while maintaining a file reference to the correct file, an author reference to the correct user, and name references to all relevant terms in the Names taxonomy.

First, let's look at the migration definition:

id: pix_image
[...]
source:
  plugin: pix_image

destination:
  plugin: 'entity:media'

process:
  bundle:
    plugin: default_value
    default_value: image
  name: title
  uid:
    -
      plugin: migration
      migration: pix_user
      source: uid
      no_stub: true
    -
      plugin: default_value
      default_value: 1
  'field_description/value': body
  'field_description/summary': teaser
  'field_description/format':
    plugin: default_value
    default_value: basic_html
  field_names:
    plugin: migration
    migration: pix_name
    source: names
  status: status
  created: created
  changed: changed
  'field_image/target_id':
    plugin: migration
    migration: pix_file
    source: field_gallery_image_fid

migration_dependencies:
  required:
    - pix_user
    - pix_file
    - pix_name
[...]

There's a bit to unpack here:

  • We're going to use a pix_image migrate source plugin to tell Migrate about source data, which we'll define later in PixImage.php.
  • We're importing Images as media entities, so we set the destination plugin to entity:media.
  • We want to save Image media entities, so we have to define the bundle (in process) with a default_value of image.
  • The uid needs to be set up just like with the pix_file migration, referring to the earlier pix_user migration mapping, but defaulting to uid 1 if there's no user migrated.
  • field_description is a complex field, with multiple possible values to map, so we can map each value independently (e.g. field_description/value gets mapped to the body text in Drupal 6, and field_description/format gets a default value of basic_html).
  • field_names, like uid, needs to refer to the pix_name migration for term ID mappings.
  • field_image/target_id needs to refer to the pix_file migration for file ID mappings.
  • This migration can't be run until Users, Names, and Files have been migrated, so we can explicitly define that dependency in the migration_dependencies section. Set this way, Migrate won't allow this Image migration to be run until all the dependent migrations are complete.

There is also a little extra work that's necessary in the pix_image Migrate source plugin to get the body, summary, and Names term IDs from the source (I chose to do it this way instead of trying to get the ::query() method to do all the necessary joins, just because it was a little easier with the weird database structure in Drupal 6):

<?php
/**
 * {@inheritdoc}
 */
public function prepareRow(Row $row) {
  // Get Node revision body and teaser/summary value.
  $revision_data = $this->select('node_revisions')
    ->fields('node_revisions', ['body', 'teaser'])
    ->condition('nid', $row->getSourceProperty('nid'), '=')
    ->condition('vid', $row->getSourceProperty('vid'), '=')
    ->execute()
    ->fetchAll();
  $row->setSourceProperty('body', $revision_data[0]['body']);
  $row->setSourceProperty('teaser', $revision_data[0]['teaser']);

  // Get names for this row.
  $name_tids = $this->select('term_node')
    ->fields('term_node', ['tid'])
    ->condition('nid', $row->getSourceProperty('nid'), '=')
    ->condition('vid', $row->getSourceProperty('vid'), '=')
    ->execute()
    ->fetchCol();
  $row->setSourceProperty('names', $name_tids);

  return parent::prepareRow($row);
}
?>

Final migration: Galleries referencing images

The fifth and final migration puts everything together. Since we changed a little of the content architecture from Drupal 6 to Drupal 8, there's a tiny bit of extra work that goes into getting each Gallery's images related to it correctly. In Drupal 6, the images each referenced a gallery node. In Drupal 8, each Gallery has a field_images field that holds the references to Image media entities.

So we can still map the images the same way as other fields are mapped in the migration configuration:

[...]
process:
  [...]
  field_images:
    plugin: migration
    migration: pix_image
    source: images
[...]

But to get the images field definition correct, we need to populate that field with an array of Image IDs from the Drupal 6 site in the pix_gallery source plugin:

<?php
/**
 * {@inheritdoc}
 */
public function prepareRow(Row $row) {
  [...]
  // Get a list of all the image nids that referenced this gallery.
  $image_nids = $this->select('content_type_photo', 'photo')
    ->fields('photo', ['nid'])
    ->condition('field_gallery_nid', $row->getSourceProperty('nid'), '=')
    ->execute()
    ->fetchCol();
  $row->setSourceProperty('images', $image_nids);

  return parent::prepareRow($row);
}
?>

This query basically grabs all the old 'photo' node IDs from Drupal 6 that had references to the currently-being-imported gallery node ID, then spits that out as an array of node IDs. Migrate then uses that array (stored in the images field) to map old image nodes to new image media entities in Drupal 8.

Conclusion

I often think migrations are full of magic... and sometimes they do seem that way, especially when they work on the first try and migrate a few thousand items at once! But when you dig into them, you find that beneath one simple line of abstraction (e.g. title: title for a field mapping), there is a lot of grunt work that Migrate module does to get the source data reliably and repeatably into your fancy new Drupal 8 site.

This blog post is a little more rough than what I normally would write, and less 'tutorial-y', but I figure that most developers are like me—we learn by doing, but need to see real-world, working examples before the light bulb goes off sometimes. Hopefully I've helped with something through the course of writing this post.

Note that the migrations themselves took me a couple days to set up and debug, and I probably read through 10 other earlier blog posts, every line of certain classes' code, and all the documentation on Drupal.org pertaining to migrations in Drupal 8. Hopefully as time goes on and more examples are published, that aspect of migration development becomes less necessary :)

Oct 01 2016
Oct 01

PostgreSQL elephant transparent PNG
The PostgreSQL logo. Same family as PHP's mascot!

For the past few years, I've been intending to kick the tires of PostgreSQL, an open source RDBMS (Relational DataBase Management System) that's often used in place of MySQL, MariaDB, Oracle, MS SQL, or other SQL-compliant servers. Drupal 7 worked with PostgreSQL, but official support was a bit lacking. For Drupal 8, daily automated test builds are finally being run on MySQL, SQLite, and PostgreSQL, so many of the more annoying bugs that caused non-MySQL database engines to fail have finally been fixed!

With Drupal VM, one of my goals is to be able to replicate almost any kind of server environment locally, supporting all of the most popular software. Developers have already been able to choose Apache or Nginx, Memcached or Redis, Varnish, Solr or Elasticsearch, and many other options depending on their needs. Today I finally had the time to nail down PostgreSQL support, so now developers can choose which database engine they'd like—MySQL, MariaDB, PostgreSQL, or even SQLite!

As of Drupal VM 3.3.0, all four are supported out of the box, though for MariaDB or PostgreSQL, you need to adjust a couple settings in your config.yml before provisioning.

If you want to build a VM running Drupal 8 on PostgreSQL, the process is pretty simple:

  1. Download Drupal VM and follow the Quick Start Guide.
  2. Before running vagrant up, create a config.yml file with the contents:

---
drupalvm_database: pgsql

  3. Run vagrant up.

After a few minutes, you should have a new Drupal 8 site running on top of PostgreSQL!

PostgreSQL database engine Drupal 8 status report page

A few caveats

You should note that, just like with support for Apache vs. Nginx1, there are far fewer Drupal sites running on PostgreSQL than on MySQL (or MariaDB), so if you choose to use PostgreSQL, you'll likely encounter a bump in the road at some point. For example, to get PostgreSQL working with Drupal at all, the database has to use PostgreSQL's older 'escape' output format for binary data, which transmits ASCII instead of hex (hex has been the default since PostgreSQL 9.0).
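
For reference, a hedged example of making that change manually in PostgreSQL (the database name is illustrative):

ALTER DATABASE drupal SET bytea_output = 'escape';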

If you're planning on digging deeper in to PostgreSQL with Drupal (especially if you need to support things like spatial and geographic objects, or full-text search, and don't want to add on Apache Solr or something like it), you should read through this meta issue for Drupal 8: [meta] Remaining Drupal 8 PostgreSQL issues.

Learn More

1 Historically, Apache was used by the vast majority of Drupal sites, so many Drupal features, documentation pages, and hosting providers assume Apache and either don't consider Nginx configuration, or give it 'second-class' status. That's not to say it's not supported... just that you often need to do some of your own due diligence to get everything working smoothly and securely when not using the default option!

Sep 22 2016
Sep 22

Recovering from surgery finally gave me time to update my last D6 site—a 7 year old private photo and media sharing site with nearly 10,000 nodes and 20+ GB of content—to Drupal 8. Drupal 8 has become a lot more mature lately, to the point where I'm comfortable building a site and not having the foundation rot out from beneath as large ecosystem shifts have mostly settled down.

One thing that I thought would have the simplest implementation actually took a little while to figure out. I needed to have users' full name display instead of their usernames throughout the site. For D6 (and for similar D7 use cases), the easiest way to do this was to enable the Realname module, configure it a tiny bit, and be done with it.

In Drupal 8, however, Realname doesn't yet have a full release (see this issue for progress), and the way usernames are generated has changed slightly (see change record hook_username_alter() changed to hook_user_format_name_alter()).

So it took a few minutes' fiddling around before I came up with the following hook implementation that reformats the user's display name using a 'Name' field (machine name field_name) added to the user entity (you can add the field at /admin/config/people/accounts/fields), but only if there's a value for that user:

<?php
/**
 * Implements hook_user_format_name_alter().
 */
function custom_user_format_name_alter(&$name, $account) {
  // Load the full user account.
  $account = \Drupal\user\Entity\User::load($account->id());
  // Get the full name from field_name.
  $full_name = $account->get('field_name')->value;
  // If there's a value, set it as the new $name.
  if (!empty($full_name)) {
    $name = $full_name;
  }
}
?>

Note that there's ongoing discussion in the Drupal core issue queue about whether to remove hook_user_format_name_alter(), since there are some caveats with its usage, and edge cases where it doesn't behave as expected or isn't used at all. I'm hoping this situation will be a little better soon—or at least that there will be a Realname release so people who don't like getting their hands dirty with code don't have to :)

Aug 18 2016
Aug 18

During some migration operations on a Drupal 8 site, I needed to make an HTTP request that took > 30 seconds to return all the data... and when I ran the migration, I'd end up with exceptions like:

Migration failed with source plugin exception: Error message: cURL error 28: Operation timed out after 29992 milliseconds with 2031262 out of 2262702 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html).

The solution, it turns out, is pretty simple! Drupal's \Drupal\Core\Http\ClientFactory is the default way that plugins like Migrate's HTTP fetching plugin get a Guzzle client to make HTTP requests (though you could swap things out if you want via services.yml), and in the code for that factory, there's a line after the defaults (where the 'timeout' => 30 is defined) like:

<?php
$config = NestedArray::mergeDeep($default_config, Settings::get('http_client_config', []), $config);
?>

Seeing that, I know at a glance that Drupal is pulling any http_client_config configuration overrides from settings.php and applying them to the Guzzle Clients that this factory creates. Therefore, I can add the following to my site's settings.php to set the default timeout to 60 seconds instead of the default 30:

<?php
/**
 * HTTP Client config.
 */
$settings['http_client_config']['timeout'] = 60;
?>

Pretty simple! You can override any of the other settings this way too, like the proxy settings (there's an example in the default settings.php file), headers, and whether to verify certificates for https requests.
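
For example, a couple more hedged overrides following the same pattern (the values are illustrative):

<?php
// Send a custom User-Agent with Drupal's outbound requests.
$settings['http_client_config']['headers']['User-Agent'] = 'my-custom-client/1.0';
// Skip TLS certificate verification (local development only!).
$settings['http_client_config']['verify'] = FALSE;
?>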

Aug 11 2016
Aug 11

DrupalCamp St. Louis 2016 Logo

The time is here! The rest of the DrupalCamp St. Louis 2016 organizers and I were working feverishly this week to get all our ducks in a row, and we now have online registration opened up for DrupalCamp St. Louis 2016! Here are the relevant details:

You'll get a snazzy T-Shirt, a catered lunch, and the fuzzy warm feeling of being part of the great Drupal open source community! Plus I'll be there!

You can still submit session proposals until August 17—so get your proposal in soon; we'll announce the full set of selected sessions on August 20th!

Aug 04 2016
Aug 04

File this one under the 'it's obvious, but only after you've done it' category—I needed to attach a CSS library to a view in Drupal 8 via a custom module so that, wherever the view displayed on the site, the custom CSS file from my module was attached. The process for CSS and JS libraries is pretty much identical, but here's how I added a CSS file as a library, and made sure it was attached to my view:

Add the CSS file as a library

In Drupal 8, drupal_add_css(), drupal_add_js(), and drupal_add_library() were removed (for various reasons), and now, to attach CSS or JS assets to views, nodes, etc., you need to use Drupal's #attached functionality to 'attach' assets (like CSS and JS) to rendered elements on the page.

In my custom module (custom.module), I added the CSS file css/custom_view.css:

.some-class {
  color: #000;
}

Then, to tell Drupal about the CSS file, I added it as a library inside custom.libraries.yml (alongside the .module file):

custom_view:
  css:
    component:
      css/custom_view.css: {}

In this case, the library's name is the top-level element (custom_view), so when I later want to attach this new library to a page, node, view, etc., I can refer to it as custom/custom_view (basically, [module_name]/[library_name]).
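
As an aside, that same library name works anywhere #attached is available—for example, in any render array you build in custom code (a minimal sketch):

<?php
$build['#attached']['library'][] = 'custom/custom_view';
?>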

Attach the library to your view

Thank goodness for tests! I was looking through the Drupal core issue queue for issues mentioning views using #attached, and eventually found a patch that referred to the test views_test_data_views_pre_render(), which tests the ability to alter a view using the pre-render hook, and thankfully includes an example for attaching a library.

In my case, learning from that test, I want to attach the library to my view by the view ID (in my case super_awesome_view), so I added the following hook in my .module file:

use Drupal\views\ViewExecutable;

/**
 * Implements hook_views_pre_render().
 */
function custom_views_pre_render(ViewExecutable $view) {
  if (isset($view) && ($view->storage->id() == 'super_awesome_view')) {
    $view->element['#attached']['library'][] = 'custom/custom_view';
  }
}

You may be wondering, "What was wrong in the old days with the simplicity of drupal_add_css()?" Well, the main reason much of Drupal 8's awesome caching ability is possible is that all rendered markup can have cacheability metadata attached, and attaching CSS and JavaScript like this allows that caching system to work automatically. In Drupal 7, it was just too messy when any code anywhere could toss in a random stylesheet or JS file outside of Drupal's more rigid Render API.

Aug 01 2016
Aug 01

In Drupal 8, many small things have changed, but my willingness to quickly hack something out in a few lines of code/config instead of installing a relatively large module to do the same thing hasn't :-)

I needed to add a checkbox to control whether the page title should be visible in the rendered page for a certain content type on a Drupal 8 site, and there are a few different ways you can do this (please suggest alternatives—especially if they're more elegant!), but I chose to do the following:

  1. Add a 'Display Title' boolean field (checkbox, using the field label as the title, and setting off to 0 and on to 1 in the field settings) to the content type (page in this example).

    Drupal 8 Basic Page 'Display Title' checkbox

  2. Make sure this field is not set to be displayed in the content type's display settings.
  3. In my theme's hook_preprocess_page (inside themename.theme), add the following:
<?php
/**
 * Implements hook_preprocess_page().
 */
function themename_preprocess_page(&$variables) {
  // Hide title on basic page if configured.
  if ($node = \Drupal::routeMatch()->getParameter('node')) {
    if ($node->getType() == 'page') {
      if (!$node->field_display_title->value) {
        unset($variables['page']['content']['mysite_page_title']);
      }
    }
  }
}
?>

mysite_page_title is the machine name of the block that you have placed on the block layout page (/admin/structure/block) with the page title in it.

After doing this and clearing caches, the page title for Basic Page content was easy to show and hide based on that simple checkbox. Or you can use the Exclude Node Title module, if you don't want to get your hands dirty!

Jul 18 2016
Jul 18

Today I needed to migrate a URL/Link into a Drupal 8 site, and I was scratching my head over how to migrate it so there were distinct values for the URL (the actual link) and the Label (the 'title' that displays to end users and is clickable). Drupal 8's Link field type allows you to set a URL in addition to an optional (or required) label, but by default, if you just migrate the URL, the label will be blank.

I first set up the migration config like so:

...
process:
  field_url: source_url

And source_url was defined in the migration's source.fields configuration.

In my case, the source data didn't have a label, but I wanted to set a default label so the Drupal 8 site could display that as the clickable link (instead of an ugly long URL). To do that, it's similar to migrating a formatted text field, where you can migrate individual components of the field using the syntax [field_name]/[component]. In a Link field's case, it looks like:

...
process:
  'field_url/uri': source_url
  'field_url/title':
    plugin: default_value
    default_value: 'Click here!'

A lot easier than I was expecting—I didn't have to do anything in prepareRow() or even write my own plugin! I found that the parameters were uri and title by digging into the Link module's field propertyDefinitions, which list a uri, title, and options (the definition is in Drupal\link\Plugin\Field\FieldType).

Special thanks to Richard Allen for cluing me into this after I was looking for documentation to no avail (he pointed out that Link fields are probably just like the core Body field, which is migrated like so, with body/value, body/format, etc.). He also mentioned that pinging people in the #drupal-migrate IRC channel is usually a helpful way to get help at this point in the game!

Jul 13 2016
Jul 13

On a recent project, there was a Migration run that took a very long time, and I couldn't pinpoint why; there were multiple migrations, and none of the others took very long at all (usually processing at least hundreds if not thousands of nodes per minute). In Drupal 7, if you enabled the XHProf module, then you'd get a checkbox on the configuration page that would turn on profiling for all page requests and Drush commands.

In Drupal 8, the XHProf module was completely rewritten, and as a side effect, the Drush/CLI profiling functionality is not yet present (see: Profile drush/CLI with XHProf in Drupal 8).

Since I don't have the time right now to help figure out how to get things working through the official XHProf module, I decided to use a 'poor man's profiling' method to profile a Migration run:

  1. Find the Migrate module's drush command file (migrate_tools.drush.inc).
  2. Inject the appropriate xhprof functions in the right places to set up and save a profiling run.
  3. Run the drush mi [migration-name] command.
  4. Profit!

In my case, since I was using Drupal VM and its default XHProf configuration, I had to add the following PHP:

<?php
/**
 * @param string $migration_names
 */
function drush_migrate_tools_migrate_import($migration_names = '') {
  // Enable XHProf profiling for CPU and Memory usage.
  xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

  // ... migrate drush command code here ...

  // Disable XHProf, save the run into the configured directory.
  $xhprof_data = xhprof_disable();
  $XHPROF_ROOT = "/usr/share/php";
  include_once $XHPROF_ROOT . "/xhprof_lib/utils/xhprof_lib.php";
  include_once $XHPROF_ROOT . "/xhprof_lib/utils/xhprof_runs.php";
  $xhprof_runs = new XHProfRuns_Default();
  $run_id = $xhprof_runs->save_run($xhprof_data, "xhprof_testing");
}
?>

The $XHPROF_ROOT should point to the directory where you have XHProf installed.

After doing this, and running drush mi [migration-name], I looked at the Drupal VM XHProf runs page (configured by default at http://xhprof.[drupal-vm-name]/), and noticed the new run (the one at the top—the second one in this screenshot was from a run I did while viewing the site in the browser):

XHProf Drupal VM Dashboard page screenshot

See more on the PHP.net XHProf documentation pages, most notably the XHProf example with optional GUI example.

Jun 30 2016
Jun 30
Many developers who work on Drupal (or other web/PHP) projects have error reporting disabled in their local or shared dev environments. They do this for a variety of reasons: some don't know how to enable it, some are annoyed by the frequency of notices, warnings, and errors, and some don't like to be reminded of how many errors are logged. But there are a few important reasons you should make sure to show all errors when developing:
Jun 20 2016
Jun 20

DrupalCamp St. Louis logo - Fleur de Lis

DrupalCamp St. Louis 2016 will be held on September 10-11 in St. Louis, MO, on the campus of the University of Missouri, St. Louis, and we're excited to announce that session submissions are open!

We'd love to hear people speak about Drupal business, case studies, coding, community, DevOps, front end, PHP, project management, security, or any other Drupal topic. If you're interested in speaking, please submit a session for consideration, and we'll announce the selected sessions before August 1st.

Need some ideas for inspiration, or some guidelines for what makes a great DrupalCamp session? Please read through the Session FAQs created by the amazing organizers of MidCamp—the same general things apply to DrupalCamp St. Louis.

Haven't spoken at a DrupalCamp before? Now's your chance to shine!

We're also accepting camp sponsorships. Please see our Camp Sponsors page for information on how you can help support one of the midwest's best Drupal events!

Jun 12 2016
Jun 12

Drupal 8 plus Hosted Apache Solr

After a few months of testing, I'm happy to announce Hosted Apache Solr now supports Search API Solr with Drupal 8! Both Search API and Search API Solr have been getting closer to stable releases, and more people have been requesting Drupal 8 search cores, so I decided to finish testing and updating support guides this weekend.

Note that getting Solr and search pages configured in Drupal 8 is a bit different than in Drupal 7 (even if you were using the Search API module instead of the older Apache Solr Search module)—I posted a detailed setup guide earlier this year: Set up a faceted Apache Solr search page on Drupal 8 with Search API Solr and Facets.

I still haven't migrated either this site or Hosted Apache Solr to Drupal 8 yet, so I don't have a canonical Drupal 8 reference site using Hosted Apache Solr at this time, but I've been working on a few other Drupal 8 sites, and am happy to report I can't imagine building a site in Drupal 7 again, if I can help it :) There are a few rough edges, but all the great new features more than make up for that!

Jun 07 2016
Jun 07

DrupalCamp St. Louis 2016 - Landing page

I wanted to post this as a 'save the date' to any other midwestern Drupalists—here in St. Louis, we'll be hosting our third annual DrupalCamp on September 10 and 11 (Saturday and Sunday) in St. Louis, MO. We'll have sessions on Saturday, and a community/sprint day Sunday, and just like last year, we'll record all the sessions and post them to the Drupal STL YouTube channel after the Camp.

We're still working on a few details (nailing down the location, getting things set up so we can accept session submissions and registrations, etc.), but if you're interested in coming, please head over to the official DrupalCamp STL 2016 website and sign up to be notified of further information!

We're also seeking sponsors this year—read through the sponsors page for sponsorship levels and benefits, and help make this another great community event!

Jun 06 2016
Jun 06

Drupal VM is one of the most flexible and powerful local development environments for Drupal, but one of the main goals of the project is to build a fully-functional Drupal 8 site quickly and easily without doing much setup work. The ideal would be to install Vagrant, clone or download the project, then run vagrant up. A few minutes later, you'd have a Drupal 8 site ready for hacking on!

In the past, you always had to do a couple extra steps in between, configuring a drupal.make.yml file and a config.yml file. Recently, thanks in huge part to Oskar Schöldström's herculean efforts, we achieved that ideal by switching from defaulting to a Drush make-based workflow to a Composer-based workflow (this will come in the 3.1.0 release, very soon!). But it wasn't without trial and tribulation!

Before we switched the default from Drush make to Composer, I wanted to get initial build times down so users didn't have to wait for an excruciatingly long time to download Drupal. At first, using all the defaults, it took twice as long to build a Drupal site from a composer.json file or using drupal-project as it did to build the Drupal site from an equivalent make file. The main reason is that Composer spends a lot more time than Drush in resolving project dependencies (recursively reading all composer.json files and downloading all the required projects into the vendor directory).

I thought I'd share some of the things we learned concerning speeding up Composer installs and updates for Drupal (and other PHP projects) in a blog post, so the tips aren't buried in issues in Drupal VM's issue queue:

Use prestissimo

prestissimo is a Composer plugin that enables parallel installations. All you have to do is composer global require "hirak/prestissimo:^0.3", and all composer install commands will use parallel package downloads, greatly speeding up the initial installation of Drupal.

For Drupal VM, the Drupal download time went from 400 seconds to 166 seconds—more than 2x faster for the Composer installation!

Make sure XDebug is disabled

This one should be rather obvious, but many times, developers leave XDebug enabled on the CLI, and this slows down Composer substantially—sometimes making installs take 2-4x longer! Make sure php_xdebug_cli_enable is 0 in Drupal VM's config.yml if you have xdebug installed in the installed_extras list.
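
To double-check, you can ask the CLI PHP binary which modules it has loaded (a generic check, not specific to Drupal VM):

# If this prints nothing, Xdebug is not loaded for CLI PHP.
php -m | grep -i xdebug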

(If using Vagrant) Use vagrant-cachier

Many Vagrant power users already use vagrant-cachier with their VMs to cache apt or yum packages so rebuilds are quicker (you don't have to re-download frequently-installed packages anymore); but to use it with Composer, you need add one extra bit of configuration in your Vagrantfile:

if Vagrant.has_plugin?('vagrant-cachier')
  # ... any other cachier configuration ...

  # Cache the composer directory.
  config.cache.enable :generic, :cache_dir => '/home/vagrant/.composer/cache'
end

We can't use vagrant-cachier's built-in Composer bucket, because PHP isn't preinstalled on the base boxes Drupal VM uses. So we use a :generic bucket instead, and manually point it at the Composer cache directory inside the VM.

Things that didn't seem to help

  • Shallow Git clones: Some people suggested using shallow Git clones (e.g. following this Composer PR), but it didn't make a measurable difference.
  • "minimum-stability": "dev": In the past, setting minimum-stability to dev could speed things up a bit while Composer sorts out the dependency tree (see this post). It seems to not have any measurable impact in this case, though.
  • There are still some other areas ripe for improvement, too—for example, the drupal-packagist project may be able to improve its caching infrastructure to greatly speed up download times.

Please let me know if there are other tips and tricks you may have that can help speed up Composer—we've almost hit the same build times with Composer that we hit with Drush make files, but make files are still slightly faster.

May 27 2016
May 27

Any time there are major new versions of software, some of the tooling surrounding the software requires tweaks before everything works like it used to, or as it's documented. Since Drupal 8 and Drush 8 are both relatively young, I expect some growing pains here and there.

One problem I ran into lately was quite a head-scratcher: On Acquia Cloud, I had a cloud hook set up that was supposed to do the following after code deployments:

# Build a Drush alias (e.g. [subscription].[environment]).
drush_alias=${site}'.'${target_env}

# Run database updates.
drush @${drush_alias} updb -y

# Import configuration from code.
drush @${drush_alias} cim vcs

This code (well, with fra -y instead of cim) works fine for some Drupal 7 sites I work on in Acquia Cloud, but it seems that database updates were detected but never run, and configuration changes were detected but never made... it took a little time to see what was happening, but I eventually figured it out.

The tl;dr fix?

# Add --strict=0 to resolve the error Drush was throwing due to alias formatting.
drush @${drush_alias} updb -y --strict=0

# I forgot a -y, so Drush never actually executed the changes!
drush @${drush_alias} cim -y vcs

For the first issue, Acquia Cloud generates its own Drush alias files, and in all the aliases, it includes some options like site and env. It seems that Drush < 8 would just ignore extra options like those... but Drush 8.x throws an error and stops execution of the current task because of those extra options. Using --strict=0 tells Drush to squelch any errors thrown by those extra options. Eventually, I'm guessing Acquia Cloud's Drush aliases will be made fully compatible with Drush 8.x, but this workaround is fine for now.

For the second issue, it was just my boneheadedness... if you're running any command that requires a prompt non-interactively (e.g. through an automated system like cloud hooks), you have to add the 'assume-yes' option, -y, just like I used to with fra -y in Drupal 7!

Before, I would get the following error message on every Cloud deployment:

The following updates are pending:

custom module :
  8001 -   Add customizations.

Do you wish to run all pending updates? (y/n): y
Unknown options: --site, --env.  See `drush help                         [error]
updatedb-batch-process` for available options. To suppress this
error, add the option --strict=0.
Unknown options: --site, --env.  See `drush help cache-rebuild` for      [error]
available options. To suppress this error, add the option --strict=0.
Finished performing updates.                                                [ok]

Even though it says 'Finished performing updates', they didn't actually get run. Now it runs the updates without any issue:

The following updates are pending:

custom module :
  8001 -   Add customizations.

Do you wish to run all pending updates? (y/n): y
Performing custom_update_8001                                                                              [ok]
Cache rebuild complete.                                                                                      [ok]
Finished performing updates.                                                                                 [ok]

May 26 2016
May 26

Raspberry Pi Dramble Cluster with Mini Raspberry Pi Zero Cluster

Another year, another field trip for the Pi Dramble—my 5-Raspberry-Pi cluster! I presented a session titled Highly available Drupal on a Raspberry Pi Cluster at php[tek] 2016, which just so happens to have moved to my hometown, St. Louis, MO this year!

For this presentation, I remembered to record the audio using a lav mic plugged into my iPhone, as well as iShowU to record what was on my screen. Sadly, I didn't have a secondary camera to capture the Pi Dramble itself, but you can glance at all the other 'Let's build a Pi Cluster' videos if you want to see it in action!

Here's a video recording of the presentation:

[embedded content]

And here are the slides:

If you haven't yet seen the Dramble, check out all the details at http://www.pidramble.com/.

It was a fun and lively session, and I thank the php[tek] organizers for allowing me to share my passion for electronics, Ansible, PHP, and Drupal with another great group of people. I'll be giving another talk tomorrow, on a completely unrelated topic, ProTips for Staying Sane while Working from Home.

May 19 2016
May 19

Drupal VM 3.0.0 "The Light Sailer" was just released, and you can grab it from the Drupal VM website now. We spent a lot of time during DrupalCon New Orleans sprinting on Drupal VM, fixing bugs, and updating ALL THE THINGS to make sure this release solves a lot of pain points for individuals and teams who need a great local development environment.

Drupal VM - Website Homepage

Let's get right into why this is the best release of Drupal VM EVER!

The fastest and most modern environment

Drupal VM now defaults to Ubuntu 16.04 (which was just released in late April), running MySQL 5.7 and PHP 7. This means you're getting the fastest, most reliable, and most modern development environment for your Drupal 8 projects.

But you can still stick with any of the old OSes and versions of PHP just like you always could: Ubuntu 16.04, 14.04, and 12.04, as well as CentOS 7 and 6, and even Debian Jessie or Wheezy are supported out of the box! Technically, you can still run any version of PHP from 5.1 to 7.0 in Drupal VM (depending on OS selection)... but only PHP 5.5+ is supported right now.

Also, for 2.5.1, Blackfire.io support was added, so you can now profile in any PHP version with either Blackfire or XHProf! (There was a great session on Blackfire at DrupalCon NOLA.)

The best team-based development environment

New features in Drupal VM allow teams to do the following:

  • Add Drupal VM as a composer.json dependency (that's right, Drupal VM is on Packagist! - docs)
  • Commit a shared config.yml, and let developers override only the settings they need in a local.config.yml (docs)
  • Use Drupal VM and Vagrant in a subfolder, meaning you don't have to cd into the Drupal VM directory to run commands like vagrant up
  • Use a custom Vagrantfile to add in or modify Drupal VM's default Vagrant configuration (docs).
  • Add custom shell scripts and/or Ansible task files to be run before and/or after provisioning (docs)

This is the best release yet for development teams, because Drupal VM can be configured specifically for a particular Drupal site—and then parts of the configuration can be overridden by individual developers without any hacks!

A stable, reliable upgrade path

During the course of the Drupal VM 2.x series, one of the major pain points was upgrading Drupal VM to newer versions. At the Drupal VM BoF at DrupalCon, many people mentioned that every time they upgraded Drupal VM, they ended up with some random errors that caused friction. Even if not upgrading, certain Ansible roles would cause problems with older versions of Drupal VM!

Drupal VM now specifies versions of Ansible roles that are used to build the VM (as of 2.5.1), so if you download Drupal VM today, and don't upgrade for a long time, everything should keep working. And if you upgrade to a new version, and read through the release notes, you should have a smooth upgrade process.

600+ stars!

When I started working on Drupal VM, I just tossed my own local Vagrant configuration on GitHub and hoped someone else would see some good ideas in it. When the project had 50 stars (then 100, then 200), I was amazed, and wondered when interest in Drupal VM would start waning.

Ansible Galaxy - Explore Role Downloads

Lo and behold, a couple years in, Drupal VM has been starred over 600 times, and all the most downloaded roles on Ansible Galaxy are roles used in Drupal VM! It's also humbling, and quite awesome, to meet a complete stranger at DrupalCon who uses Drupal VM; thank you to all those who have used Drupal VM, have helped with all the open source Ansible Galaxy roles, and also help fight the good fight of automating all the infrastructure things for Drupal!

Special thanks to all the users who have contributed to the last few releases: oxyc, rodrigoeg, thom8, scottrigby, iainhouston, quicksketch, Mogtofu33, stevepurkiss, slimatic, sarahjean, and derimagia (this list is not exhaustive, and I know I met more people at DrupalCon who helped but I forgot to mention here—please let me know if I forgot about you in the comments!).

What the future holds

I'm starting to work on better planning for future releases (e.g. 3.1.0, etc.), and you can always check the Drupal VM issue queue (especially 'Milestones') to see the latest. Somewhere down the line, parts of Drupal VM will likely start using Docker and/or other containerization tech to make builds and rebuilds much faster. There are already some users exploring the use of vagrant-lxc on Linux for speedier builds!

Where will we go for 4.0? I'm not quite sure yet... but I'll keep the guiding principles for my own development environment in mind:

  • Fast and flexible
  • Stable and secure (for local dev, at least)
  • Cross-platform compatible

Please download Drupal VM, try it out, and see what you think!

May 18 2016
May 18

Since a quick Google search didn't bring up how to do this in Drupal 8 (there are dozens of posts on how to do it in Drupal 7), I thought I'd post a quick blog post on how you can modify a user's roles in Drupal 8. Hint: It's a lot easier than you'd think!

In Drupal 7, $user was an object... but it was more like an object that acted like a dumb storage container. You couldn't really do anything with it directly—instead, you had to stick it in functions (like user_multiple_role_edit()) to do things like add or remove roles or modify account information.

In Drupal 8, $user is a real, useful object. Want to modify the account name and save the change?

<?php
$user->setUsername('new-username');
$user->save();
?>

There are now dozens of simple methods you can call on a user object to get and set information on a $user object, and what used to be a little annoying in Drupal 7 (modifying a user's roles) is now very straightforward. Let's say I want to add the 'administrator' role to a user account:

<?php
$user->addRole('administrator');
$user->save();
?>

Done!

Want to remove the role?

<?php
$user->removeRole('administrator');
$user->save();
?>

Added bonus—you no longer need to retrieve a role ID using user_role_load_by_name(), because in Drupal 8 role IDs are now machine readable strings!
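
As a related convenience, the user entity also has a hasRole() method, so you can avoid redundant saves; a small sketch:

<?php
// Only add the role (and save) if the account doesn't already have it.
if (!$user->hasRole('administrator')) {
  $user->addRole('administrator');
  $user->save();
}
?>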

I often need to add a drush command that can be run in non-production environments that will make certain users administrators (e.g. developers who need full access in non-prod, but shouldn't have full access on production), and using this new logic, it's extremely easy to build a drush command to do this.

First, create a drush command file (I generally create them in the drush folder in the site's docroot, titled [command].drush.inc). In my case, I created npadmin.drush.inc (for "Non-Prod Admin"), with the following contents:

<?php

/**
 * @file
 * Non-production admins drush command.
 */

/**
 * Implements hook_drush_command().
 */
function npadmin_drush_command() {
  $items = [];

  $items['non-prod-admins'] = [
    'description' => "Makes certain users administrators in non-prod environments.",
    'examples' => [
      'drush npadmin' => 'Make certain users admins in non-prod environments.',
    ],
    'aliases' => ['npadmin'],
  ];

  return $items;
}

/**
 * Implements drush_hook_COMMAND().
 */
function drush_npadmin_non_prod_admins() {
  $users_changed = 0;

  // List of users who should be made administrators in non-prod environments.
  $users_to_make_admin = [
    'jeff.geerling',
    'etc...',
  ];

  foreach ($users_to_make_admin as $name) {
    $user = user_load_by_name($name);
    if ($user) {
      $user->addRole('administrator');
      $user->save();
      $users_changed++;
    }
  }

  drush_log(dt('Assigned the administrator role to !count users.', ['!count' => $users_changed]), 'ok');
}
?>

After creating the Drush command file, clear Drush's cache (drush cc drush), and then run drush npadmin in whichever environments should have these users become administrators.

Going further, you could turn $users_to_make_admin into a configuration item, so you could change it without changing code (in Drupal 7, I often used a variable for this purpose).
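
A minimal sketch of that approach, assuming a hypothetical npadmin.settings config object containing a usernames list:

<?php
// Hypothetical config read; falls back to an empty list if unset.
$users_to_make_admin = \Drupal::config('npadmin.settings')->get('usernames') ?: [];
?>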

Programmatically managing a user's roles is a great example of OOP code in Drupal 8 making programming more simple and logical. Check out UserInterface for many more methods you can call on a user object!

May 10 2016
May 10

Another year, another Acquia Certification exam...

Acquia Certified Developer - Drupal 8 Exam Badge 2016

I'm at DrupalCon New Orleans, the first North American DrupalCon since the release of Drupal 8. In addition, this is the first DrupalCon where the Acquia Certified Developer - Drupal 8 Exam is being offered, so I decided to swing by the certification center (it's on the 3rd floor of the convention center, in case you want to take any of the certification exams this week!) and take it.

I've taken all the other exams up to this point, including the general developer exam, the front end and back end specialist exams, and the site builder exam, and I've posted short reviews of each of those exams (click the links to read the reviews), so I thought I'd post a review of this exam as well, for the benefit of others who will take it in coming months!

Note: If you'd like a good overview of my perspective on the Acquia Certifications in general, please read my post on the general developer exam, which was written prior to my Acquia employment. Not that I bias my posts here based on my employer, but I do realize there are many who are conflicted over the value of a Drupal certification (or of any software certifications in general), and there are pros and cons to taking the actual exams. I'm of the opinion that the exams are a good thing for the Drupal community, but probably not a great way to judge individual developers (e.g. for hiring purposes), unless taken as one data point in the evaluation process.

A little bit of the old...

For anyone who's taken one of the previous exams, this exam should feel familiar. Probably half of the questions (mostly the 'foundational web development' and 'site building' ones) could be used on any of the Drupal Certification exams. This is because Drupal 8 does carry over a lot of the same content and admin architecture of Drupal 7, even if things underlying the architecture have changed dramatically.

Things like adding fields, managing content types, adding relationships, displaying lists of content, etc. are fundamentally the same as they were in Drupal 7, though all the modules for performing these operations are included in Drupal 8 core.

Additionally, many parts of the theming realm (preprocess functions and the Render API, mostly) are the same.

...A good dose of the new

However, there are maybe 5-10 questions that are a little trickier for me (and would be for anyone working with both Drupal 7 and Drupal 8 sites), because the questions deal with APIs or bits of Drupal that have changed only subtly, or were only halfway re-architected in Drupal 8. For example, there are a few things in Drupal 8 that use Symfony's Event Dispatcher. But wouldn't you know, nodes still use the old-school hooks! As I have only worked on a couple custom modules that tie into node operations, I wasn't sure what the status of conversion to the new system was... but if you're wondering, HookEvent is still not part of Drupal 8 core.

Besides those ambiguous questions (most of which I'm sure I answered incorrectly), there were a number of questions related to basic Twig syntax (e.g. how do you print variables, how do you filter variables, how do you structure a template, how do you add a template suggestion, etc.), and thankfully, there wasn't anything super-deep, like how Twig blocks (not Drupal blocks) work!

You'll want to have at least a basic understanding of creating a route, creating a block plugin, defining a service, adding a CSS or JS file as a library, and using annotations properly in custom modules. You can look at some core modules (like the Book module) for simple implementation examples.
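
For instance, here's a minimal block plugin skeleton showing the class-plus-annotation pattern (the 'example' module name and labels are hypothetical); it would live at modules/custom/example/src/Plugin/Block/ExampleBlock.php:

<?php

namespace Drupal\example\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Provides an example block.
 *
 * @Block(
 *   id = "example_block",
 *   admin_label = @Translation("Example block")
 * )
 */
class ExampleBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    // Every block plugin returns a render array from build().
    return ['#markup' => $this->t('Hello from a block plugin!')];
  }

}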

A few errors

Since this is a relatively new exam (I think it was 'officially' launched this week!), there are a few rough edges (grammatical issues, question style issues) that will be ironed out in the next few months; when I had taken the other exams, those little issues had already been worked out, so all my thought could focus on the question at hand, and the potential solutions, and not on style, phrasing, or content.

There was one question that seems to have been partially cut off, as it had a statement, a code block, then some answers... but no actual question! I inferred what the question would be based on context... but I also let the facilitator know, and by the time you take the exam, the erroneous questions will likely be fixed.

These are common issues with the launch of a new exam, since the exam questions are built up from a large number of contributed questions, and the tool (webassessor) doesn't always seem to offer the best interface/input for technical questions. The same caveats apply to this exam as the others—make sure you read through the question a couple times if it's unclear, and whenever there are code samples (especially in answers), make sure you're parsing things correctly!

My Results

I took the exam early in the week, before I'd had much caffeine, so I might have scored marginally higher under better conditions. Regardless, I'm happy with my overall 81.66%, broken down into the following:

  • Fundamental Web Development Concepts: 100%
  • Site Building: 77%
  • Front end Development: 80%
  • Back end Development: 80%

I'm guessing that after I have a few more Drupal 8 sites under my belt (especially ones that don't use the default Bartik theme, like the Raspberry Pi Dramble site), I can bump up some of the other scores a bit. There are a lot of subtle differences between Drupal 7 and Drupal 8 that can trip a seasoned Drupalist up in these questions!

May 08 2016
May 08

Raspberry Pi Dramble cluster - with a banana for scale

When I originally built the Raspberry Pi Dramble 6-node Pi cluster in 2014 (for testing Ansible with bare metal hardware on the cheap), I compiled all the code, notes, etc. into a GitHub repository. In 2015, I decided to take it a step further, and I started hosting www.pidramble.com on the cluster, in my basement office!

Every time I've shared the cluster with others during presentations or at events like DrupalCon, someone asks how it's built. While I do have almost all my notes and instructions for building the entire cluster on the Pi Dramble Wiki, a step-by-step visual guide is better. Therefore, I'm posting a series of videos, "Let's Build a Raspberry Pi Cluster", to my YouTube channel.

The first video goes over hardware parts and setup:

The second goes over microSD card setup and Raspbian OS configuration:

These videos will also serve as good background material for those attending my upcoming talks on Drupal 8 and the Raspberry Pi:

I'll post again once I've wrapped up the video series (it will likely be either 5 or 6 parts total). What do you think of the videos? Should I consider making more of this kind of video, or ditch the idea and stick to writing?

May 06 2016
May 06
When module authors decide to port their modules to a new major version of Drupal (e.g. 6 to 7, or 7 to 8), they often take the time to rearchitect the module (and sometimes an entire related ecosystem of modules) to make development more efficient, clean up cruft, and improve current features.
May 03 2016
May 03

In Drupal 8, Search API Solr is the consolidated successor to both the Apache Solr Search and Search API Solr modules in Drupal 7. I thought I'd document the process of setting up the module on a Drupal 8 site, connecting to an Apache Solr search server, and configuring a search index and search results page with search facets, since the process has changed slightly from Drupal 7.

Install the Drupal modules

In Drupal 8, since Composer is now a de-facto standard for including external PHP libraries, the Search API Solr module doesn't actually include the Solarium code in the module's repository. So you can't just download the module off Drupal.org, drag it into your codebase, and enable it. You have to first ensure all the module's dependencies are installed via Composer. There are two ways that I recommend for doing this (both are documented in the module's issue: Keep Solarium managed via composer and improve documentation):

  1. Run the following commands in the root directory of your site's codebase:
    1. composer config repositories.drupal composer https://packagist.drupal-composer.org
    2. composer require "drupal/search_api_solr 8.1.x-dev" (or 8.1.0-alpha3 for the latest stable alpha release)
  2. Install Composer Manager and follow Composer Manager's installation instructions to let it download module dependencies (after having downloaded search_api_solr into your codebase via Drush or manually).

Basically, you need to make sure Search API, Search API Solr, and Facets (formerly Facet API) are in your codebase before you can install them on the Extend page:

Install Facets, Search API, and Solr Search modules in Drupal 8

Install those three modules on your site (either on the 'Extend' page, or via Drush with drush en [module_name]), and if you don't need the redundant core search functionality (you probably don't!), uninstall the Search module.
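
If you'd rather do it all from the command line, something like this should work (module machine names assumed from the project pages; adjust if yours differ):

# Enable Search API, Search API Solr, and Facets, then uninstall core Search.
drush en search_api search_api_solr facets -y
drush pmu search -y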

Connect to your Solr server

Visit the Search API configuration page (/admin/config/search/search-api, or Configuration > Search and metadata > Search API in the menu), and click 'Add Server'. Add a server with the following configuration (assuming you're running this on a local instance of Drupal VM, which easily installs and configures Solr for you):

  • Server name: Local Solr Server (don't use localhost until this issue is resolved!)
  • Backend: Solr (should be the only option at this point)
  • HTTP protocol: http
  • Solr host: localhost
  • Solr port: 8983
  • Solr path: /solr
  • Solr core: collection1 (this Solr search index/core is set up for you if you use Drupal VM and uncomment the example post-provision script for Solr)
  • Basic HTTP authentication: (make sure this is blank)
  • Advanced: These options are up to you. Leave the version override and HTTP method set to their automatic defaults.

Click 'Save', and the server should be saved. Hopefully under Server Connection on the page that you're taken to, you see the message The Solr server could be reached. This means the server is set up correctly, and you can move on to creating a search index on the server.

Create a search index

Back on the Search API configuration page, click 'Add Index'. For the example, we'll create an index with only Article content, with the following configuration:

  • Index name: Articles
  • Data sources: Content
    • What should be indexed?: None except those selected
    • Bundles: Article
    • Languages: English
  • Server: Local Solr Server

Make sure the index is enabled (there's a checkbox for it), and click 'Save'. By default, Search API will index all article content that currently exists immediately (or none, if none exists at this point).

If you don't have any article content yet, create a few articles with a title, body, and some tags. Now that you have content, make sure it's indexed by running cron (either use drush cron or run it via the Reports > Status report page). Check the index page at /admin/config/search/search-api/index/articles to make sure articles are in the index:

Search API Article index status Drupal 8

Add fields to the index

Before you can search for content in the index, you need to make sure all the fields you're interested in searching are available in Solr. You need to go to the 'Fields' tab for the index, and click the 'Add fields' button to add fields from the content type to this index:

Search API Article index add fields to index Drupal 8

Under the 'Content' section, I chose to add the following fields:

  • ID
  • UUID
  • Content Type
  • Title
  • Authored by > User > Name
  • Authored on
  • URL alias > Path alias
  • Body
  • Tags

Note: As of May 2016, the UI for adding fields feels slightly jarring. There's an open issue to improve the field management UI in Search API: AJAXify and generally improve the new Fields UI.

Click 'Done' once finished adding fields, then make sure all the fields you added are present under 'Content' back on the field overview page. (Later, when you're going further into index customization and optimization, you'll spend a bit more time on this page and in this process of refining the fields, their boost, their types, etc.)

Add processors to the index

The last step in setting up the search index is the addition of 'processors', which allow the index to be more flexible and work exactly how we want it for a fulltext 'fuzzy' search as well as for faceted search results. Go to the 'Processors' tab for the Articles index, and check the following checkboxes:

  • Aggregated fields
  • Highlight
  • Ignore case
  • Node status
  • Tokenizer
  • Transliteration

Then, to give ourselves the ability to search with one search box on multiple fields (e.g. title, body, and tags), at the bottom of the page click 'Add new Field' in the 'Aggregated fields' configuration (note this might currently drop you into a different vertical tab once clicked—switch back to the Aggregated fields tab if so), and then call the new field 'Fulltext Article search'. Check Body, Tags, URL alias, Title, and Authored by.

Click 'Save' at the bottom of the form to save all changes. The search index will need to be updated so all the processors can have an effect, but before doing that, go back to the 'Fields' configuration for the Article search index, and switch the new aggregated field from type 'String' to 'Fulltext':

Use type Fulltext for aggregated field in Search API Article index

Click 'Save changes', then go back to the search index View page and click the 'Index now' button (or use cron to reindex everything).

Configure a search page and search facets

Now we come to the final and most important part of this process: creating a search page where users can search through your articles using a full-text, faceted search powered by Solr. Create a new View (visit /admin/structure/views and click 'Add new view'), and name the view 'Article search'. For the view's general settings:

  • Show: Index Articles
  • Page: Create a page (checked)
    • Page title: Article search
    • Path: search/articles
    • Display format: Unformatted list of Rendered entity

Click 'Save and edit' to configure the view. Now we will configure the search index view to our liking:

  1. Click the 'Settings' for the Rendered entity (Format > Show), and switch from the Default view mode to Teaser.
  2. Click the 'Add' button under Filter criteria, and add the "Fulltext Article search" aggregate field we added earlier.
  3. Check the 'Expose this filter to visitors' button, and change the label to 'Search', then Apply the changes.
  4. Save the view, then visit /search/articles.

Test out your new search page, and make sure you can search any part of any of the aggregated text fields.

Note: If you want to be able to search parts of words (e.g. word stems like 'other' match instances of 'another'), then you have to do that in your Solr schema, using Solr's EdgeNGramFilterFactory; see this documentation from Hosted Apache Solr: Enable partial word match keyword searching with EdgeNGramFilterFactory.

Add search facets

Now that we have the general search page set up, let's add a few facets to allow users to more easily drill down into their search results. Go to the Facets admin page (/admin/config/search/facets or Configuration > Search and metadata > Facets), and click 'Add facet' to add two different facets:

  1. A 'Publish date' facet:
    1. Facet name: Publish date
    2. Facet name for URLs: published
    3. Facet source: Article search > Page
    4. Facet field: Authored on
    5. Weight: 0
  2. A 'Tags' facet:
    1. Facet name: Tags
    2. Facet name for URLs: tags
    3. Facet source: Article search > Page
    4. Facet field: Tags
    5. Weight: 0

For the facet display settings, leave the defaults for now (you can customize the facet display and advanced behavior later), and click 'Save' to save the facet. For the 'Tags' facet, edit the display and check the 'Translate taxonomy terms' checkbox so the term name (and not the term ID) is displayed.

Go to the Block layout page (/admin/structure/block) to place the two new facet blocks you just created into the Left sidebar (or wherever in the theme you'd like them to be visible), and save the updated block layout.

Profit!

Now, if you visit the /search/articles page, you should see a faceted fulltext Apache Solr-powered search page, and from here you can customize anything as much or as little as you need!

Search API Solr Drupal 8 faceted fulltext Apache Solr search page

Note that there are a few bits and pieces of the UI and functionality that are still being worked out between Search API, Search API Solr, Facets, and other addon modules that extend these base modules' functionality. Some features that many rely upon in Drupal 7's Solr/search ecosystem, like 'pretty paths' for Facets (e.g. /search/articles/tag1/tag2 instead of /search/articles?taxonomy=tid1&taxonomy_2=tid2) are not yet available in Drupal 8, though there are some early ports (e.g. a Facet pretty paths port) for many of these modules in the search ecosystem.

I was pleasantly surprised how robust and complete the core search functionality already is in Drupal 8; with Drupal 8.1.0 just released, and more and more companies beginning to move to Drupal 8 as their core platform, I think we'll see more of these addon modules make their way to a stable release.
