Apr 04 2019
By Julia Gutierrez

DrupalCon 2019 is heading to Seattle this year, and there’s no shortage of exciting sessions and great networking events on the schedule. We can’t wait to hear from some of the experts out in the Drupalverse next week, and we wanted to share a few of the sessions we’re most excited about.

Adam is looking forward to:

Government Summit on Monday, April 8th

“I’m looking forward to hearing what other digital offices are doing to improve constituents’ interactions with government so that we can bring some of their insights to the work our agencies are doing. I’m also excited to present on some of the civic tech projects we have been doing at MassGovDigital so that we can get feedback and new ideas from our peers.”

Bryan is looking forward to:

1. Introduction to Decoupled Drupal with Gatsby and React

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 6B | Level 6

“We’re using Gatsby and React today to power Search.mass.gov and the state’s budget website, and Drupal for Mass.gov. Can’t wait to learn about Decoupled Drupal with Gatsby. I wonder if this could be the right recipe to help us make the leap!”

2. Why Will JSON API go into Core?

Time: Wednesday, April 10th from 2:30 pm to 3:00 pm

Room: 612 | Level 6

“Making data available in machine-readable formats via web services is critical to open data and to publish-once / single-source-of-truth editorial workflows. I’m grateful to Wim Leers and Mateu Aguilo Bosch for their important thought leadership and contributions in this space, and eager to learn how Mass.gov can best maximize our use of JSON API moving forward.”

I (Julia) am looking forward to:

1. Personalizing the Teach for America applicant journey

Time: Wednesday, April 10th from 1:00 pm to 1:30 pm

Room: 607 | Level 6

“I am really interested in learning how Teach for America implemented personalization and integrated across applications to bring applicants a consistent look, feel, and experience when applying for a Teach for America position. We have created Mayflower, Massachusetts government’s design system, and we want to learn what a single sign-on for different government services might look like and how we might use personalization to improve the experience constituents have when interacting with Massachusetts government digitally.”

2. Devsigners and Unicorns

Time: Wednesday, April 10th from 4:00 pm to 4:30 pm

Room: 612 | Level 6

“I’m hoping to hear if Chris Strahl has any ‘best-practices’ and ways for project managers to leverage the unique multi-skill abilities that Devsigners and unicorns possess while continuing to encourage a balanced workload for their team. This balancing act could lead towards better development and design products for Massachusetts constituents and I’d love to make that happen with his advice!”

Melissa is looking forward to:

1. DevOps: Why, How, and What

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 602–604 | Level 6

“Rob Bayliss and Kelly Albrecht will use a survey they released, as well as some other important approaches, to elaborate on why DevOps is so crucial to technological strategy. I took the survey back in November 2018, and I want to see the results. This presentation will help me identify whether we should change our process to better serve constituents.”

2. Advanced Automated Visual Testing

Time: Thursday, April 11th from 2:30 pm to 3:00 pm

Room: 608 | Level 6

“In this session Shweta Sharma will cover which automated visual testing tools are currently available and how they compare. I am excited to gain more insight into automated visual testing for faster releases, so we can identify any gotchas and improve our releases for Mass.gov users.

P.S. Watch a presentation I gave at this year’s NerdSummit in Boston, and stay tuned for a blog post on some automation tools we used at MassGovDigital coming out soon!”

Lastly, we really hope to see you at our presentations.

We hope to see old friends and make new ones at DrupalCon 2019, so be sure to say hi to Bryan, Adam, Melissa, Lisa, Moshe, or me when you see us. We will be at booth 321 (across from the VIP lounge) on Thursday, giving interviews and chatting about technology in Massachusetts; we hope you’ll stop by!

Jan 30 2019

Why not just use the .gitignore file?

By Kaleem Clarkson

Photo by Tim Wright on Unsplash

As many of you know, I am a huge Pantheon hosting fanboy, and I can still remember being blown away during the beta launch that I had three different environments out of the box: dev, test, and live. Another great service they added recently is that all sites receive SSL certificates automatically; all you have to do is redirect all traffic to use HTTPS. In Drupal 8 they suggest doing this in your settings.php file.
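For reference, the redirect they document looks roughly like this. This is a sketch from memory of Pantheon’s published settings.php example, so check their current documentation for the exact version:

```php
<?php

// settings.php (sketch): redirect all Pantheon web traffic to HTTPS.
// The PANTHEON_ENVIRONMENT and HTTP_USER_AGENT_HTTPS checks follow
// Pantheon's documented pattern; verify against their current docs.
if (isset($_SERVER['PANTHEON_ENVIRONMENT']) &&
    $_SERVER['HTTPS'] === 'OFF' &&
    php_sapi_name() !== 'cli') {
  if (!isset($_SERVER['HTTP_USER_AGENT_HTTPS']) ||
      $_SERVER['HTTP_USER_AGENT_HTTPS'] !== 'ON') {
    // Permanent redirect to the same path over HTTPS.
    header('HTTP/1.0 301 Moved Permanently');
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    exit;
  }
}
```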

After adding the redirect code everything works great, until you fire up your local environment (I am currently using Lando) and get a blank screen. After further investigation, you notice it’s the redirect to HTTPS that is causing the issue. My first thought was to make sure my settings.local.php file was being used correctly, but for the life of me I could not get that file to override the redirect code in my settings.php file. If you are reading this and have a better idea on how to accomplish this, let me know in the comments :)

My next thought was to simply add the settings.php file to my .gitignore file, but when I went to my production website I was prompted to reinstall my Drupal site. When a file is ignored, the repo pretends it doesn’t exist, so Drupal was telling me to reinstall. Whoooops, my production site kind of needs this file! So I thought to myself,

How can I ignore my settings.php file in my local repo but still have the original file on production?

After attending Google University for 10 minutes, I stumbled upon a Medium post by Ian Gloude regarding the git update-index command. In their article “Git skip-worktree and how I used to hate config files,” there is a great explanation of the concept, but for me the lightbulb really went off when reading the Git documentation hint: “see also git-add[1] for a more user-friendly way to do some of the most common operations on the index.” Basically, git update-index tells Git what to watch in your repo.

Now that we understand what git update-index does, the real magic happens with the options you can add to the command. In this case, the option Ian Gloude suggested is --skip-worktree. The Git documentation explains that the skip-worktree bit tells the index to assume the file is unchanged from this point on, regardless of whether there is an actual change. So what does this mean for us? It means you can change the file in your local environment while the original file on your production server remains unchanged.

Here is the command I use prior to uncommenting the Pantheon redirect code.

git update-index --skip-worktree web/sites/default/settings.php

When I need to make some changes to the production settings.php file I can tell Git to watch the file again with this command.

git update-index --no-skip-worktree web/sites/default/settings.php
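To see the effect end to end, here is a self-contained demo in a throwaway repository (the paths mirror the Drupal layout above; nothing here touches your real project):

```shell
# Demo: after --skip-worktree, a local edit is invisible to Git;
# after --no-skip-worktree, Git sees it again.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
mkdir -p sites/default
echo '<?php // original' > sites/default/settings.php
git add . && git commit -qm 'Add settings.php'

git update-index --skip-worktree sites/default/settings.php
echo '// local-only redirect tweak' >> sites/default/settings.php
git status --porcelain          # prints nothing: the change is hidden from Git

git update-index --no-skip-worktree sites/default/settings.php
git status --porcelain          # now shows the file as modified
```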

Anyway, I hope this helps you keep your local and production environments running smoothly while maintaining your settings differently.

Dec 28 2018

Themes improperly check renderable arrays when determining visibility

By Kaleem Clarkson

Photo by Hello I'm Nik on Unsplash

One of the many great advantages of being part of an open source project is that so many smart people out there are willing to contribute their time for the betterment of the project. This ability to crowdsource bugs and feature requests, which rarely stump the community for long, is what makes Drupal such a powerful application.

While rare, sometimes the community finds a bug that is very difficult to solve. Let me introduce you to [#953034] Themes improperly check renderable arrays when determining visibility.

I was first introduced to this bug while trying to add a view block in the left sidebar. When the view was empty I expected the block and the sidebar to not be displayed. As you can see below, while the block was empty the sidebar was still being rendered.

The sidebar is still being displayed.

I then googled and stumbled upon another issue, Empty view causes region to be displayed, which was exactly what I was looking for, but I noticed it was marked as a duplicate and linked to [#953034] Themes improperly check renderable arrays when determining visibility. This bug was reported against Drupal 7 core on October 26, 2010. The issue has over 310 comments and 230 followers.

You can really tell the severity and complexity of an issue when you see some of the brightest Drupal contributors making suggestions and striking out. They include, but are not limited to:
bleen, chx, Cottser, Crell, DamienMcKenna, EclipseGc, Fabianx, Jeff Burnz, jenlampton, joachim, joelpittet, JohnAlbin, lauriii, markcarver, mdrummond, moshe weitzman, mpotter, samuel.mortenson, tim.plunkett, webchick, Wim Leers, xjm.

While I am not a backend developer, I felt I could still help by highlighting a major issue, so that someone either inside or outside the community might help find a solution.

Please remember to read the complete issue before commenting, as many people have already suggested fixes but have run into roadblocks.

Oct 21 2018

Wouldn’t it be nice if you could add any block you want to your paragraphs?

By Kaleem Clarkson

Oct 20, 2018

In years past, layout in Drupal was in the hands of front-end developers, but over time various modules were developed that gave site builders the ability to adjust the layout. An improvement, yes, but there still wasn’t a clear-cut option that empowered content editors to alter the layout during the editorial process.

Look out! Here comes the Paragraphs module. This module has been taking the Drupal community by storm because it allows content editors to add pre-designed components, giving each page the option of a different layout. One of the limitations of the Paragraphs module is that each paragraph can only be used once, and only on the node you are editing. This means you can’t re-use a common paragraph such as a call-to-action block, email signup, or contact form, so you end up duplicating a lot of work if you want the same block on numerous pages. While the Drupal community has been working to solve this by allowing the re-use of paragraphs, there are still plenty of situations where you want to insert custom blocks, views, or system blocks such as the site logo or login block.

How do you allow your site editors to add re-used blocks into their content during the editorial process?

Let me introduce you to the Block Field module, maintained by the one and only Jacob Rockowitz (you know, the Webform guy), so you can be assured that the code follows best practices and that there will be support. The Block Field module allows you to reference any block regardless of where it comes from, and, best of all, you don’t have to create some hidden region in your theme for the blocks to be rendered.

There are plenty of awesome articles out there that explain how to use Paragraphs, so I won’t get into that. To follow along with my steps, be sure to have downloaded and enabled both the Paragraphs and Block Field modules.

Steps to Add Blocks to Paragraphs

  1. Download and enable the Paragraphs and Block Field modules.
  2. Create a paragraph type called Block Reference (or whatever name you want).
  3. Add a new field by selecting the Block (plugin) field type from the dropdown, and save it.
  4. Go to Manage display and set the label to hidden. 
    I always forget this step and then scratch my head when I see the Block Reference field label above my view’s title.
  5. Go back to your content type that has the paragraph reference field and ensure the Block Reference paragraph type is enabled. 
    Creating the content type with the paragraph reference field is not covered in this tutorial.
  6. When adding or editing content with a paragraph reference field, add the Block Reference paragraph type, select the block you would like to reference from the dropdown, hit save, and watch the magic happen.

In conclusion, it does feel a little scary giving content editors this much freedom, so it will be imperative that all views and custom blocks have descriptive names so that editors can clearly identify which blocks to reference. Overall, I feel this is a good solution for referencing existing blocks that can save a lot of time, and it really unleashes the power of the Paragraphs module. The Drupal community continues to amaze me!

Mar 07 2018

So DrupalCon Europe is out for 2018. But that does not mean an EU-level event cannot exist to bind the community beyond the specialization of DevDays, Frontend United, CxO, GovDays, and all the DrupalCamps. Drupal Europe is that event, and to support the community that wants to prove a large Drupal conference can reasonably happen after all the trouble the Drupal Association had with it, the best way is to register for the conference.


So join me there :-)

Drupal Europe early supporter badge

Be seeing you.

Jan 27 2018

The problem: XDebug doesn't work for Composer scripts

PhpStorm is quite convenient for debugging scripts with XDebug (do you support Derick for giving us XDebug?): just add a "Run/Debug configuration", choose the "PHP Script" type, give a few parameters, and you can start debugging your PHP CLI scripts, using breakpoints, evaluations, etc.

Wonderful. So now, let's define such a configuration to debug a Composer script, say a Behat configuration generator built from site settings for a current Drupal 8 project. Apply the configuration, run it in debug mode, and...

...PhpStorm doesn't stop: the script runs and ends, and all breakpoints are ignored. How can we actually use breakpoints in the IDE?

Broken configuration to debug a Composer script

The diagnostic

Let's see how Composer actually starts:

#!/usr/bin/env php
<?php

if (PHP_SAPI !== 'cli') {
    echo 'Warning: Composer should be invoked via the CLI version of PHP, not the '.PHP_SAPI.' SAPI'.PHP_EOL;
}

// [...snip...]

use Composer\Console\Application;

error_reporting(-1);

// Create output for XdebugHandler and Application
$output = Factory::createOutput();
$xdebug = new XdebugHandler($output);
$xdebug->check();
unset($xdebug);
// [...more...]

Now, we can step through this, and notice that the Composer command actually runs during the $xdebug->check(); call. What's going on?

// In vendor/composer/composer/src/Composer/XdebugHandler.php
public function check()
{
    $args = explode('|', strval(getenv(self::ENV_ALLOW)), 2);

    if ($this->needsRestart($args[0])) {
        if ($this->prepareRestart()) {
            $command = $this->getCommand();
            $this->restart($command);
        }
    }
    // [...more...]
}

Here's the crux of the problem: in order to alleviate the extreme slowdown caused by XDebug when running Composer commands, any time Composer runs, it checks for the presence of Xdebug in the running PHP configuration. If it finds it (the needsRestart() check), it rebuilds a command line with a different PHP configuration ($command = $this->getCommand();), one which does not include the Xdebug extension, and runs it in the $this->restart($command); call, which actually executes the new command using the passthru mechanism.

protected function restart($command)
{
    passthru($command, $exitCode);
    // [...more...]
}

Now, since this command no longer runs with Xdebug, there is no way a debugging tool can use it. So how does one fix the problem?

The solution

Let's go back to the beginning of this check() method:

class XdebugHandler
{
    const ENV_ALLOW = 'COMPOSER_ALLOW_XDEBUG';

    // [...snip...]

    public function check()
    {
        $args = explode('|', strval(getenv(self::ENV_ALLOW)), 2);

        if ($this->needsRestart($args[0])) {
            // [...more...]

So the call to needsRestart($args[0]) actually depends on the value of the XdebugHandler::ENV_ALLOW constant, i.e. COMPOSER_ALLOW_XDEBUG. Let's see.

private function needsRestart($allow)
{
    if (PHP_SAPI !== 'cli' || !defined('PHP_BINARY')) {
        return false;
    }

    return empty($allow) && $this->loaded;
}

So, if COMPOSER_ALLOW_XDEBUG is not empty, needsRestart($allow) will return FALSE. In that case, as shown above, Composer won't build the new command, and will just proceed with our script and allow our debugging to work.

In practice, this means all we need is to pass a COMPOSER_ALLOW_XDEBUG environment variable with a non-empty value in the PhpStorm run configuration, like this.

Debug configuration for Composer command in PhpStorm

Debugging a Composer script in PhpStorm
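Outside PhpStorm, the same fix works from a terminal by exporting the variable before invoking Composer (the script name below is a hypothetical placeholder):

```shell
# Export a non-empty value so Composer's XdebugHandler skips the restart.
export COMPOSER_ALLOW_XDEBUG=1
# composer run-script generate-behat-config   # hypothetical script name
# Any child process launched from this shell now sees the flag:
sh -c 'echo "COMPOSER_ALLOW_XDEBUG=$COMPOSER_ALLOW_XDEBUG"'
```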

Problem solved! BTW, did you notice this was explained in the Composer documentation, in vendor/composer/composer/doc/articles/troubleshooting.md?

## Xdebug impact on Composer

To improve performance when the xdebug extension is enabled, Composer automatically restarts PHP without it.
You can override this behavior by using an environment variable: `COMPOSER_ALLOW_XDEBUG=1`.

Yeah, me neither, until I solved the problem as described. So let's all just keep in mind to RTFM.

Jul 20 2017
By xpersonas

Jun 8, 2017

Simple Style Guide was created to be a fully flexible style guide for Drupal developers and site builders.

I’ve been using style guides for a while now. I can’t imagine developing any site without one, regardless of size. The idea behind this module was to enable devs and site builders to create a fully functional, living, style guide with only the elements you want and nothing more.

What I wanted was the ability to create one in a fast, efficient manner. No elements are required. No elements are added by default. And all this functionality is fully accessible to site builders without having to write a single line of code.

Style Guide Settings

Default Patterns
You can choose from a set of very basic default patterns such as headings, text, lists, blockquote, horizontal rules, table, alerts, breadcrumbs, forms, buttons, and pagination. Chosen elements will appear on the style guide page. Choose as many default options as you like, or choose none.

Color Palette
You also have the ability to create a color palette by adding a hex color code, a class name, and usage descriptions (if desired).

Live Example of a Color Palette

Custom Patterns
You can also create custom patterns. Custom patterns can be any chunk of html that you want. There are no restrictions.

Add Any Custom Patterns

With these tools, I hope you will be able to create a very flexible style guide/pattern library. To view a live example of a working style guide, you can check out this page:


Jul 20 2017
By xpersonas

Jul 7, 2017

Simple Password Reveal alters password fields on user login and user edit forms to show plain text by default, while also adding a checkbox for concealing the password as needed.

Rather than creating friction for a user to show a password every time by clicking a checkbox, the password is revealed by default. In my own experience, I generally prefer password fields to be plain text almost all the time. It’s only when in public or during a presentation that I want to conceal passwords. And I’m not the only one…

There is another module that provides similar functionality; however, Simple Password Reveal takes a different approach from the Password Toggle module, which uses JavaScript to add a checkbox to each password field in any and all forms. Password Toggle also has a Drupal 7 version.

This module attempts to keep things simple by concentrating solely on the user login and user edit pages. If you need this feature on custom forms, on forms loaded by ajax, or for a Drupal 7 site then this module may not be for you.

Simple Password Reveal also uses form alters to add one checkbox per form, rather than one checkbox per input. So, for example, when you are on the user edit page you have three password fields — current password, new password, and confirm password. Rather than having a checkbox for each password field, this module only has one.

Jul 20 2017
By xpersonas

Jul 7, 2017

The Simple MailChimp module for Drupal 8 intends to be the easiest way to add MailChimp integration to your site.

There is already a MailChimp module for Drupal, of course. There are several of them.

The main MailChimp module itself does a lot…

The MailChimp module allows users to manage email marketing efforts through MailChimp’s service. The module supports the creation and sending of campaigns, management of email lists and individual subscribers, and offers standalone subscribe and unsubscribe forms.

The problem with these modules is that they either do too much, or they are too specific in their use case. What I often need on my sites, more than anything else, is just a checkbox at the bottom of a form that will allow me to subscribe users to my MailChimp list if they choose to do so. Most likely, I need a checkbox at the bottom of many forms.

I don’t need to manage campaigns, lists, etc. from within my Drupal site. I just need a checkbox. Maybe a few options (MailChimp groups), but that’s it. And, again, I need it on all forms. I need it on my subscription form of course, but I also need it on warranty registrations, user registrations, webforms, or any other form that may be included with my site.

This is where the Simple MailChimp module comes in.

Example form with “group” options.

Again, this module is not meant in any way to be as robust as the MailChimp module. You can’t manage subscribers. You can’t work with lists. It simply gives you the ability to add a checkbox for subscribing to a single MailChimp list, and also allows a field for one interest group option.

To configure this module, you will need your MailChimp API key, list ID, and a mapping of fields. See screenshot below. Under “Enabled Forms” you would enter one Drupal form id per line, and then map the email field (and other fields) to the appropriate merge fields.

Simple MailChimp supports most MailChimp field types:

  • text
  • zip_code
  • number
  • address
  • date
  • phone
  • birthday
  • website

The idea of this module is to be simple. It does not make any assumptions. It does not provide any public facing forms. It’s simply for adding a checkbox to existing forms.

Download Simple MailChimp and try it out.

Jul 20 2017
By xpersonas

Jul 7, 2017

One of my pet peeves is searching for a local event and finding details for that event… 3+ years ago.

Many Drupal sites feature some sort of event node type: really anything with a start date and, likely, an end date. The problem is, most developers don’t take into account whether that content should live on once the end date has come and gone.

Perhaps, in some instances, keeping that content on your site makes sense. In most cases though, it does not.

For instance, my 3-year-old was really into dinosaurs. I knew there was a dinosaur exhibit coming to town, but I didn’t quite remember the name. Searching online provided quite a few local results, and many of those results were for events in the past.


  • Discover the Dinosaurs (06/21/2014) (event has since been unpublished!)
  • Dino Dig! (06/02/2015) (an event from 2 years ago)
  • Discover the Dinosaurs Unleashed (02/18/????)

Sometimes sites will even have past events ranking higher in search results than upcoming events.

There’s a whole other blog post I could write about how useful it is to have the year accompanying the day and month on web content — particularly tech blog posts. Was this written in February of this year or 2006? How can I know?!?

The Drupal Solution

For Drupal sites, there’s a relatively easy fix. It requires a small custom module and the contributed Scheduler module.

The Scheduler module is simple and great. Enable it for your content type and turn on, at the very least, the unpublish setting. Once that is set up, create a custom module and implement hook_entity_presave().

This code is pretty self-explanatory. All I’m doing is checking to be sure it’s an event node type that’s being saved and, if so, finding the start and end date values to use when setting the “unpublish_on” field.
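The gist boils down to something like the sketch below. The module name, the "event" bundle, and the "field_event_date" date-range field are placeholder names for illustration; the unpublish_on field is the one Scheduler provides. This is framework code, so it only runs inside a Drupal 8 site:

```php
<?php

// mymodule.module — a sketch of the approach described above.
// "event" and "field_event_date" are hypothetical names; adjust to
// match your own content type and fields.

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 */
function mymodule_entity_presave(EntityInterface $entity) {
  // Only act on event nodes.
  if ($entity->getEntityTypeId() !== 'node' || $entity->bundle() !== 'event') {
    return;
  }
  if (!$entity->hasField('field_event_date') || $entity->get('field_event_date')->isEmpty()) {
    return;
  }
  $item = $entity->get('field_event_date')->first();
  // Fall back to the start date when no end date was entered.
  $date = !empty($item->end_value) ? $item->end_date : $item->start_date;
  if ($date) {
    // Schedule the node to unpublish one day after the event ends.
    $entity->set('unpublish_on', $date->getTimestamp() + 86400);
  }
}
```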

You’ll of course have to make sure your node type and field names match up.

Once that’s set up, any time an event is saved, your node is scheduled to unpublish one day after the end date.

If you have a Drupal 7 site, this same idea can be applied. The code in the hook_entity_presave() will be a bit different.

I wish I could start a massive movement to help clean up web content that should have been unpublished or removed long ago. Until then, hopefully this article finds a few devs who can ensure their site isn’t one of those sending out poor results.

Jan 14 2017

The problem: packages.drupal.org broken

I was starting my weekly work on the Drupal GraphQL module with the customary composer update --prefer-source -vvv command, ready to watch Composer spit out some hundred lines while I sipped my coffee, but this time something turned out to be wrong:

OK, so packages.drupal.org is down for now. How can we work around this?

The diagnostic

By default, most of my Drupal 8 projects (aren't yours?) are either based on drupal-composer/drupal-project for actual projects, or on a slightly modified drupal/drupal when working on Drupal core or on contrib modules like GraphQL. This project followed the latter model, done by adding specific repositories entries to the core composer.json, like:

    "repositories": {
        "fgm_graphql": {
            "type": "vcs",
            "url": "https://github.com/FGM/graphql-drupal"
        },
        "drupal": {
            "type": "composer",
            "url": "https://packages.drupal.org/8"
        }
    }
The explicit vcs entry on top enables me to work on my custom fork of the module (from which I send pull requests); no problem here. The issue is with the bottom one, used to download the other packages in this project, specifically devel.

When working with such composer repositories, what Composer does is fetch a packages.json file below the url parameter, in this case https://packages.drupal.org/8/packages.json. That file is a list of data providers, basically one more level of metadata about available repositories and distributions (releases), looking like this:

     [ ...snip ...]

Composer will then download from each of these providers in turn, hence the URL displayed as returning a 404 at the top of this story. Sure enough, manually checking the URL returned a 404, even with a cache-buster query added.

Indeed, the issue was already mentioned on IRC in #drupal: what could be done at this point, without being able to touch packages.drupal.org itself?

The solution: skip one layer

Actually, the answer is already present in the existing repositories clause: vcs-type repositories do not need a "directory"-type service like Packagist or packages.drupal.org, because in essence what these directories do is provide a way to locate sources and dists. A vcs repository provides the same service, with the limitation that it has to be listed for each project. So let us skip the Composer directory and list devel's repository directly:

    "repositories": {
        "fgm_graphql": {
            "type": "vcs",
            "url": "https://github.com/FGM/graphql-drupal"
        },
        "devel": {
            "type": "vcs",
            "url": "https://git.drupal.org/project/devel.git"
        }
    }

Now, a composer require drupal/devel --prefer-source -vvv works normally, no longer needing to parse the broken directory:

Reading ./composer.json
Loading config file /Users/fgm/.composer/auth.json
Loading config file ./composer.json
Executing command (/Users/fgm/.composer/cache/vcs/https---git.drupal.org-project-devel.git/): git show-ref --tags --dereference
Executing command (/Users/fgm/.composer/cache/vcs/https---git.drupal.org-project-devel.git/): git branch --no-color --no-abbrev -v
Executing command (/Users/fgm/.composer/cache/vcs/https---git.drupal.org-project-devel.git/): git branch --no-color
Executing command (/Users/fgm/.composer/cache/vcs/https---git.drupal.org-project-devel.git/): git show '8.x-1.x':'composer.json'
Executing command (/Users/fgm/.composer/cache/vcs/https---git.drupal.org-project-devel.git/): git log -1 --format=%at '8.x-1.x'
Reading composer.json of drupal/devel (5.x-0.1)
Skipped tag 5.x-0.1, invalid tag name
Reading composer.json of drupal/devel (8.x-1.x)
Reading /Users/fgm/.composer/cache/repo/https---git.drupal.org-project-devel.git/70f62fd0773082a1a4305c6a7c2bccc649bc98a2 from cache
Importing branch 8.x-1.x (dev-8.x-1.x)

Time to return to actual work :-)

Dec 04 2016
OK, so you know the content has almost nothing to do with the pseudo-clickbait-y title, right? Well, actually it has: this is about the single most useful command in Drupal development. Guess which?

Keep calm and flush cache

If you've done any Drupal development in the last ten years or so, you have probably noticed that the single most often used command in your work process is clearing the cache, be it in the Drupal UI at admin/config/development/performance (or its earlier incarnations), or hopefully with drush cr in Drupal 8 or drush cc all in earlier versions.

The thing is, while you probably switched from the web UI to Drush years ago, it's still a minor annoyance to have to switch to a terminal, press the ▲ / ↵ combo, and switch back to PhpStorm. Or to switch from the editor to the Terminal view. Isn't there a better way?

You bet there is: as soon as I asked myself I knew it had to be easy, so let's see how it's done in PhpStorm in three steps:

  1. create an external tool
  2. map a key to it
  3. trick: there's no third step, it's already done

drush cr in PhpStorm 2016.3

Here is, in detailed form, how to implement the two steps above. Note that these instructions assume (a) you are working in a Drupal 8 project opened one level above the Drupal docroot, as is typical in Composer-built projects, and (b) Drush is local to the project, as it probably should be. Adjust according to your actual situation.

Open the tool dialog:
  • macOS: PhpStorm / Preferences / Tools / External Tools / "+"
  • Linux: File / Settings / Tools / External Tools / "+"

Fill in the dialog:
  • Name: Drush CR. Or whatever...
  • Description: Keep calm and flush cache! ...fits your fancy.
  • Group: External tools. Although the widget looks like a selector, you needn't stick to existing groups, but may create one just by typing its label.
  • Synchronize files after execution: checked. If you haven't excluded sites/default/files you may be interested in the changes wrought by the command. Otherwise, keep it unchecked to reduce IDE work.
  • Open console: checked. You normally want to see if something happens during the flush. Don't worry: only one console gets used for all calls to the tool.
  • Show in: as you prefer. Probably at least in editor views.
  • Program: $ProjectFileDir$/vendor/bin/drush. Assuming you are working on a Composer-built project.
  • Parameters: cr. Or cc all for D7 projects. Yes, you can use Composer for these too.
  • Working directory: $ProjectFileDir$. Especially important for projects built on drupal-composer/drupal-project, where Drupal is not the project root: you want to run Drush from the Drupal root.

Open the keymap dialog:
  • macOS: PhpStorm / Preferences / Keymap
  • Linux: File / Settings / Keymap

Search for your tool (Drush CR, or whatever you typed in the Name field above) and map a key of your choice: I used ⌘D⌘R. Now you know what happened when I hit ⌘D⌘R.
Aug 10 2016
Aug 10

So your site has been hacked? Or, more likely, you wonder what to do when it eventually happens: the video of my "Life after the hack" session is now available, covering everything from initial diagnosis to the return online, with a healthy dose of forensics along the way.

The slides are also available for easier access, though of course without the extra speaker comments:

Initially designed for the DrupalCon Dublin audience, the session was given earlier at DevDays Milan in June, so given the smaller audience there, you may have missed it.

Aug 04 2016
Aug 04

When you use Drush, especially in crontabs, you may sometimes be bitten by RAM or duration limits. Of course, running Drush with the "-d" option will provide this information, but it will only do so at the end of an annoyingly noisy output debugging the whole command run.

On the other hand, just running the Drush command within a time command won't provide fine-grained memory reporting. Luckily, Drush implements hooks that make acquiring this information easy, so here is a small gist you can use as a standalone Drush plugin or add to a module of your own:
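The gist itself lives externally; as a rough sketch of the idea (hypothetical function names, with the Drush-specific plumbing reduced to a plain echo), the measuring side could look like this:

```php
<?php

// Hypothetical sketch: record a starting point, then report elapsed time
// and peak RAM when the command ends. In a real Drush plugin, the start
// would live in hook_drush_init(), the report in an exit/shutdown hook,
// and output would go through drush_log() instead of echo.

function mymodule_perf_start(array &$state) {
  $state['start'] = microtime(TRUE);
}

function mymodule_perf_report(array $state) {
  return sprintf('Command took %.3f s, peak memory %.1f MB',
    microtime(TRUE) - $state['start'],
    memory_get_peak_usage(TRUE) / 1048576);
}

// Usage, wrapped around the work a cron-launched command performs.
$state = [];
mymodule_perf_start($state);
// ... command body runs here ...
echo mymodule_perf_report($state), "\n";
```

The point of hooking into Drush rather than wrapping the call in time(1) is exactly this access to memory_get_peak_usage() from inside the process.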

Jul 03 2016
Jul 03

On behalf of all contributors to the MongoDB module suite for Drupal over the years, I am pleased to announce the 8.x-2.0-alpha1 release of the MongoDB package for Drupal 8, six years after we started this project on Drupal 6.

This release is the first step to an initial stable release of the MongoDB package for Drupal 8, containing:

  • mongodb, a module exposing the new PHP library for MongoDB as Symfony services to a Drupal 8.x instance. It is designed as a minimal and consistent connection layer for all modules targeting MongoDB on Drupal 8.x, be they contributed or bespoke.
  • mongodb_watchdog a PSR-3 logger storing event data in MongoDB. On top of the features already present in 6.x and 7.x versions, it introduces a per-request report showing all events logged during a request, in order.


The 8.x-2.x series is a reboot of this package: the initial work on the Drupal 8 version, in branch 8.x-1.x, was a moonshot project aiming at providing a fully NoSQL version of Drupal. Although it made good progress during the first years of Drupal core 8.x development, thanks especially to sponsorship from MongoDB Inc., it has shown little progress since 2015 and is not compatible with any stable version of Drupal 8 core.

To leave a chance for that ambitious branch to resume development, 8.x-2.x has been started as a lower-risk version focused on:

  • Not only SQL: supporting MongoDB on top of a standard Drupal distribution, rather than providing a full SQL-less CMS
  • Use of newer code like MongoDB 3.x series, the new mongodb PHP extension, and PHP 7.x, rather than continued support for older MongoDB versions, the legacy mongo extension, and PHP 5.5/5.6
  • Providing an actual usable release in the wake of Drupal Dev Days Milan 2016, rather than an ongoing dev branch with little visibility
  • Providing new services as ongoing incremental releases rather than waiting for the whole set to be available


The MongoDB package for Drupal 8 is the work of multiple contributors: Karoly Negyesi (chx), Frédéric G. MARAND (fgm), Janez Urevc (slashrsm), Zsolt Tasnádi (skipyT), Marc Ingram (marcingy), Rok Žlender (Rok Žlender), Jakob Perry (japerry), Philippe Guillard (pguillard), ...and the many contributors to earlier versions, without whom this would not exist today.

Jun 30 2016
Jun 30

These are the slides of the presentation I gave last week at DrupalDevDays Milan about the ways using Queue API can help you speed up Drupal sites, especially in connected mode.

This is a version, updated for Drupal 8, of the presentation we gave at DrupalCon Barcelona with Yuriy Gerasimov.
Oct 23 2015
Oct 23

One of the big trends during the Drupal 8 creation has been the replacement of info hooks by two main mechanisms: annotations and YAML files. In that light, hook_drush_command(), as the CLI equivalent of the late hook_menu() (itself replaced by various YAML files), looks like a perfect candidate for replacement by a commands section in some mymodule.drush.yml configuration file. It turns out to be incredibly easy to achieve. Let's see how to kill a few hundred lines of code real fast!

The goal

Just like its hook_menu() web counterpart, hook_drush_command() defines controllers, the main difference being that it maps them to CLI commands instead of web paths. Its structure is that of a very basic info hook: a nested array of strings and numbers, which maps neatly onto a YAML structure. Consider the canonical sandwich example from the Drush docs.

The hook_drush_command() version:

<?php
function sandwich_drush_command() {
  $items = array();

  // The 'make-me-a-sandwich' command.
  $items['make-me-a-sandwich'] = array(
    'description' => "Makes a delicious sandwich.",
    'arguments' => array(
      'filling' => 'The type of the sandwich (turkey, cheese, etc.). Defaults to ascii.',
    ),
    'options' => array(
      'spreads' => array(
        'description' => 'Comma delimited list of spreads.',
        'example-value' => 'mayonnaise,mustard',
      ),
    ),
    'examples' => array(
      'drush mmas turkey --spreads=ketchup,mustard' => 'Make a terrible-tasting sandwich that is lacking in pickles.',
    ),
    'aliases' => array('mmas'),
    // No bootstrap at all.
    'bootstrap' => DRUSH_BOOTSTRAP_NONE,
  );

  return $items;
}

The equivalent mymodule.drush.yml version:

commands:
  # The 'make-me-a-sandwich' command.
  make-me-a-sandwich:
    description: "Makes a delicious sandwich."
    arguments:
      filling: 'The type of the sandwich (turkey, cheese, etc.). Defaults to ascii.'
    options:
      spreads:
        description: 'Comma delimited list of spreads.'
        example-value: 'mayonnaise,mustard'
    examples:
      'drush mmas turkey --spreads=ketchup,mustard': 'Make a terrible-tasting sandwich that is lacking in pickles.'
    aliases: ['mmas']
    # No bootstrap at all.
    bootstrap: -1

The trick

There is no trick! Well, almost: assuming that simple format for the mymodule.drush.yml file, all it takes is loading it. This can even be entirely generic, like this:

<?php

use Symfony\Component\Yaml\Yaml;

/**
 * Implements hook_drush_command().
 */
function beanstalkd_drush_command() {
  $file = preg_replace('/(inc|php)$/', 'yml', __FILE__);
  $config = Yaml::parse(file_get_contents($file));
  $items = $config['commands'];
  return $items;
}

Since your Drush plugin for module mymodule is named mymodule.drush.inc (or mymodule.drush.php if you write them like me), the name of the YAML file can be deduced from the plugin name. The Symfony Yaml component then parses it into a plain array matching the expected hook_drush_command() structure.

This is the mechanism used by the Drupal 8 version of the Beanstalkd module.

The limitations

You may have noticed a small limitation: hook_drush_command() implementations may need to use constants like DRUSH_BOOTSTRAP_NONE. With such a limited implementation, you need to convert the constants to their values from drush/includes/bootstrap.inc yourself. Should this mechanism be included in Drush at some point, a more evolved implementation would likely translate these constants too.
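As a stopgap, a small post-processing step over the parsed YAML could translate constant names into their numeric values. A minimal sketch (hypothetical helper name; the map below only lists DRUSH_BOOTSTRAP_NONE, whose value of -1 is confirmed by the YAML example above, and would need to be extended from bootstrap.inc):

```php
<?php

// Hypothetical helper: replace string constant names in the parsed YAML
// commands with the numeric values Drush expects.
function mymodule_translate_bootstrap(array $commands, array $map) {
  foreach ($commands as $name => $command) {
    $bootstrap = isset($command['bootstrap']) ? $command['bootstrap'] : NULL;
    if (is_string($bootstrap) && isset($map[$bootstrap])) {
      $commands[$name]['bootstrap'] = $map[$bootstrap];
    }
  }
  return $commands;
}

// Partial map: add the other DRUSH_BOOTSTRAP_* values from bootstrap.inc.
$map = ['DRUSH_BOOTSTRAP_NONE' => -1];
$commands = ['make-me-a-sandwich' => ['bootstrap' => 'DRUSH_BOOTSTRAP_NONE']];
$commands = mymodule_translate_bootstrap($commands, $map);
// $commands['make-me-a-sandwich']['bootstrap'] is now -1.
```

This keeps the YAML free of magic numbers while still feeding Drush the integers it expects.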

Sep 12 2015
Sep 12

UPDATE 2015-12-04: Tinytest is now documented for the upcoming Meteor 1.3 release. See the official README.md. Better to use the official information than my unofficial cheatsheet.

I've been doing a lot more Meteor these days, especially working on Drupal 8 SSO with Meteor, and could not find a reasonably complete and up-to-date list of the Tinytest assertions, so I updated, reordered, and completed the existing gist on the topic.

So here is the Meteor Tinytest cheatsheet: a complete list of assertions and helpers for your test methods.

Aug 27 2015
Aug 27

Autoloading in D8 is much more convenient than in previous versions; however, it still has limitations. One such issue is with hook_requirements(), which is supposed to be present in the module install file, not in the module itself. When the hook is called at runtime for the status report page, the module is loaded and the PSR-4 autoloader works fine. However, when the hook is fired during installation to check whether the module can be enabled, the module is not yet enabled and the autoloader cannot yet find its code, meaning the hook_requirements('install') implementation cannot use namespaced classes from the module, as they will not be autoloadable. What are the solutions?

The bad

The first solution is obvious but painful: since the file layout within the module is known at commit time, it is possible to avoid autoloading by using require_once for each class/interface/trait needed by the code, typically like this:

// Make \Drupal\mymodule\Some\Namespace\Class available in mymodule_requirements().
require_once __DIR__ . '/src/Some/Namespace/Class.php';
// ... and so on for each such class.

The good
But there is a better way: just make the class loader aware of the module being installed:

// Not needed at runtime.
if ($phase === 'install') {
  $module = 'mymodule';
  drupal_classloader_register($module, drupal_get_path('module', $module));
}

While the module is being enabled, its path is already known, so drupal_get_path() can be used. With this little help, the module's autoloaded code becomes available during install, allowing the hook_requirements() implementation to use it.

The ugly

One obvious temptation would be to put the hook_requirements() implementation in the .module file, reasoning that, since it is in the module file, the module must be available when the hook is fired. And indeed, this works for hook_requirements('runtime').

However, during install the module is still not loaded, so the implementation of hook_requirements('install') is simply not found, and the requirements check is silently skipped. Don't do it.

Aug 22 2015
Aug 22

One nice thing during Drupal 7/8 development is the ability, thanks to the Devel module, to get a list of all SQL queries run on a page. As I've been working quite a bit on MongoDB in PHP recently, I wondered how to obtain comparable results in PHP projects using MongoDB. Looking at the D7 implementation, the magic happens in the Database class:

// Start logging on the default database.
define('DB_CHANNEL', 'my_logging_channel');
Database::startLog(DB_CHANNEL);

// Get the log contents, typically in a shutdown handler.
$log = Database::getLog(DB_CHANNEL);

With DBTNG, that's all it takes, and Devel puts it to good use UI-wise. So is there an equivalent mechanism in MongoDB? Of course there is!

The answer turns out to lie in the little-used, optional, third parameter to MongoClient::__construct($server, $options, $driver_options).

These driver options can contain a context, which is a hash in which one valid key happens to be mongodb, whose value is a hash of logging callbacks keyed by loggable events.

Of course, there is the little issue of the PHP documentation not mentioning some of the most useful events, like log_query and log_batchinsert, but once this is cleared up, logging becomes almost as simple as in Drupal:

  1. create a stream context containing these callbacks
  2. pass it to the MongoClient constructor
  3. that's it: events are now directed to your logging callbacks
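In code, the three steps might look like this. A hedged sketch: the callback parameter lists are illustrative, since the context spec for the legacy mongo extension is under-documented, and the connection line is shown commented out because it needs that extension installed:

```php
<?php

// Step 1: create a stream context containing the logging callbacks.
$callbacks = [
  'log_query' => function ($server, $query, $info) {
    error_log('MongoDB query: ' . json_encode($query));
  },
  'log_batchinsert' => function ($server, $documents, $info, $options) {
    error_log(sprintf('MongoDB batch insert of %d documents', count($documents)));
  },
];
$context = stream_context_create(['mongodb' => $callbacks]);

// Step 2: pass it to the MongoClient constructor (requires the legacy
// mongo extension, so it is left commented here).
// $client = new \MongoClient($server, $options, ['context' => $context]);

// Step 3: nothing more; matching events now reach the callbacks.
```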

This turns out not to be so much fun, though, because each of the callbacks has a specific signature, so I wrote a helper package wrapping a set of logging callbacks in a single class and converting all of them to log to a standard PSR-3 logger. Using it, all it takes to send your MongoDB logs to the PSR-3 logger of your choice is something like:

$emitter = new Emitter();
$context = $emitter->createContext();
$client = new \MongoClient($server, $options, ['context' => $context]);

The code is available on Packagist/Composer, as fgm/mongodb_logger, or straight on Github as FGM/mongodb_logger. I'm quite interested in feedback on this one, as we have a pending feature request on this topic for Drupal, so feel free to comment on the Drupal issue, or send pull requests if you feel this is useful but needs some improvements.

Oct 05 2014
Oct 05

[Image: Excerpt from a Drupal 8 menu links tree]

One of the interesting aspects of the revamped menu/links system in Drupal 8 is the fact that menu links are now in easily parseable YAML files, the "(module).links.menu.yml" file in each module, in which each menu link can be bound to its parent link, hopefully producing a tree-like structure.
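As an illustration, a minimal links file might look like this (hypothetical module, route, and parent names):

```yaml
# mymodule.links.menu.yml
mymodule.settings:
  title: 'My module settings'
  description: 'Configure mymodule.'
  route_name: mymodule.settings
  # Binding to the parent link is what produces the tree structure.
  parent: system.admin_config_system
  weight: 10
```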

This provides an easy way to check the consistency of the links graph, by ensuring that links newly defined in your shiny custom module are linked to a valid parent link path, to help with the overall navigability of the site. This can be checked using the Tooling module and its Drush command tolt, as in this example:

  • clone the module to (yoursite)/modules/tooling
  • ensure Drush is working on your site
  • ensure GraphViz (the dot command) is working on your system
  • drush tolt | dot -Tsvg > mysite.svg

That's it: the mysite.svg file should now contain the tree of your menu links, with all orphan links flagged in orange for you to examine, like in the image.

Feb 16 2014
Feb 16

Doctrine Annotations is a central part of the Drupal 8 Plugin API. One of its conceptual annoyances is the need for a separate autoloading process for annotation classes, which in some scenarios leads to duplicated path declarations. Let us see how to remedy this.

The Annotations loading process

Doctrine Annotations are classes, and the whole Doctrine annotations process, like PHP autoloading in general (the single autoload function in plain PHP, the SPL autoload stack otherwise), relies upon a single global, the AnnotationRegistry class, to which annotation classes can be registered.

This registry supports three registration mechanisms, as described in its documentation:

  • direct file registration, with the AnnotationRegistry::registerFile($path) method, which is just a wrapper for require_once $path. This is useful to load annotation files containing multiple annotation classes, like Doctrine ORM's own DoctrineAnnotations.php, without going through multiple autoloads
  • namespace(s) registration, with the AnnotationRegistry::registerAutoloadNamespace($namespace, $path) and AnnotationRegistry::registerAutoloadNamespaces($namespaces) methods, which register autoload namespaces with the annotation autoloader, implemented in AnnotationRegistry::loadAnnotationClass.
  • custom loaders, with the AnnotationRegistry::registerLoader($callable) method, to handle specific scenarios

Implementation in PSR-0/PSR-4 projects with Composer

In most cases, registration will be handled by the namespace-based registration. However, since the annotations autoloader is not tied to a specific autoloading scheme, it needs to receive a path parameter for each namespace it handles. In a typical PSR-0/PSR-4 project with Composer, it can mean something like this:

AnnotationRegistry::registerAutoloadNamespace("Symfony\Component\Validator\Constraints", "$basePath/vendor/symfony/validator");
AnnotationRegistry::registerAutoloadNamespace("MyProject\Annotations", "$basePath/src");

As evidenced by that fragment, this means specifying in code the namespace-to-path mapping already declared in the composer.json file under the psr-0/psr-4 keys, which is a source of extra work and possible errors.

Autoloading error handling

Why is it necessary? To quote the Doctrine docs:

How are these annotations loaded? From looking at the code you could guess that the ORM Mapping, Assert Validation and the fully qualified annotation can just be loaded using the defined PHP autoloaders. This is not the case however: For error handling reasons every check for class existence inside the AnnotationReader sets the second parameter $autoload of class_exists($name, $autoload) to false. To work flawlessly the AnnotationReader requires silent autoloaders which many autoloaders are not. Silent autoloading is NOT part of the PSR-0 specification for autoloading.

Indeed, the requirement for silent autoloaders is absent from PSR-0 and was hotly and repeatedly debated in the PSR-4 discussions (see the "PSR-4: The registered autoloader MUST NOT throw exceptions" thread for an example), so caution is indeed needed in the fully general case.

However, in most cases, the autoloader is known to be silent: Composer's findFile() method is silent, which means this extra caution is not needed in a project where Composer is used for autoloading.

Removing duplication via registerLoader()

So, if we want to avoid re-declaring our namespace-to-path mapping and rely on the existing declarations, what can we do? Create a custom loader and register it with AnnotationRegistry::registerLoader($callable). Since we will want to pass it parameters, and loaders only receive the name of the class to load, we will implement it as a class with the __invoke() magic method.

All this pseudo-loader has to do is return true for classes in the namespace it handles, to claim it actually found the class, and let the default autoloader work its magic later on, when the Annotations DocParser tries to access the annotation classes.

That way, we can parse the example given on the Annotations documentation page without repeating the namespace-to-path mapping.
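The loader shown in the original post is not reproduced here; a minimal reconstruction of the idea might look like this (hypothetical class name; the registration line is commented out since it requires doctrine/annotations to be installed):

```php
<?php

// Pseudo-loader: claims success for any class under the namespace it
// handles, without loading anything itself. The silent Composer
// autoloader actually loads the class later, when the DocParser
// instantiates the annotation.
class AnnotationNamespaceClaimer {
  private $namespace;

  public function __construct($namespace) {
    $this->namespace = rtrim($namespace, '\\') . '\\';
  }

  public function __invoke($class) {
    return strncmp($class, $this->namespace, strlen($this->namespace)) === 0;
  }
}

// Registration reuses the mapping Composer already knows, so nothing is
// duplicated. Requires doctrine/annotations:
// AnnotationRegistry::registerLoader(new AnnotationNamespaceClaimer('MyProject\Annotations'));
```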

That's it: annotation parsing, without duplicated mappings.

Feb 08 2014
Feb 08

A few days ago, while I was writing a bit of Silex code and grumbling at Doctrine DBAL's lack of support for a SQL Merge operation, I wondered if it wouldn't be possible to use DBTNG without the rest of Drupal.

Obviously, although DBTNG is described as having been designed for standalone use ("DBTNG should be a stand-alone library with no external dependencies other than PHP 5.2 and the PDO database library"), in actual use the GitHub DBTNG repo has seen no commit in the last three years, and the D8 version is still not a Drupal 8 "Component" (i.e. decoupled code), but a plain library with Drupal dependencies. How would it fare on its own? Let's give it a try...

Bring DBTNG to a non-Drupal Composer project

Since Composer does not support sparse checkouts (yet?), the simplest way to bring in DBTNG for this little test is to import the code manually and declare it to the autoloader ourselves. Let's start by getting just DBTNG out of the latest Drupal 8 checkout:

# Create a fresh repository to hold the example
mkdir dbtng_example
cd dbtng_example
git init

# Declare a remote
git remote add drupal http://git.drupal.org/project/drupal.git

# Enable sparse checkouts to just checkout DBTNG
git config core.sparsecheckout true

# Declare just what we want to checkout:
# 1. The DBTNG classes
echo core/lib/Drupal/Core/Database >> .git/info/sparse-checkout
# 2. The procedural wrappers, for simplicity in an example
echo core/includes/database.inc >> .git/info/sparse-checkout

# And checkout DBTNG
git pull drupal 8.x

ls -l core/
total 8
drwxrwxr-x 2 marand www-data 4096 févr.  8 09:32 includes
drwxrwxr-x 3 marand www-data 4096 févr.  8 09:32 lib
That's it: the DBTNG classes are now available in core/lib/Drupal/Core/Database. We can now build a composer.json file with PSR-4 autoloading on that code:

{
  "description": "Drupal Database API as a standalone component",
  "license": "GPL-2.0+",
  "autoload": {
    "psr-4": {
      "Drupal\\Core\\Database\\": "core/lib/Drupal/Core/Database/"
    }
  }
}

We can now build the autoloader:

php composer.phar install

Build the demo app

For this example, we can use the traditional settings.php configuration format for DBTNG. Say we store it in app/config/settings.php and point it at a typical Drupal 8 MySQL single-server database:

// app/config/settings.php
$databases['default']['default'] = array(
  'database' => 'somedrupal8db',
  'username' => 'someuser',
  'password' => 'somepass',
  'prefix' => '',
  'host' => 'localhost',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);

At this point our dependencies are ready, so let's write a little "Hello, world" in app/hellodbtng.php. Since this is just an example, we will simply list a table using the DBTNG Select query builder:

// app/hellodbtng.php

// Bring in the Composer autoloader.
require_once __DIR__ . '/../vendor/autoload.php';

// Bring in the DBTNG procedural wrappers.
// @see https://wiki.php.net/rfc/function_autoloading
require_once __DIR__ . '/../core/includes/database.inc';

// Finally, load the DBTNG configuration.
require_once __DIR__ . '/config/settings.php';

$columns = array('collection', 'name', 'value');

$result = db_select('key_value', 'kv')
  ->fields('kv', $columns)
  ->condition('kv.collection', 'system.schema')
  ->range(0, 10)
  ->execute();

foreach ($result as $v) {
  $v = (array) $v;
  $value = print_r(unserialize($v['value']), TRUE);
  printf("%-32s %-32s %s\n", $v['collection'], $v['name'], $value);
}

Enjoy the query results

php app/hellodbtng.php

system.schema                    block                            8000
system.schema                    breakpoint                       8000
system.schema                    ckeditor                         8000
system.schema                    color                            8000
system.schema                    comment                          8000
system.schema                    config                           8000
system.schema                    contact                          8000
system.schema                    contextual                       8000
system.schema                    custom_block                     8000
system.schema                    datetime                         8000

Going further

In real life:

  • a production project would hopefully not be built like this, by manually extracting files from a repo
  • ... and it would probably not use the procedural wrappers, but wrap DBTNG in a service and pass it configuration using a DIC
  • I seem to remember a discussion in which the full decoupling of DBTNG for D8 was considered but postponed as nice-to-have rather than essential for Drupal 8.0.
  • Which means that a simple integration would probably either
    • use the currently available (but obsolete) pre-7.0 version straight from Github (since that package is not available on Packagist, just declare it directly in composer.json as explained on http://www.craftitonline.com/2012/03/how-to-use-composer-without-packagi... ),
    • or (better) do the required work to decouple DBTNG from D8 core and submit a core patch for that decoupled version, and use it from the newly-independent DBTNG Component.
Jan 04 2014
Jan 04

Yesterday, Seldaek committed PSR-4 support to Composer, as his New Year's Day gift to the PHP community.

So, since I always supported PSR-4 in the FIG discussions, and after the tl;dr discussion about autoloading in Drupal 8, I jumped at the occasion and converted the code base for PlusVite to PSR-4 to get an idea of how PSR-4 "felt" in practice.

As the documentation explains, to convert from PSR-0 to PSR-4 and shorten paths, the change is extremely simple: just edit your composer.json and change it like this:

  • PSR-0: class \Foo\Bar\Baz is in src/Foo/Bar/Baz.php, and composer.json contains "psr-0": { "Foo\\Bar\\": "src/" }
  • PSR-4 with a PSR-0 layout: the class stays in src/Foo/Bar/Baz.php, and composer.json contains "psr-4": { "Foo\\Bar\\": "src/Foo/Bar/" }
  • Pure PSR-4: the class moves to src/Baz.php, and composer.json contains "psr-4": { "Foo\\Bar\\": "src/" }

Of course, removing two levels of directories provided the expected small degree of instant gratification, making everything seem simpler. But then doubt appears: maybe this is just because I've been using PSR-0 for too long (3 years already?), but now, seeing these namespaced classes/interfaces/traits in short paths like src/Baz.php for a simple project like this seems to be hiding information, degrading DX instead of improving it. Eewww!

Now, in Drupal 8 the situation is different: classes in core modules are located under

core/modules/(module)/lib/Drupal/(module)/(subnamespace hierarchy)/someclass.php

while PSR-4 will change this to

core/modules/(module)/lib/(subnamespace hierarchy)/someclass.php

so the "hiding" effect will not be the same, since the name of the module (hence the namespace) will be in plain sight. But still, this is an unexpected feeling and I hope we will not regret it later on.

Apr 25 2010
Apr 25

Every so often, I get asked whether it is really worth it to chase double quotes and constructs like print "foo $bar baz" and replace them with something like echo 'foo', $bar, 'baz', or even to remove all those big heredoc strings so convenient for large texts.

Of course, most of the time, spending hours fine-combing code in search of these will yield less of a speedup than rethinking just one SQL query, but the answer is still that yes, on an infinitesimal scale, there is something to be gained. Even with string reformatting? Yes, even that. But only if you are not using an optimizer.
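If you want to see the scale for yourself, a naive micro-benchmark along these lines shows how small the effect is (absolute numbers depend entirely on the machine and PHP version; only the ratio is of any interest):

```php
<?php

// Naive micro-benchmark: string interpolation vs. concatenation.
$bar = 'bar';
$n = 100000;

$t0 = microtime(TRUE);
for ($i = 0; $i < $n; $i++) {
  $s = "foo $bar baz";
}
$interpolated = microtime(TRUE) - $t0;

$t0 = microtime(TRUE);
for ($i = 0; $i < $n; $i++) {
  $s = 'foo ' . $bar . ' baz';
}
$concatenated = microtime(TRUE) - $t0;

printf("interpolation: %.4fs, concatenation: %.4fs\n", $interpolated, $concatenated);
```

On any recent PHP the difference is lost in the noise, which is exactly the point of the updates below.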

Just don't take my word for it: Sara Golemon explained it years ago, in her 2006 post "How long is a piece of string".

Note that her suggestion about apc.optimization is no longer relevant with APC 3.x, though, as that setting was removed in APC 3.0.13, with optimizations now always on.

2011-02 UPDATE: as explained by TerryE in the comments, while this applied to the older PHP versions one still found live when reviewing older sites for upgrades (remember, the post was written in 2006), it does not apply to current versions of PHP 5.2.x and 5.3.x.

2010-04 UPDATE: the initial version of this post incorrectly mentioned phpdocs instead of heredocs. Thx dalin for pointing out the error. phpdoc-type comments DO come at a cost but that's when using the Reflection API, which actually interprets them to add information to ReflectionParameter instances, and this has nothing to do with string processing.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
