May 04 2016
Drupal Watchdog 6.01 is the first issue published by Linux New Media. Come see the Drupal Watchdog team at DrupalCon 2016!

Drupal Watchdog was founded in 2011 by Tag1 Consulting as a resource for the Drupal community to share news and information. Now in its sixth year, Drupal Watchdog is ready to expand to meet the needs of this growing community.

Drupal Watchdog will now be published by Linux New Media, aptly described as the Pulse of Open Source.

“It’s very clear that the folks at Linux New Media know what they’re doing, and that they truly value the open source culture,” said Jeremy Andrews, CEO/Founding Partner, Tag1 Consulting. “I’m ecstatic that the magazine will not just live on, but it will thrive as a quarterly publication … this is a wonderful step forward that benefits everyone who reads and contributes to Drupal Watchdog.”

The magazine will continue to be offered in print and digital formats, and Linux New Media’s international structure provides better service to subscribers worldwide, with local offices in North America and Europe and ordering options in various local currencies.

“We don’t want to change what has brought Drupal Watchdog this far, but we do want to see it grow and expand to the next level, which mainly means – extending the reach of the magazine,” said Brian Osborn, CEO and Publisher, Linux New Media. “As our first step, Drupal Watchdog will now be published quarterly, helping us stay even more current in our coverage and in more frequent contact with our readership.”

Drupal Watchdog is written for the Drupal community and will only thrive through community participation.

Here is what you can do to help:

The first issue of Drupal Watchdog published by Linux New Media will be available May 9th! All DrupalCon attendees will receive a copy at the event. Come meet the new team, and learn more about the future of Drupal Watchdog!

Feb 02 2016

For all you schedule-challenged CEOs – and ADHD coders – this Abbreviated Official Director’s Cut is just what the doctor ordered.

Yes, Welcome to DrupalCon can now be watched in half the previous time! But if eight minutes is still too daunting, we suggest you absorb it in a series of one-minute bursts, maybe during the rest-intervals in your 30-20-10 training, or on down time while obsessively clicking your pen waiting for the Adderall to kick in.


Dec 17 2015

Did we have fun in Barcelona?
OMG, yes!

Did we eat all the tapas on the menu and wash them down with pitchers of sangria?
Yes indeed!

Did we meet a host of Drupalists, make new friends, network like crazy, and learn all kinds of Drupal stuff?
Yes-yes-yes and yes!

So now, relive the moment (or, if you missed it, see what it was like): the people, the places, the food, the whole DrupalCon Experience.

And, as a bonus: Are you thinking of attending DrupalCon Dublin (Sept. 26-30, 2016)? Need some convincing? Watch this video – and show it to your boss!

Thanks (in order of appearance):
Jeremy Andrews, Holly Ross, Rachel Friesen, Mikkel Mandel, Saket Kumar, Rakhi Mandania, Marja Doedens, Christian Wanscher, Verónica Vedia, Saran Raj, Shankey Thukral, Mudassar Ali, Kristof Vercruyssen, Cyrielle Charriere, Emmanuel Quedville, Fabian Franz, Jeff Sheltren, Dylan Clear, June Gregg, Mark Carver, Rudy Grigar, Amanda Gonser, Cathy Theys, Nancy Beers, David Archer, Sweta Shahi, Tony Williams, Steve Richards, Kate Marshalkina, Klaas Chielens, Kristof Van Tomme, Jared Smith, Antonella Severo, Hilmar Kári Hallbjörnsson, MortenDK, Anthony Lindsay, Stella Power.

Dec 09 2015

Updating a Drupal website is of paramount importance for security. While the update process may be a simple one – and a backup taken before the update can quickly return you to a previous known-good state – having tests in place and ready to roll can help you find the not-so-obvious issues.

How Test Automations Can Save the Day

Do you have a Drupal-based web platform that uses more than 30 core and contributed modules?

Is your web platform more than a year old?

Are you a developer who prefers to keep the code base up-to-date with the latest Drupal core?

If your answer to any of these questions is “yes,” then there will always be a need to stay on top of the regular Drupal core updates, contributed module updates, and the security patches.

NOTE: In this article, updates refer to installations of new, minor versions of core and contributed modules.

With every introduction of new code, there is a risk of affecting the existing functionality of your web application, not to mention the additional overhead of doing a complete regression test of the web application functionality. Although this is more important for applications in continuous development mode, even stable and mature applications which have an active user base will require you to think, re-think, and eventually plan the update.

The Drupal Security Advisories page on Drupal.org announces updates to Drupal core, contributed modules, and the latest security patches. (To check the updates available for currently installed modules, visit the site’s /admin/reports/updates page for a report.) The biggest trigger for the update process is an updated version of Drupal core being released, followed by “Critical” security patches to one or more of the modules used by the current system. If any of these happens, the planning process should start. The following steps can help plan, prepare for, and manage the update process:

What to update: Review the Security Advisories page and the /admin/reports/updates to create a list of which specific items need to be updated.
How to prioritize: As mentioned above, the highest priority would be core, followed by critical security patches and regular updates. If the site hasn’t been through an update for a while and has over 100 modules installed, the list of updates would be long. In our experience with a somewhat complex Drupal platform (over 200 modules in place), in spite of doing updates every month, we still routinely have a backlog of 4-5 updates waiting in the queue.

If the list is long, the updates will have to be spread across phases, with each phase including a mix of module updates and security patches. When all the planned updates are finally completed, sure enough, there will be new items that need to be updated. Regular updates will lead to a better system and a better experience for users of the system.
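Besides the Security Advisories page and /admin/reports/updates, the pending-update list can also be pulled from the command line. A quick sketch, assuming Drush (of the Drush 7-era pm-* command family) is installed on the server:

```shell
# List pending core and contributed-module updates, including
# which releases are security updates (alias: drush ups).
drush pm-updatestatus

# Apply updates one module at a time so each phase stays small and
# reviewable; "views" is just an example module name.
drush pm-update views
```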

Getting started: Begin the update process by setting up a separate test environment. Deploying an independent code branch (with the latest code) on this test environment will allow you to carry out all post-update tests without disturbing the continuous delivery mode of the development and quality assurance teams. On this environment, the planned updates can be done in phases, and a quick check of impacted code and functional areas can be tested by those handling the updates. But what about the less-obvious impact areas that one may not be aware of?
Here come the post-update tests! If no automated regression suite has been built for your system yet, have no fear. There are steps you can take to quickly create one. It is a one-time effort, and the resulting suite can be re-used after any kind of update with little or no modification. Here are the ingredients:

  1. Install Java.
  2. Download the Apache JMeter binaries.
  3. Create a CSV file with all accessible URLs in the application. Use only relative paths in the CSV file, for example, /user/login, /home, /logout – instead of example.com/user/login. If possible, create separate CSV files with URLs for anonymous users and authenticated users.
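For example, the two files for the anonymous role might be built like this (the URLs are illustrative; the file names are the ones the template script in this article expects). A sketch in shell:

```shell
# Sample URL list for pages an anonymous user CAN reach.
cat > Anonymous_ValidURL.csv <<'EOF'
/
/user/login
/contact
EOF

# Sample URL list for pages an anonymous user must NOT reach.
cat > Anonymous_InvalidURL.csv <<'EOF'
/admin/reports/updates
/node/add
EOF

# The per-file URL count is what goes into each Loop Controller's
# "Loop count" field in the JMeter script.
wc -l < Anonymous_ValidURL.csv    # prints 3
wc -l < Anonymous_InvalidURL.csv  # prints 2
```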

Write the tests: Here’s a surprise: you don’t have to actually write the tests. You can simply download the template script. Some modifications to this script will be required. Open the script in JMeter and expand the keys to see the complete script. Here are the things to change:

  1. Under “User defined variables” set the value of the host variable.
  2. Change the protocol to http or https, depending on what is being used by the current application.
  3. Disable the thread group for “User Type X”: right-click it and choose Disable, or press Ctrl-T.
  4. Disable the thread group for “Site Admin” to start with.
  5. Rename the CSV file with URLs for anonymous users as Anonymous_ValidURL.csv.
  6. Also rename the logged-in users CSV file as Anonymous_InvalidURL.csv. Make sure that all URLs in this file are only accessible to logged-in users. This CSV will act as an access control check to make sure anonymous users are not able to access any unauthorized content.
  7. Count the URLs in both files. Go to the “Thread group for anonymous users” and click on “1st Loop controller”. In the “Loop count” field, enter the number of URLs in the file. Repeat for the second “Loop controller” and enter the count of invalid URLs for Anonymous users.
  8. Place the two CSV files in the same directory as the downloaded .jmx file. You can place them elsewhere too, in which case add the full path followed by either the forward slash ‘/’ or backslash ‘\’ as used by the operating system in the “User Defined Variables”.
  9. For a valid URL, JMeter checks the assertions that the response code returned is not one of 404 or 403. It also checks that text like “error” or “access denied” is not present on the page body. Similarly, for invalid URLs, it looks for text that anonymous users get when trying to access a restricted URL. Change the text assertion to match what the web application provides, by expanding the loop controller and the nested requests. You can add as many assertions as needed. They will apply to each URL in the CSV file.
  10. And now you are ready to roll! Run the script by clicking on the Run button. (Duh.)
  11. Click on the “View Results tree” section to view the results. You should see all greens.
  12. Extend the test to all roles in the system by creating two CSV files for each role. The above JMeter script has two identical thread groups to illustrate this as “Site Admin” and “User role X”.
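Once the script passes in the GUI, the same test plan can be run headlessly, which is also what a CI job will do later. A sketch, assuming jmeter is on the PATH and the template was saved as regression.jmx (a hypothetical file name):

```shell
# -n: non-GUI mode, -t: test plan to run, -l: file to log results to.
# Failed assertions show up as errors in the .jtl results file.
jmeter -n -t regression.jmx -l results.jtl
```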

Debug the test: If you do not see all greens yet, it is possible that the assertions are failing. Look at all the assertions, and add or remove the relevant assertions for your web application. Once everything is running fine, you can extend the test to cover site admin URLs or other roles, as explained in the steps above. You will also need to add the credentials for the role in the “Enter Credentials & Login” step in the script. It is recommended that you add URLs for only one role at a time. Test, and then add another role. The greatest benefits can be achieved only by covering all possible pages in the application.

Finally, a look at the reports: After all the desired roles and URLs have been added to the JMeter test, you can view and analyze the Summary report for collective average response times on various valid URLs. This test script can also be used to generate a uniform load on the web application, and you can monitor server resources to identify performance bottlenecks as well.

Run this test before and after every update: So, once the regression suite is complete and correct, running the test suite will do a complete regression of all the GET requests on the site for all roles, and negative tests as well.

NOTE: The GET requests can cover navigation to listing pages, landing pages, and add and edit content pages. However, dynamic components that are loaded using Ajax or jQuery or form submissions are not covered by these tests.

Now, a quick check of the critical form submissions (i.e., POST requests) is all that remains to be done. Regression issues due to updates will be quickly identified. The automated tests will at least point out if there are bigger problems caused by regression. The effectiveness of these tests depends on the amount of coverage they provide. You can increase coverage simply by adding more URLs to your CSV files.

Future Steps With Continuous Integration

The next step is to remove manual intervention altogether; continuous integration is the answer. Set up Jenkins and create a free-style project that can be scheduled to run at regular intervals, or on demand. To run the same JMeter script through Jenkins, you will need to create a Mavenized project structure and, in the pom.xml, declare the JMeter-Maven plugin as a dependency.
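A minimal sketch of that pom.xml declaration, assuming the widely used com.lazerycode.jmeter:jmeter-maven-plugin (the version number is illustrative; .jmx files are expected under src/test/jmeter by default):

```xml
<build>
  <plugins>
    <!-- Runs the JMeter test plans during `mvn verify` -->
    <plugin>
      <groupId>com.lazerycode.jmeter</groupId>
      <artifactId>jmeter-maven-plugin</artifactId>
      <version>1.10.1</version>
      <executions>
        <execution>
          <id>jmeter-tests</id>
          <phase>verify</phase>
          <goals>
            <goal>jmeter</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```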

To include verification of dynamic components on the application and different kinds of form submissions, additional “HTTP Requests” need to be separately created under a Thread group. This will involve a little more time to correlate the dynamic form parameters in Drupal like form_build_id and form_token for every request. A tool like Fiddler can be used to learn the parameters in the request or the Blazemeter plugin can help to record the requests.

Case Study

The web application or web platform we have been building and enhancing for over two years now – with 244 core and contributed modules – has these JMeter tests in place, which run every day through Jenkins. The reports are sent to everyone on the team, and updates are also posted on Slack when the tests pass or fail.

Challenge: Until four months ago, only Drupal core was being updated, and we had a backlog of over 70 contributed module updates. It seemed like an uphill task to update them all in one go.

Solution: The modules to be updated were classified into Low-, Medium-, and High-risk categories. The updates were then divided into multiple sprints, with each sprint having progressively more low-risk updates. With this approach, we were able to clear the initial backlog of updates and regression tests in three months. By the time we struck off old items from the backlog, there was already a new bunch of updates, but the list is much smaller now, and we consistently address all core and module updates in every sprint.

The automated regression tests are also used to check the “deltas” in performance when key areas of the application change, by comparing before and after reports. They also act as a smoke test when a developer wants to quickly verify that a change has had no unintended impact. Recently we used the tests after updating the PHP version from 5.3 to 5.6 to observe the response-time trend across requests, before and after the change.


With a dedicated and passionate Drupal Community, Drupal core and its contributed modules go through constant updates. This makes it critical for webmasters and administrators to ensure timely application of these updates to prevent security vulnerabilities, enhance stability, and enable new features. Updates need to be structured without impacting current functionality, continuous development, delivery, and maintenance cycles.

To keep sites up-to-date, a few simple regression tests can cover your entire web application with minimum investment of time and effort; modules or other environment-related updates become a lot easier and faster.

Image: "Infinity" by Juan Salmoral is licensed under CC BY-NC-ND 2.0

Dec 02 2015

We now know that texting and driving do not go well together, but why?

It's not simply because those two tasks don't go together the same as, say, a black suit and brown shoes. No, it's because both activities require cognitive thought, and most of us are incapable of having two cognitive thoughts simultaneously, because one tends to interrupt the flow of the other, and interrupting the flow of anything that requires flow…water, electricity, rockslides... is a bad thing.

It's not just texting and driving. Talking on the phone and driving also escapes us, particularly when the conversation requires that we be alert and ready to analyze and process what is being said (like a meeting). Yes, people do seem able to perform both simultaneously, but in reality, the brain is doing its best to handle the two flows by interrupting and switching back and forth between them.

In computing, that is a known issue. On a single thread, if a process of higher priority doesn't yield any cycles until it has completed its work, the other processes wait. When we talk on the phone while driving, we drive by rote. Should a situation arise that we cannot handle by rote, such as another vehicle swerving into our lane, the conversation ceases. (The problem, of course, is that one person's rote is another person's fender.)

It’s also a known problem in the new age of “open floor plans.”

Really? For coding? Developers require focus, and achieving it in an open floor plan typically means headphones. I don’t know what will happen when someone discovers that white lab rats get brain cancer from white noise.

The computing issue was greatly improved by the introduction of multi-threaded operations. Humans, too, have multi-threads (breathing, heartbeat, blinking, e.g.) but only one cognitive thread. So, any activity that requires analytical thought is likely to interrupt the flow of another.

Right now you're thinking of exceptions. Here are a few:

  • Walking and chewing gum: Most people can accomplish this risk-free, although walking down a flight of stairs while contemplating the Views module can be deadly.
  • Playing piano or guitar (both hands) or drums (all limbs): After a great deal of practice, hands (and/or feet) work together in one harmonious flow, like touch-typing. When starting out, though, the activity seems impossible.
  • Playing an instrument while singing: This is interesting. When we sing, recalling lyrics and melody actually uses a different part of the brain, one that seems not to disrupt our flow. (It can be problematic if you play a wind instrument.) So feel free to sing while you drive, though singing in a language in which you are not fluent could have less-than-desired results.
  • Coding and singing: Since singing does not require cognitive thought, the flow we crave in order to stay “in the zone” is fairly impervious to song. (Avoid coding while listening to a podcast.)

An example of how paralyzing it can be when two activities compete for the same flow is attempting to engage a presentation audience while debugging the failed technology required for the presentation.

You might remember Bill Gates introducing Plug and Play, plugging in a scanner and receiving a blue screen of death. He knew enough to choose engaging over debugging… just not enough to buy a Mac.

I was recently driving in Mexico (yes, yes, my primary mistake), and two police officers in a cruiser decided I needed a lecture on the issue of talking on a phone while driving. Informing them that I wasn't, in fact, talking, just holding my phone to hear the GPS, they switched, without missing a beat, to a lecture on one-handed driving.

Luckily, the “good” cop offered to accept half the usual fine on the spot, instead of having me surrender my driver's license until I paid the fine at the station. Heck, they were even nice enough to lead me to an ATM to get the cash. I discovered that uttering profanity to an ATM while pressing buttons, in Spanish, can actually be done simultaneously, though the need to curse is a good indicator that one isn’t in a flowing frame of mind in the first place. (See note, above, about open floor plans.)

Right about now you're expecting me to deftly tie all this to something Drupalish and meaningful. Uh, no. You see, another example of one cognitive thought that breaks the flow of another is tying together plot points and writing.

Acknowledgment: Thanks to Kelly Bell of Gotham City Drupal for being my it’s-all-about-the-flow-just-the-flow-not-cognition shleger[1].

[1] bully

Image: ©Simone Becchetti via Stocksy

Dec 01 2015

The Workbench module is a flexible tool for building an editorial workflow in Drupal. It offers a standard workflow and customization options for tailoring the process to your needs. Used by more than 25,000 sites to date, it is a tried and true publishing workflow solution that you can extend with moderation, access control, and media management.

Why Use Workbench?

Most organizations producing online content have an editorial process in place: content creators collect information and write content, editors review, edit, ask for changes, and publish the content once it’s considered final. Depending on the size and processes of the organization, these roles can be filled by one or more people, even groups, who might not be familiar with how Drupal works. With Workbench you can build an editorial process in Drupal that can reflect these diverse roles and processes.

Workbench offers different advantages for each group of stakeholders:

  • Users who only have to work with content benefit from a simplified interface. Training content creators is easy because they are already familiar with the concepts of the editorial workflow and only have to learn how it is modeled on the interface. They don’t have to learn much about the CMS itself. The short learning curve and familiar activities help them add and update content more frequently.
  • Organizations can fully replicate and digitize their editorial workflow: access control is based on organizational structure, not website architecture. The process itself is customizable based on the roles and practices inside the organization, and the workflow can be applied to media files as well. Because the system is straightforward to use, organizations can also save on support costs.
  • Developers can implement various workflows thanks to the module’s modular and extensible structure.

When it comes to online content, you also want to consider the post-publication phases of the editorial workflow: content teams might need to update already-published content. Workbench offers a solution for this scenario as well.

We used Workbench and another module from the Workbench Suite, Workbench Moderation, out of the box to implement the editorial workflow of Saïd Business School’s public website, winner of Acquia’s Partner Site of the Year Award in the Higher Education category. The school has many different types of content editors on board, who access only their own content, so we built the permission system for content editors using Organic Groups. Each editor is responsible for different statuses of the same content item, and can be assigned to several groups.


Although Workbench configuration is not covered in this article in detail, it’s worth getting a general overview of the setup process. What you’ll need to do:

  • Install and enable module dependencies.
  • Set up user roles (content creator, editor, publisher, etc.).
  • Configure content types that go through the editorial process.
  • Configure permissions for the roles.
  • Set up states (e.g. draft, needs review, published) and the transitions between them.
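Assuming Drush is available, the module-installation part of that checklist can be scripted. The project machine names below are those of the Workbench suite modules on Drupal.org:

```shell
# Download and enable the Workbench suite (Drupal 7 project names);
# add workbench_media here as well if you moderate media files.
drush dl workbench workbench_moderation workbench_access -y
drush en workbench workbench_moderation workbench_access -y
```

Roles, permissions, states, and transitions are then configured through the admin interface as described above.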

Workflow Process

In Drupal 7 core, a content item can only be flagged as Published or Unpublished to track its workflow state. With Workbench, you can extend this with additional workflow states and control which roles can see content in each state and transition it to the next.

You’ll get a good idea about how you can use the Workbench module by walking through a publishing workflow example:

  1. The process starts with content creation. Workbench can be configured to work with any content type, from blog posts, pages, events, and news items to customized content. The content creator role has permission to create specific content types, so you might have a distinct group of news editors and site editors. They don’t necessarily need any permissions other than creating and editing their own content.
  2. The content creator sets the moderation state to one of the following:
    • Draft: Content is not ready to be submitted for review, the content creator might want to work on it more or would like to submit it later.
    • Needs Review: The content is ready to be reviewed.
  3. Items flagged as “Needs Review” show up in the review queue of the publisher. The publisher can edit the draft, add revision information (e.g. comments for the content creator about required changes), and set the state to “Draft”.
  4. The content creator reviews notes from the publisher, edits the content, adds notes for the publisher, and resubmits it for review.
  5. The publisher reviews the content and if it is ready for publishing, sets the state to Published.

Once the workflow process is set up, in-progress updates to published content are tracked the same way. So all edits to published content can be handled in the background and the publisher can decide when to replace the live content with the new revision.

The Workbench Suite

Workbench has been designed with a modular architecture, so you can select which components to install and enable for each use case. You’ll need the following modules to use all Workbench features, or you can install Workbench and add only the ones you need:

  • Workbench Access controls which role has access to which content.
  • Workbench Moderation provides a standard workflow with some default states (Draft, Needs Review, and Published). You can customize these states or add new ones based on your project’s requirements.
  • Workbench Media adds an integration with the Media module so that you can use Workbench moderation for media content as well.

I hope this article helped you get a general overview of the Workbench module, so that you can explore this flexible solution further if your next project requires you to build an editorial workflow in Drupal.

Image: "Workbench" by Paul Englefield is licensed under CC BY 2.0

Other articles from this issue:

  • Melissa Anderson – Drupal 8 Usability Study (coming soon)
  • Larry Garfield – Collaborate Your Way to Success (coming soon)
  • J. Ayen Green – Cancun's Flow Police (coming soon)

Nov 24 2015

Drupal has a pretty secure structure: a small, simple, and stable core that can be extended with tons of modules and themes. From Drupal 7’s initial release on January 5, 2011 until now, there have been only 17 core security updates, which is quite a small number for a period of more than four years.

But when it comes to third-party modules and themes, the picture is quite different. Although only modules with official releases are reviewed by the security team, or have security announcements issued, the majority of the 11,000+ third-party modules and themes for Drupal 7 get weekly reports for security issues.

And using custom modules is even more dangerous if they are not tested properly. Let’s face it: no one uses Drupal without modules. That’s why I will share with you some of the best open source tools to improve the security of your website.

Knowing your opponent’s moves helps you better prepare your defenses. That’s why we will try to attack with every known-at-the-moment method of testing vulnerability. All the tools I will show are easy to use without any knowledge of the source code. And the best part is, you can use this strategy indefinitely, if you keep these tools up-to-date. Remember: update first, then test.

Being Up-to-Date

I can’t emphasize enough how important it is to keep all your stuff up-to-date, so let’s start with that idea: If one tiny part of your website has a security breach, the whole system is corrupted. That’s why you should check for updates for the core and the modules you are using. There are reports you can find on Drupal’s official page; if you find that there is a security update available, immediately apply it.

Metasploit + Armitage = Hail Mary!

Start with Kali Linux: it's small, and has Metasploit and Armitage pre-installed. Armitage gives you a GUI, exploit recommendations, and use of the advanced features of Metasploit Framework's Meterpreter. (But remember to get updates every time you're about to run tests.)

Then, get an exact clone of the server; same machine, database, structure, OS version, etc.

NOTE: It is not recommended you use this technique on live websites because there is a chance the server will go down.

Now you’re ready to put on the white hat and get the party started.

  1. Do a scan. Nmap Scan is integrated into Armitage. However, I recommend using it outside Armitage since you can configure the scan parameters better. There are a lot of different options to choose from. I use the GUI version Zenmap – which also comes preinstalled on Kali Linux – and the following command:
    nmap -sS -p 1-65535 -T4 -A -v <target>
    • sS: Stealth SYN scan
    • p 1-65535: All ports
    • T4: Prohibits the dynamic scan delay from exceeding 10 ms for TCP ports
    • A: Enable OS detection, version detection, script scanning, and traceroute
    • v: Increase verbosity level
  2. After you scan, save the file (scan.xml).
  3. Add host: From the navigation menu “Hosts” -> “Import Hosts” and choose scan.xml.
  4. From the navigation menu, choose “Attacks” -> “Find Attacks”.
  5. From the navigation menu, choose “Attacks” -> “Hail Mary”.

Hail Mary finds exploits relevant to your targets, filters the exploits using known information, and then sorts them into an optimal order.

Important: When you use msfupdate, the database doesn’t get all the possible exploits. When you find some exploit that you want to try on your site, you have to manually add it and execute it. Here’s how:

  1. Download the exploit from exploit-db or write a script on your own.
  2. Put it in the ~/.msf4/modules/exploits/<your_folder> directory. Any exploit put here will be detected by Metasploit when it starts.
  3. Execute it with: use exploit/your_folder/exploit_name.


Wapiti

Wapiti is a powerful scanner. It supports a variety of attacks and, in the end, provides nice reports in different formats. You can read more about it on the official site. When you open the console and type wapiti, its help will load. I use:

wapiti http://example.com -n 10 -b folder -u -v 1 -f html -o /tmp/scan_report

  • n: Define a limit of URLs to read with the same pattern, to prevent endless loops, here, limit must be greater than 0.
  • b: Set the scope of the scan; analyze all the links to the pages which are in the same domain as the URL passed.
  • u: Use color to highlight vulnerable parameters in output.
  • v: Define verbosity level; print each URL.
  • f: Define report type; choose HTML format.
  • o: Define report destination; in our case, it must be a directory because we chose HTML format.

NOTE: You may encounter “You have an outdated version of python-requests. Please upgrade.” The fix is pip install --upgrade requests.


CMSmap

CMSmap is another free, open source vulnerability scanner; it supports WordPress, Joomla, and Drupal. It also supports brute force, but Drupal holds up well there, since it blocks a user after five failed password attempts.

CMSmap is not preinstalled in Kali, so you’ll have to download it: git clone https://github.com/Dionach/CMSmap.git
To run the tool, type:

cd CMSmap/

I use the following configuration command:
./cmsmap.py -t http://example.com/ -f D -F -o CMSmap_example_results.txt

  • t: Target URL.
  • f D: Force scan for Drupal.
  • F: Full scan using large plugin lists.
  • o: Save output in file.

That’s all, folks.

But remember: “The quieter you become, the more you are able to hear.”

Image: "Security" by Henri Bergius is licensed under CC BY-SA 2.0

Nov 20 2015

Large website projects involving multiple people in different roles face special challenges. The work needs to be coordinated and scheduled in such a way as to allow for parallel development of different parts of the project on different systems. As a consequence, sooner or later the different parts of the whole must be brought back together and integrated into the project’s main development trunk. Often, this process happens seamlessly; at other times, overlapping changes must be integrated manually.

In a Drupal 8 project, the Configuration Management module allows the website configuration files to be stored in a Git repository alongside the project code. As an added bonus, Git also provides features that facilitate distributed development, allowing work to be done on branches and merged together as required. When there are conflicts, external tools are available to visually present the conflicts to the user, making it easier to determine what happened in each instance.

Recently, the Drush project introduced a new command, config-merge, that streamlines the tasks needed to manage the configuration workflow steps to fetch and merge changes from a remote Drupal site.


Image A shows a typical workflow for using Git to manage configuration changes between a local development server and a remote staging server. Releases are deployed to the staging server by first committing them to the master branch in Git, and then making a tag – <T1> in the diagram above – to keep track of what was deployed. Both code and configuration files are stored in Git. The deployment process on the staging server involves first checking out the tag being deployed and then running updatedb and config-import. Later, if configuration changes [a] and [b] are made on the staging server at the same time the developer is making configuration changes [u] and [v], these changes will need to be merged. This could, of course, be done manually, by running config-export on both the staging and development server, committing them each on their own branch, and then using git fetch followed by git merge to combine them together into commit (M) on the development machine. Doing this all takes quite a few steps, though, particularly if using the configuration management admin interface in a web browser to export and download the configuration files. All of this can happen much more quickly with the Drush config-merge command.
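
The manual sequence just described can be sketched with plain Git commands. The following is a toy, self-contained sketch: the repository, branch names (stage, dev), and configuration file are hypothetical stand-ins for a real exported configuration directory, and the drush config-export steps are only indicated in comments.

```shell
# Toy repository standing in for a site's exported configuration directory.
# All names (config-demo, stage, dev, block.search.yml) are illustrative.
git init -q config-demo && cd config-demo
git config user.email demo@example.com
git config user.name Demo

cat > block.search.yml <<'EOF'
id: search
plugin: search_form_block
region: sidebar_first
provider: search
settings: {}
status: true
weight: 0
EOF
git add block.search.yml
git commit -qm 'T1: deployed baseline'

git branch stage       # staging's exported configuration is committed here
git checkout -qb dev   # development changes happen here

# Development change [u] (after a hypothetical drush config-export):
sed -i.bak 's/^region: sidebar_first/region: header/' block.search.yml
git commit -qam 'dev: move search block to the header'

# Staging change [a], exported and committed on its own branch:
git checkout -q stage
sed -i.bak 's/^weight: 0/weight: 5/' block.search.yml
git commit -qam 'stage: adjust search block weight'

# git fetch + git merge combine both sides into merge commit (M);
# the changes touch different lines, so the merge completes cleanly.
git checkout -q dev
git merge -q --no-edit stage
cat block.search.yml
```

After the merge, the file carries both changes (region: header and weight: 5); on a real site, config-import would then load the merged configuration, which is the sequence config-merge automates.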

Drush's config-merge tool is a lot like drush sql-sync, which has saved Drupal developers countless hours by combining the sql-dump, rsync, and sql-import operations to move a database from one Drupal site to another. config-merge operates on the same principles, except that it uses config-export, git fetch and git merge, and config-import instead. To merge configuration from a Drupal site @dev with the configuration from a remote site @stage using Git, run:

drush @dev config-merge @stage --git

If there are no conflicts, then you will find the merged configuration imported on your development site; you are then ready to test it and deploy the combined changes back to the staging server. If there are conflicts, though, then config-merge will launch a three-way merge tool and allow you to manually resolve the differences.

A three-way merge is so named because it works with three distinct revisions of each conflict:

  • The revision that came from our repository, called "local" or "ours";
  • The revision that came from the other repository, called "remote" or "theirs";
  • The original state of the text, prior to either change being applied, called “base”.

All three of these revisions are shown together, side-by-side-by-side, and controls are provided to allow the user to select which changes to keep and which to discard. Usually, the user is also supplied with an editable fourth pane, where the final state of the file is shown, post-merge. In some cases, the final desired state requires that the "ours" and "theirs" revisions be combined or reordered in some way. In this event, the user may edit the text manually to fill in any gaps that the tool could not derive automatically.

Image B shows a screenshot of kdiff3, one popular three-way merge tool available for Linux and Mac OS.


It offers an example of where two different users have changed the same property – in this case, the region that the search block is displayed in. In the first of the three columns at the top, labeled the “Base” revision, we can see that the search block was originally in the sidebar_first region. On the local system, shown in the middle column, it was moved to the header region; at the same time, someone else moved the search block to the sidebar_second region. A block cannot be placed in more than one region at once, so we are going to have to choose one value to keep. Kdiff3 provides three buttons, labeled “A”, “B”, and “C”, that can be used to select the text from the base, local, or remote revision, respectively, and copy it into the output pane on the bottom of the window.
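
For a command-line view of the same situation, Git's merge-file plumbing command performs a three-way merge of three files; since both sides changed the same line here, it emits conflict markers (the file names below are arbitrary):

```shell
# The three revisions of the block placement from the kdiff3 example.
printf 'region: sidebar_first\n'  > base.yml    # base: the original state
printf 'region: header\n'         > local.yml   # ours: moved to the header
printf 'region: sidebar_second\n' > remote.yml  # theirs: moved to sidebar_second

# Three-way merge; both sides edited the same line, so conflict markers
# are printed and the exit status is the number of conflicts (1).
git merge-file -p local.yml base.yml remote.yml || true
```

To have Git hand such conflicts to kdiff3 automatically during a merge, you can set git config --global merge.tool kdiff3 and then run git mergetool.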

This is just the beginning of what you can do with config-merge and kdiff3. For more details, see the Pantheon Systems GitHub repository, Drush Configuration Workflow.

This repository contains documentation that describes the workflow discussed here in greater depth, and also contains another workflow based on rsync, for use in instances where configuration changes are being made on a system that does not have access to commit to the central repository. There is even a handy quickstart script to help you get up and running quickly, should you not happen to have multiple copies of a Drupal site to work with.

With a little investment in time, you’ll be using config-merge like a champ. In the long run, the time you save will astound you.

Image: "spot the differences" by Giulia van Pelt is licensed under CC BY-NC-ND 2.0

Nov 09 2015


Any improvements made anywhere besides the bottleneck are an illusion. — Gene Kim

How To Identify And Resolve Your Pinch Points

Rush hour is a cruel irony: drivers ready, willing, and able to get to their destinations as quickly as possible instead find themselves creeping along (or at a dead stop), stuck in traffic jams caused by roads over capacity and by the uncoordinated, unoptimized travel plans of each individual driver.

Most of those drivers also know that by consulting Google Maps and Waze, they can discover the cause of the jam-up, their distance to it, and estimated time of delay; they can then make the decision to stick it out or take the next exit and proceed through the streets.

How motor vehicles flow through a network of highways and byways is a good analogy for how work flows through an organization. When the volume of your company’s work ramps up, there comes a moment when a single stage becomes the rate-limiting constraint for the entire system. Your first step is to locate that damn bottleneck.

And Waze won’t help.

Making Work and Workflow Visible

While the system-wide constraint may be obvious in some organizations, it's not always apparent until the flow of work increases to the degree that the bottleneck is overloaded and potentially damaged, making things worse (e.g. a car accident causing traffic to back up even more). The following three exercises will help identify the problem area.

Exercise #1: Inventory the Four Types of Work

As organizations grow, so does the quantity and variety of projects that are being worked on simultaneously across different teams and divisions. Unfortunately, the net result is that it becomes more difficult to quantify and prioritize what is being done. Therefore, the first order of business is simply to locate this information. You can perform this exercise digitally, using a spreadsheet, but it's much more powerful if done against a wall with index cards. Either way you'll gain insight into your organization.

To start, create four columns on the wall and label them: Business Projects, Internal Projects, Changes, and Unplanned Work. Then, fill out a single index card for each active project or recurring activity that falls under each heading. To make sure you're not missing anything, invite other members of the team to review.

Here are some descriptions of the four groups.

  • Business Projects: Any product or service delivered to a client. (As a rule of thumb, if it's revenue-generating, it's a business project.)
  • Internal Projects: Any work done to improve or maintain the state of the organization. Examples include infrastructure improvements, training, hiring, attending trade shows, strategy meetings, etc.
  • Changes: Anything that can disrupt the ability to deliver products and services to a customer, or anything that can disrupt internal projects. Examples include server configurations, software updates, etc.
  • Unplanned Work: Typically referred to as “fire fighting” because it's often chaotic and resource intensive. It's only productive in the sense that it restores operations.

For most organizations, the results of this exercise can be shocking. You may find that your number of internal projects exceeds your business projects. You may find that as much as 30-35% of everyone’s time is spent on unplanned work. This can be a bitter pill to swallow, particularly if project managers and business developers forecast timelines under the assumption that 80-90% of an individual's time can and will be spent on business projects.
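
A back-of-the-envelope calculation shows why that gap matters; all of the percentages below are hypothetical stand-ins for whatever your own inventory reveals:

```shell
# Hypothetical week: forecasts assume 85% of a 40-hour week is billable,
# but 30% goes to unplanned work and 15% to internal projects.
hours=40
forecast=$(( hours * 85 / 100 ))            # hours the timeline assumed
actual=$(( hours * (100 - 30 - 15) / 100 )) # hours actually left for clients
echo "Forecast billable hours/week: $forecast"
echo "Actual billable hours/week:   $actual"
```

With these numbers, the forecast expects 34 billable hours but only 22 materialize; roughly a third of the planned client capacity simply is not there.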

While this type of information is incredibly valuable, it's generally not enough to identify the key bottleneck, although it should reveal how much effort is going toward client versus non-client work.

Exercise #2: Identify the Work States

The different roles that exist within an organization tend to work at different stages in the pipeline of building Drupal sites, so it's valuable to learn whether or not the current distribution of projects and their various states matches the current makeup of your organization.

To do this, we need to introduce the concept of an agile or kanban board, in which we create a column for each major transition point of a project. While the specific breakdown will be unique to your organization, we can generalize this for many Drupal projects with the following buckets:

  • Sales: The process of converting leads to contracts.
  • Discovery: The process of further refining the requirements, design, and overall blueprint.
  • Development: The actual prototype or build phase.
  • User Acceptance Testing: Ensuring requirements are met before delivery.
  • Deploy: Ensuring a smooth launch.

To complete this exercise, take the cards that you used in the previous step and color-code them as a function of type. Then place them in the appropriate column. You may need to add additional columns for projects in a backlog or “paused” state, but, as best you can, populate the board with anything actively worked on in the past week.

Ideally the distribution matches your team's capacity. However, you may start to see disturbing patterns emerging. If the development stage is already at full capacity and a significant number of projects are in discovery, it might prove wise to delay any further lead conversion until enough projects move out of development, in order to reduce the number in progress in discovery.

Personally, I find it critical to perform this exercise every week, so that all teams across all stages can begin to anticipate when work may ramp up or slow down over the next two to four weeks. Armed with this knowledge, the organization can be proactive in moving some projects through the pipeline faster or pause other projects in order to even out the pipeline. (See Image A.)


Exercise #3: Identify How Work Flows

This process is the most detailed of the three, but it's valuable in unearthing processes that are overly complex or overly reliant on an individual. Here, we start with the end-to-end flow of how work moves forward and backward from stage to stage. We take it one step further by identifying the processes within each stage to uncover the true path of work as it flows through the organization.

To demonstrate this, I'm going to provide a hypothetical flow diagram at both the global level and within a particular stage (development). We'll then review how this information can be used to identify a bottleneck.


In Image B, we see the overall flow of work between the different stages of the website delivery pipeline. Note that we have to be realistic with ourselves: work can flow backwards, particularly in cases where a shortcut or an error introduced upstream wasn't captured. We have also explicitly noted the reliance on tools, systems, and other IT related needs to perform work at each stage.

In Image C, we zoom in on a hypothetical development workflow. To help surface our reliance on particular roles or individuals, we add that contextual data to each step. Similar to Image B, we're also explicit about scenarios where we go back upstream to business development or requirements analysis.

Although it can take a bit of time to think through each state with this much detail, the benefits are immense.

Suppose you discover that nearly every project has 10-20 hours that needs to go back to sales because of a misunderstanding of what was in or out of scope on a fixed-bid project. From the perspective of flow throughout the entire system, that is unplanned work that ultimately may require ripping up and replacing work in the discovery and development phases. While there is no way to prevent every possible type of miscommunication, knowing the heavy costs of these movements back upstream should serve as a motivating factor to spend more time upstream to ensure requirements are properly conveyed such that the overall flow is optimized.
Suppose you discover that a single individual is required at nearly every step of the process. What happens if that team member has to take a sick day or gets pulled into another project? This can quickly result in a bottleneck because any number of steps in the development process could be stalled simultaneously.

By the time you've completed all three of these exercises, the biggest bottleneck in the organization should become apparent. And given that this bottleneck is the single biggest block in increasing the flow throughout the organization (e.g. roadwork on a highway during rush hour slows all cars), addressing it should become a top priority.

Here’s how.

Breaking the Bottleneck

There are three basic themes we need to follow to maximize the flow through the system-wide bottleneck:

  1. Protect: Flow through the bottleneck should only be interrupted when absolutely necessary. Remember, “An hour lost at the bottleneck is an hour lost for the entire system.” (Eliyahu Moshe Goldratt)
  2. Elevate: Here, we optimize the bottleneck (hire more people, introduce efficiencies, reduce number of steps, etc).
  3. Subordinate: Work should only be released to the bottleneck as capacity is available. Any release of work prior to that can result in the bottleneck becoming overwhelmed and cause thrashing.

For more details on these terms, I highly recommend reading Goldratt’s The Theory of Constraints. For now, we can use these simplified definitions as a basis for each of the strategies listed below.

Expanding Capacity

This is the most obvious solution. If a particular segment of your company faces a consistent mismatch between personnel and workloads (see exercise #2), then hiring more people will theoretically allow more work to flow through. However, this is often the most costly of all solutions, particularly if your bottleneck is at the development stage, given that the Drupal community has been suffering from a talent shortage for years.

Diverting Flow

Does everyone within your organization work on their highest value tasks at all times? Chances are if you completed exercise #3, then you identified certain high-demand individuals working on things that could easily be offloaded to someone else with much more available capacity. A great example of this occurs during the QA process for cross browser testing, which is something that nearly everyone in a Drupal shop can assist with. By identifying areas that can be offloaded to others, overall flow can increase.

Role vs Person Dependencies

It's incredibly helpful to have smart, effective people in an organization who can jump in and get things done. Unfortunately, with great success often comes an over-reliance on those same individuals. This can become a dangerous habit. If unresolved, a single person could become a dependency in many different stages, as you may have identified in exercise #3.

If you're still not sure whether you have a person-specific dependency, perform the hit-by-a-bus thought experiment: if this person were hit by a bus tomorrow and someone else had to step in, could you manage? If the answer is no, then you have a bona fide person dependency, which should be quickly resolved.

Documentation can be incredibly valuable, both for emergencies as well as for training and/or delegating certain tasks to others. It also externalizes all the tribal knowledge in everyone's head. Knowledge transfer of any kind ensures that there is at least some redundancy.

Reducing Waste

If you took the time to dive deep into exercise #3, you may have been shocked how simple five-minute tasks actually had to pass hands multiple times to be completed. Using this information, you can see where steps can be consolidated, eliminated, or replaced by fewer, more efficient steps.


Automation

Piggybacking off the concept of reducing waste, you may discover that certain steps require the same manual intervention over and over again, resulting in both human error and dependency on a particular individual. In these circumstances, automation can improve the throughput of work through the bottleneck.

A great example here might be the creation of a common virtual machine that a developer can download and boot up so that they spend as much time as possible on the task at hand rather than fighting with server settings and configurations.

Controlling Release of Work

It's counterintuitive, but how quickly work is released to all downstream teams can break an organization. For example, ramping up the number of converted sales while the team responsible for performing discovery is already overbooked will only result in thrashing. It's better to slow down the number of leads until some space is created by work leaving that stage.

Prioritize Internal Projects

In exercise #1, you may have identified a significant number of internal initiatives ranging in importance from mission critical to nice-to-have. Unfortunately, if the number of internal projects exceeds a certain threshold, you can cause both internal projects and client projects to stagnate.

A certain level of self-discipline is in order. Prioritize the top 5-10 initiatives that will truly result in a big impact (i.e., improve throughput in the bottleneck). By eliminating unnecessary work in progress (WIP), you've created the space to complete these important items, which then frees up more time and resources to work on the next batch.

Taming the Traffic Jam

Would these principles reduce the quantity and severity of traffic jams within a particular network? While we can never fully avoid the effects of a bad accident producing a mile-long parking lot, we can become proactive in how we can prevent, avoid, or work around more common situations (e.g. planned roadwork). The good news is that bottlenecks in a company’s workflow pose simpler problems with simpler solutions.

As Drupalistas and Drupal agencies, we need to improve our ability to execute, innovate, and ultimately deliver kickass Drupal websites to our clients.

Image: © Shutterstock/imageshunter

Nov 03 2015

Perhaps the most important workflow any of us encounter is the one least talked about: That is, the flow of, well, our work. Not of our content through a CMS, but of the process of building that CMS in the first place. Or, for that matter, any web project.

Many electrons have been spilled extolling the merits of Agile development and the evils of Waterfall development, the two popular workflow archetypes. The real world, as ever, is not that simple. Especially when we speak of consulting projects rather than in-house teams, pure Agile is rarely acceptable; it lacks long-term predictability (by design). And good design (both visual and technical) requires some level of up-front gestalt planning.

Every project is different, but what does an ideal project workflow look like? And how can we try to approach that ideal more readily?

Plan Ahead

There is an old British military adage: Proper Prior Planning Prevents Piss Poor Performance.

Up-front planning of a project is crucially important, especially when some stakeholder asks the all important question: “So will we be able to do it within budget and/or timeline?”

Good planning is how you are able to answer that question.

More important, though, that planning phase (often called “discovery”) is where the architecture of the project is defined. That architecture encompasses three different, closely-related areas: Content Strategy, Design, and Architecture.

Content Strategy

Content strategy is one of those fields that has only recently received a name, but has been around as long as good websites have existed. It can be thought of as the “architecture of information” (not to be confused with information architecture, which is a subset of content strategy), or perhaps design of information. Content strategy is concerned with questions such as:

  • What message do we want our users to get from this site?
  • What is the actual information we’re showing them in order to support that message?
  • What structure does that information have? How can we model it in a way that lets us do dynamic things with it?
  • How do those pieces of information relate to each other?

For our purposes, one of the most important parts of content strategy is content modeling; that is, designing an abstract skeleton of the content, abstracted away from any particular presentation, in a sort of platonic form. (For more on platonic content, see my blog post.)


Design

By design, we're talking specifically about visual design and user experience: The look and feel of a site, the user-flows through the site, the mood and tone the site should have, and so forth.

But that user interaction depends heavily on what the content actually is. What content is the user interacting with? What are the design components we need, and how do those relate to the content model? Are they aligned with each other, or at odds? If we don't know that, then we're designing in a vacuum, and that almost never ends well.


Architecture

Even the most brilliant content strategy needs a technical system that can house, maintain, and sustain it: architecture.

Not all CMSs are created equal. That's not a snark on any particular platform. Every CMS, Drupal included, has its strengths and weaknesses, and those will impact both the design and the content strategy. Just like planning a painting requires knowing whether you're working with acrylic or watercolors, a successful content strategy and design needs to know what its medium will be. A design that has no viable editorial process to back it up, for instance, will grow stale very quickly.

In the discovery phase, all three of these lines of work (content strategy, design, architecture) need to happen in concert. They cannot proceed waterfall-like from one to the next, because each one reinforces the others. The architecture may make certain seemingly difficult design concepts surprisingly easy (or vice versa), and certain design needs may necessitate certain content strategy decisions. And someone on the team (really, everyone) needs to consider, at all times, the editorial experience – that is the most easily forgotten part of planning, but the worst part to leave to the last minute.

The most important questions to ask during discovery are “What is this for?” and “How is it updated?”

From a workflow perspective, the content strategist, designer, and architect all need to be collaborating. (Sometimes one person can fill two of those roles, but very rarely three.) The tools they use can vary. User stories and acceptance criteria are common tools, but they are just tools and should not be considered sacred parts of the workflow: the sacred part is collaboration – a give-and-take effort between all three disciplines.

Making it Happen

Often when that three-part conversation doesn't happen, it's for business reasons. A design firm is hired first, then, when they're done, a development firm is brought on board. Or, in the case of an application rather than a web site, sometimes the other way around. Neither is good, because it robs both of the necessary feedback.

It's easiest if all three parties work for the same company, but as long as they are in regular communication, that's not a requirement. What is a requirement is that none of them has veto over the others.

One of the most important parts of a project is when the implementation and design are compared, and the team can push back and say, “This thing here is going to be vastly more complex than it looks in the mockup,” or “You know, we could do this other thing easily; would that enhance the content plan and design?”

The earlier that interaction happens, the better.

A design made with no regard for implementation complexity or content model will cease to be cool as soon as it leaves Photoshop; a content strategy that ignores the modeling capabilities of the CMS will be non-implementable; and an architecture that goes for the easy cookie-cutter will result in a site that is, well, cookie-cutter.

The Implementation

The output of discovery is a plan for how to implement all three aspects of the project: How it will be built, how it will be styled, and what tools will be used. Of course, that brings us to a German military adage: No plan survives contact with the enemy.

No matter how carefully the plan was laid, it will inevitably change before the project is done. There are two key reasons for that: first, the project requirements change, due to changing business circumstances, some senior executives deciding to leave their mark, or simply some need or nuance that was forgotten and not caught during planning; second, expectations involving the level of effort required to actually carry out the plan could have been off, and some feature may require substantially more work than anticipated (or, rarely but pleasantly, much less).

Make no mistake: those snags will happen on nearly any project. It's not a question of if, but when. For that reason, a good project workflow has no such concept as “final sign-off.” No part of the content strategy, design, or architecture can be treated as irrevocably locked. However, it should only be “unlocked” by the project team. They are the ones that are "in the trenches" (to continue the military theme), and therefore they are the ones who have the best sense for when a given decision needs to be revisited.

When the plan needs to be adjusted, all three elements – content strategy, design, and architecture – need to be accounted for. If they were developed in tandem, then any change that is logical for one is likely to be logical for the others, but not always. So ideally, the content strategist, the designer, and the architect will be available throughout the project to guide adjustments as needed.

Often, however, that is cost-prohibitive. If only one element can be kept throughout the duration of the project, I would argue that the architect is best suited to work solo. This is the implementation phase, and the technology is the part most likely to necessitate unexpected changes. Even then, the architect should be at least semi-fluent in content strategy and design, or have someone on the team who is.

Building Up

While the focus of discovery is on collaborative planning, the focus of the production phase is incremental enhancement. That brings us to the topic of iterations.

Agile loves its iterations, but often has an expectation that every iteration includes a shippable product. In practice, that is counter-productive on a consulting project, as opposed to a startup with a single application that is going to pivot every month. The goal of tackling one complete piece of functionality at a time, in order to have a small number of finished features rather than a large number of half-finished features, is laudable. But don’t allow that goal to result in wasted work.

This is an area where an in-house project and a consulting project differ greatly. With an in-house project, the end goal is generally more amorphous and subject to change. With a consulting project, the consultant was hired to solve a particular problem and that problem should remain relatively stable over the course of the engagement.

If the project requirements are rapidly and indefinitely changing, and the production team is expected to adjust without pushback, then it's not a consulting arrangement but rent-a-coder. There's nothing necessarily wrong with that arrangement, but the two are not the same.

For example, we said we should have a plan for the complete content model before production begins. In Drupal-speak, that means the content types, views, taxonomies, and so forth are known up-front and often will depend on each other. In practice, it is generally far more efficient to build out all content types and taxonomies at once (especially in Drupal 7, where they have to be Feature-ized) than to spread it out piecemeal over several iterations. If you've done it properly, then you already know every field name, because some will be shared between content types. So just build them all at once and save time.
In my experience, while the content model will shift a bit during production, it rarely changes radically unless the planning was faulty or the business needs change radically. More often than not, the build spec we start with is 95% the same as the one we end with. Some content types or view modes may not get themed until much later, but it doesn’t harm anything to have them built out.

Various steps will also have dependencies. User stories, ideally, do not have dependencies on each other. However, I have yet to see a project where that was universally the case. Certain features or design components only make sense in the context of some other feature or design component. While that should ideally be minimized, it cannot be eliminated. That will also dictate what work happens in what iteration.


In a consulting workflow, each iteration is valuable not so much for the ability to pivot and reprioritize but as a way to have frequent “stability checkpoints.” Each iteration should be its own QA testing. A large QA period at the end of the project is an indication that there was insufficient QA earlier, which is a bad workflow: it encourages the accumulation of technical debt (or design debt, as the case may be).

Instead, a rapid turnaround time for the team to validate its work is crucial. If problems are found that cannot be fixed within the same iteration, they should be prioritized for the next iteration. Bugs before features. While hard for some product owners to swallow, prioritizing features over bugs only encourages bugs to breed. Technical debt piles up, errors increase, and the last few iterations need to abandon feature development entirely just to fix everything that was left over from before.

Don't let that happen to you!

If the number of issues found at the end of an iteration is higher than desired, don't ignore them for the sake of feature velocity. Stop, fix them immediately, and improve the in-iteration QA process for future iterations. A good feedback loop addresses the root of a problem as soon as possible, rather than leaving it for later.

Another part of validation is deployment. Every iteration should end with the latest code running on a production or production-equivalent environment. Not doing so is the same as having a design flaw or code bug that is left unaddressed. The longer it is left to fester, the worse it gets.


No two projects are the same, yet the processes by which they are addressed should be. The underlying principles for a successful consulting project workflow are fairly consistent:

  • Plan ahead, in a cross-disciplinary fashion. Content strategy, design, and architecture must all feed into each other.
  • Accept that the plan will change, but don't change it too readily without cause.
  • Iterate up from a strong foundation.
  • Fix problems immediately; the longer they linger, the more it will hurt feature development.
  • Deploy early and often, even if a given release is not shippable; deploying when it does become shippable will then be easy.

And that's a successful workflow plan.

Image: "The Great Giza Pyramids" by Filip Maljković is licensed under CC BY-SA 2.0

Oct 28 2015

Fabulous colorful future illustration

In previous episodes, Marty McFly time-traveled to 1955 … Oh wait, that’s the wrong article.

So: In the previous issue of Drupal Watchdog, we introduced the Content Preview System (CPS). With CPS, you can preview a site as it will look in the future and track changes to the site in so-called ‘site versions’. With this powerful concept, it is possible to view the site as it will appear after the changes have actually been made. Once all interested parties are satisfied with the outcome, the site version is published and all the changes go “live.”

There are two main advantages to this approach: Being able to publish several things together in context (e.g., articles with supporting sidebar or news content, an article series, a feature gallery, etc.), and being able to tightly control the workflow of content that is shown publicly to your visitors.

While this concept for a single editor is already powerful, it really starts to shine once there is a content team.

And that’s what this article is about: Extending the workflow of CPS to work for teams with different roles, permissions, and responsibilities.

You Will Learn To:

  • Set up CPS Workflow Simple for three different roles;
  • Use CPS reviews to enhance your workflow;
  • Set up an entity-based workflow that is part of the larger workflow;
  • Extend workflows.

A Simple Workflow with CPS

In the following example, it is assumed that the entities you use are revisionable (e.g., use file_entity_revisions, taxonomy_revision, etc. modules) to make full use of CPS. Only revisionable entities will work with the CPS workflow.

To get started, download the CPS, Diff, and Drafty modules and have drush download any other dependencies:

$ drush dl cps drafty diff
$ drush -y en cps_workflow_simple cps_node diff

This will download the following modules to your code base: cps, ctools, diff, drafty, entity, entity_status, iib, mailsystem, mimemail, views

After this step you will see a nice “Site version” widget below the admin and shortcuts menus.

Image A. The CPS simple workflow consists of a Draft, Review, and Publish phase.

As described in the previous issue, you can then create a new “Site version”, change items on the site, preview the changes, and, finally, publish them.

For the setup of the CPS simple workflow, you will need to set up the roles and permissions per the Workflow Roles table (see Workflow Roles And Permissions sidebar).

CPS Workflow adds a new review state to the workflow, Draft » Review » Published.

Previously, any added site version was directly published; now it is first submitted for review. For this to work correctly, it is important that the content editor does not have permission to “Publish site versions” and that the cps_workflow_reviewers_email is properly set up.

$ drush vset cps_workflow_reviewers_email [email protected]

In this case, the reviewer’s e-mail is assumed to go to a group of people who are equally responsible for the content.

Once this is set up, the link to publish a “site version” on the site version edit page will be replaced with a link to “Submit for review”.

The Simple workflow records a message for each workflow state change, allowing editors and publishers to communicate changes. This is comparable to GitHub comments on pull requests. (For a more entity-centric communication, see the “CPS Reviews” section later in this article.)

After the “site version” has been submitted for review, the editor can withdraw it again if they want to make more changes.

While not included in CPS Workflow Simple itself, it is usually a good idea to deny edit access to any entity that is part of a changeset in the “review” stage. That way, last-minute edits by an editor cannot accidentally be published. (See the “Extending the Workflow” section below for an example of how to achieve that.)

The team of content publishers will receive an e-mail notification once the “site version” has been submitted for review; clicking on the link in the e-mail will automatically bring them to the “site version” edit page. There, the content publisher will see a list of all changed or newly created nodes and can then, after their review, either decline to publish the changeset with a message or publish it.

Depending on organizational workflow, the publishing might also happen at a later stage; however, no state for this exists in the Simple CPS workflow.

After publishing, the original editor of the changeset gets an e-mail notification that their “site version” was published.

Useful hint: As a last measure, a changeset that was published too soon, or should not have been published at all, can be unpublished again. However, only the last published changeset can be unpublished at a time; after that, the next-to-last, and so on.

CPS Reviews

One useful submodule when dealing with content changes is the CPS Review module. While the Simple CPS workflow is mainly tailored to small changesets, once there are numerous changes to different nodes it can be quite problematic to communicate all the needed adjustments via one “review” comment field. (Similarly, on GitHub pull requests it makes more sense to provide contextual, inline reviews.)

By enabling the cps_review submodule –

$ drush en cps_review

– every entity gets a new review form that is unique per changeset. Therefore, changes can be discussed in context.

After setting up appropriate permissions, CPS reviews can be added, replied to, and, ultimately, unpublished. (Compare that to “resolved” for comments in word processing programs.)

To find reviews that have been accidentally unpublished, go to Admin » Structure » CPS Reviews, find the review in question, and re-publish it – assuming you have the appropriate permissions.

CPS per-Entity Workflow

For many cases the simple workflow is sufficient, but more advanced cases need the granularity of approving single entities within a changeset.

With the help of the cps_workflow module this can be achieved. However, by itself, cps_workflow doesn’t do anything; custom code is needed to provide the needed workflow states and transitions. You can find a full sample of CPS workflow modules at http://wdog.it/5/2/cps.

Download and enable the modules:

$ drush dl cps_workflow
$ cd sites/all/modules
$ git clone git://github.com/LionsAd/cps_workflow_sample
$ drush -y en cps_workflow cps_workflow_sample

This will enable a new tab called “Workflow” on every node, which allows you to control the per-entity workflow.

In our sample workflow, the life of an entity workflow goes from a draft stage to a review stage to approval.

While using cps_workflow_simple together with cps_workflow is supported, it is not mandatory. A possible workflow scenario here would be to provide the full changeset again for review to a supervisor after all content has been approved individually, before a content release manager publishes the changeset.

Again, a more extensive implementation might choose to lock down content that has already been approved so that no more changes are possible.

Sample CPS per-Entity Workflow

The following is a sample entity workflow to get started with:

/**
 * Implements hook_cps_workflow_info().
 */
function cps_workflow_sample_cps_workflow_info() {
  $workflows = [];
  $workflows['node'] = [
    'entity path' => 'node/%node',
    'default state' => 'draft',
    'default transition' => 'new draft',
    'states' => [
      'draft' => ['label' => 'Draft', 'weight' => -10],
      'review' => ['label' => 'In review', 'weight' => 0],
      'approved' => ['label' => 'Approved', 'weight' => 10],
    ],
    'transitions' => [
      'new draft' => ['label' => 'New Draft', 'valid states' => ['system only'], 'state' => 'draft'],
      'ready for review' => ['label' => 'Ready for review', 'valid states' => ['draft'], 'state' => 'review'],
      'approve' => ['label' => 'Approve', 'valid states' => ['review'], 'state' => 'approved'],
      'back to draft' => ['label' => 'Back to Draft', 'valid states' => ['review'], 'state' => 'draft'],
    ],
  ];
  return $workflows;
}

First of all, the entity path per entity type needs to be specified. While every entity type could provide that itself (entity_translation, for example, has a quite complex helper method to find that path), cps_workflow simply expects the user to provide the path.

Next, the default state and default transition are provided; they are assigned whenever an entity is created or modified within a changeset. This ensures that the entity is in a known state before further transitions are applied.

Then, a list of states is provided, in this case: draft, review and approved.

Finally, a list of transitions is given, and each transition defines what valid states this transition can take place on.

Note: You can specify custom transition keys and then use hook_cps_workflows_alter(), for example, to change properties of the defined workflows.
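As a minimal, untested sketch of such an alter hook (the module name and the fast-track transition are hypothetical, and this assumes the sample node workflow defined above):

```php
/**
 * Implements hook_cps_workflows_alter().
 */
function mymodule_cps_workflows_alter(&$workflows) {
  // Hypothetical: add a fast-track transition that lets a draft be
  // approved directly, skipping the review state.
  if (isset($workflows['node'])) {
    $workflows['node']['transitions']['fast track'] = [
      'label' => 'Fast-track approve',
      'valid states' => ['draft'],
      'state' => 'approved',
    ];
  }
}
```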

The full sample workflow is pictured in Image (B). More custom work is needed to ensure, for example, that a changeset can only be published once all of its entities are in the approved state.

Image B. The CPS per entity workflow allows each entity to go through various workflow states, provided by the cps_workflow_sample module.

Extending the Workflow

Workflows usually have one thing in common: They are very different for every organization.

Therefore, CPS can be easily extended, using a variety of hooks, for almost every use case you can think of. Especially if you add the Rules module (using the normal entity update hooks and checking for updates to the changeset entities), you already have a highly flexible system for extending your workflows.

A summary of CPS-specific hooks can be found in the Useful Hooks and More sidebar.

As an additional example to help get you started, let's consider the following code. It denies access to entities that are currently in a changeset with the status “in review” and makes an exception for users who are allowed to publish content (e.g., the content publisher). They can still make small edits before publishing the changeset.

/**
 * Implements hook_node_access().
 */
function mymodule_node_access($node, $op, $account) {
  // Only deal with create, update and delete.
  if (!in_array($op, array('create', 'update', 'delete'))) {
    return NODE_ACCESS_IGNORE;
  }
  // Check only non-live changesets.
  $changeset_id = cps_get_current_changeset(TRUE);
  if ($changeset_id == CPS_PUBLISHED_CHANGESET) {
    return NODE_ACCESS_IGNORE;
  }
  $changeset = cps_changeset_load($changeset_id);
  // Allow users with the publish changesets permission to do last-minute edits.
  if ($changeset->status == 'review' && !user_access('publish changesets', $account)) {
    return NODE_ACCESS_DENY;
  }
  return NODE_ACCESS_IGNORE;
}
With CPS, a powerful and flexible system is provided that can be tailored to suit the most simple to the most advanced editorial workflows.

Workflow Roles And Permissions

1. In most organizations, the following roles are present:

  • The Content Editor is responsible for adding and editing articles.
  • The Content Publisher is responsible for verifying the changes and publishing them “live.”
  • The Content Administrator manages all content on the site.
  • A Drupal Administrator has all permissions.

2. The following permissions matrix is used to set this up:

  • Administer site versions permission – for the editor, publisher, and administrator – is necessary for the other permissions to work.
  • Edit all site versions allows the administrator to edit all site versions, even those owned by other people.
  • View site versions allows viewing and changing – by the editor, publisher, and administrator – between different site versions, via the widget.
  • Publish site versions grants permission to the publisher and administrator.

“Bypass content access control” should not be given even to the “Administrator” role, because it can allow them to circumvent CPS under some circumstances.
3. In addition to those permissions, there is also:

  • Preview site versions, which allows viewing a site version when the ‘changeset_id’ parameter is provided manually on the URL. This is useful to allow less-privileged roles to see what the page would look like in the future.

Useful Hooks and More

The following CPS-specific hooks are useful to provide your own workflow.
Note: A “site version” in the UI is called a “changeset” in the code:

  • hook_cps_changeset_states_alter() allows adding or removing workflow states that are recorded in the changeset history.
  • hook_cps_changeset_operations_alter() allows adding operations, like ‘submit’ and ‘decline’, to the CPS changeset operation links displayed on the overview page and in listings.
  • hook_cps_changeset_access_alter() allows changing the CPS changeset access based on your custom workflow rules.
  • Since each changeset has a history, hook_cps_changeset_history_status_alter() can be used to provide useful texts like “Declined” in the history tab.

For sending e-mails on entity updates, either hook_entity_update() or the Rules module can be used, making use of the $entity->status and $entity->oldStatus properties.

Image: ©Prisca Haase

Oct 26 2015

Drupal websites are not just applications, they are complex applications. They encompass design, functionality, and content, and they’re updated at a furious pace.
In order to master the process for iteration, developers need a mental model for the interplay of code, content, and configuration.

These comprise the modern website’s Holy Trinity.

The Minimum Viable Workflow

Safely making frequent changes to a running website requires some basic capabilities, which I call the Minimum Viable Workflow. In a perfect world, all of these workflow steps are scripted or completely automated, leading to lower friction for developers, and improved reliability via reduced human error.

Three Environments

Being able to iterate requires a minimum of three separate operational instances of the web site; three environments. This is necessary to separate development work from quality assurance and approval, and from the actual live or “production” environment for the site. For safety, these environments must be isolated from one another. Development work can have unpredictable side-effects and cannot be allowed to disrupt the production environment. Likewise, quality assurance and testing should not block development.

These environments must also be as close to identical as possible. Unexpected surprises after deployments and the phrase “it worked on my machine” are the most frequent slayers of team agility and release cadence.

Minimum Workflow Steps

In the MVW (Minimum Viable Workflow), a developer does her work in the Development Environment. While she is active, that environment is unstable; it may appear broken or incomplete to an outside observer.

Code changes are committed to version control, creating a reliable, restorable record of progress. When the work is ready for a stakeholder (e.g., project manager, site owner, QA team) to review and approve, the developer deploys it into the Test Environment.

Deploying into Test often also includes cloning the content (and if possible the configuration) from the Live Environment. For minor changes, this may not be necessary, but for a non-trivial release you want to see a precise preview of what deploying into production will look like.

After approval is granted, the code changes are deployed into the Live Environment and, thanks to high-quality testing, there are no surprises: the site users and administrators get access to the new design or functionality, and there is much rejoicing.

Now it's time to start iterating again!

Before the next cycle begins, the developer will want to refresh her environment with the latest content and config from Live. Working against an up-to-date picture of reality reduces errors. (See Image A)

If the Live environment is fast-moving, or the current work in progress lags by a few days, performing refreshes as part of the normal developer workday is advisable. This is usually up to the judgment of the developer, although for larger teams on complex projects, a team-wide standard is a good idea.


A Note About Syncing Content

In this workflow, the only reliable and rational way to synchronize the content between different environments is to do a complete copy: Partial synchronization or “database diffing” is asking for trouble. On the filesystem, copying between different environments can be done efficiently with rsync or other utilities.
This can be a challenge for large websites. Data has mass, and for sites with significant content these synchronization operations take time. Where this time cuts into team velocity, it is possible to use a “representative sample” by pruning the database of older content and then syncing with the trimmed database dump.

However, testing code changes against a pristine copy before release is still a must.

The pattern of “pulling data back” also guarantees that any configuration stored in the database — which, by default, is most configuration — is reliably shared between all environments. This helps keep developers on the same page when creating new functionality.

Feature Branching for Development

Websites are usually a team sport, but trying to share environments is a recipe for frustration, and trying to organize parallel work on separate features is a recipe for disaster; for instance, one developer’s changeset overwrites another’s or creates unexpected incompatibilities. Events like these destroy productivity and lower team morale.

The answer for this is to utilize multiple development environments along with a version control pattern known as “feature branching.” A developer or team will make a branch for their work, build on that branch, and then propose a merge back to the main-line (usually master) when work is complete and ready for release. (See Image B)


This allows multiple features to proceed without interfering with one another and creates a clean process for integrating changes. If a single release involves both features, teams are able to coordinate integration by merging between branches. If one feature goes out before the other, the feature still in development will pull in changes from the released feature via the master.

If feature work is very ambitious or moving at a slow pace, it may be advisable to set up a separate branch specifically for integration. This allows you to keep the master branch clear for small tweaks and bug fixes for production, while any hairy challenges of integrating larger feature changes are worked out on the side.
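The branch-and-merge cycle described above can be sketched in a throwaway repository (branch and file names are examples, not from any real project):

```shell
# A minimal sketch of the feature-branch cycle in a scratch repository.
rm -rf /tmp/branch-demo && mkdir -p /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git symbolic-ref HEAD refs/heads/master   # make the main line "master"
git config user.email "dev@example.com"
git config user.name "Dev"
echo "base" > index.txt
git add index.txt
git commit -q -m "Initial commit"

git checkout -q -b feature/search         # branch off master for the feature
echo "search" >> index.txt
git commit -q -am "Add search feature"

git checkout -q master                    # integrate when the feature is ready
git merge -q --no-ff -m "Merge feature/search" feature/search
git log --oneline                         # base commit, feature commit, merge commit
```

The --no-ff flag forces a merge commit even when a fast-forward would be possible, which keeps a visible record of where each feature branch was integrated.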

Embracing Change

The lifecycle for the average website is no longer one big launch every few years. This industry-wide shift towards continual improvement and iterative innovation demands new patterns for professional development work. As developers we should celebrate and embrace this shift; we have much to gain from the cycle of build, measure, learn.

This workflow may seem onerous to some, and there are certainly simple situations and sites that can get by with less. I emphasize simplicity, in large part to help teams avoid modeling the complexity of their organizational structure into the system, an urge which usually does more harm than good.

However, as Drupal powers more ambitious websites with larger and more diverse teams, these workflows will become more widely demanded and practiced. Much like the adoption of version control, most developers I know who have started using a multi-environment workflow have little desire to regress to the old ways.

Doing it right is the ticket to great rewards.

Elements of a Drupal Website

  • Code: Website code includes Drupal core, any contrib or custom modules, and the theme. All code must be tracked in version control. The Drupal project itself – and most of the industry – has standardized on Git as the version control system of choice; the workflow described here relies on the functionality and conventions of Git.
  • Content: The data that fills up the website is its content. With Drupal, content can be created, edited, or deleted without a developer’s involvement. That's the whole point, and that means how we manage content is critical to our success.
  • Configuration: The third component of a Drupal website is its configuration – application settings which control how the code and content interact, often exposed through the admin user interface. Configuration is traditionally a big challenge. In versions prior to Drupal 8, there is no core method for how configuration is described or managed. The best answers are CTools exportables and interfaces like the Features module. In the best case, these contributed modules allow you to export configuration to code, where it can be tracked in version control and deployed along with accompanying functional or style changes. Managing configuration means extra work up front for developers, but for complex projects that see rapid iteration, it is well worth the investment.

Image: "Confluence of the Inauc River" by Sundeep Bhardwaj is licensed under CC BY 3.0

Oct 23 2015

The core Drupal community is notorious for obsessing over every little detail that is submitted as code. Single issues can have hundreds of comments and, at their very worst, can take years to be resolved.

As a community, though, we know that this obsession results in a much better product. Code quality comes at a cost: time. It is nearly impossible to both comprehensively review code and commit code quickly. But the upfront time costs for peer review will save you time down the line. Teams I've worked with have caught typos, security vulnerabilities, broken styles… you name it, we've caught it before it was deployed, thanks to the peer review process.

The remainder of this article outlines the step-by-step process needed to conduct a peer code review.

Working on New Code

Each time you start new work, make sure your local environment is as pristine as possible. Ideally, this would also include downloading a fresh copy of the database from your production server to ensure there are no half-baked Feature settings which could dirty your export.[1]

With the build environment as clean as possible, you're ready to start.

  1. Check out the development branch that you will later merge your work back into. This is typically called dev, and the command is git checkout dev.
  2. Ensure the branch is up-to-date. Ideally this is done with rebasing (git pull --rebase=preserve) or, in a simplified workflow, you would use merging strategy to update the branch (git pull).
  3. If there are Feature updates, you may need to import these changes into your local database. You can do this with drush fra --force --yes.
  4. Create a new branch.
    git checkout -b XXX-terse-description.

    XXX represents the ticket number from your bug tracker, and terse-description should be replaced with a description of the problem.

You are now ready to begin working on your branch.

Go for it!

Exporting Your Code for Review

Once your work is complete, you're ready to share your work with your teammates.

  1. If your site is using Features, export your feature. (If it's a new feature, I typically do this through the web interface, otherwise I use the command drush fu).
  2. See what has changed, using git status. Look for features which have not been cleanly exported and any other files which may have changed unexpectedly. Using git diff, you can review each of the files to see exactly what has changed.
  3. If everything looks in order, stage your changes with git add -A, and commit them to the repository with git commit. Your text editor will open; write a detailed commit message outlining why you made the changes you did.
  4. Upload your changes to the server:
    git push remote_name XXX-terse-description. The name of your remote is probably origin. To get a list of all remotes connected to your local repository, use: git remote show.

You are now ready to ask for a review of your code.

Importing Code for Review

So you have been asked to review code. Fantastic! The first thing you need is a copy of the branch which contains the new code:

  1. Download the latest version of all remote branches:
    git fetch.
  2. List all remote branches: git branch --remote.
  3. Create a local review copy of the relevant branch:
    git checkout --track remote_name/XXX-terse-description
  4. If there are features, activate them in your database:
    drush fra --force --yes.
  5. Ensure everything displaying in the browser is up-to-date by clearing Drupal's cache: drush cc all.

Conducting Your Review

Take a look at the proposed changes with the following commands:

  • Review the log of commit messages:
    git log --oneline --graph.
  • Compare the code against the current version of the development branch: git diff dev.

When conducting a review, you should ensure that the new code:

  1. Solves the problem identified in the ticket (or adds the Feature as described);
  2. Conforms to relevant coding standards (including spell-checked against the correct dictionary);
  3. Is limited to the scope identified in the ticket;
  4. Does not introduce any new regressions (including performance regressions, layout bugs, etc).

Hopefully, the code passes your inspection. If it doesn't, it's a good thing you were there to catch things!

Report any problems back to the original coder via your bug tracker. You may report your findings as a comment on the original issue, or via an inline code review if that functionality is supported by your bug tracker of choice.

Accepting and Merging a Branch

Assuming the code passed review, you may now merge the approved code back into the shared development branch.

  1. Check out a local copy of the shared development branch. git checkout dev.
  2. Ensure your local copy of the development branch is up-to-date: git pull --rebase=preserve.

Ensure that the work you're about to merge into the development branch is also up-to-date.

  • git checkout XXX-terse-description
  • git rebase dev

Assuming there were no rebasing errors, you are now ready to merge the two branches together. (If there are errors, the original developer is the most qualified to fix the problem. Ask them to bring their work up-to-date.) The final merge is done from the destination branch.

  • git checkout dev
  • git merge --no-ff XXX-terse-description

The updated branch can now be pushed back up to the server for others to base their work on.

Once the work has been merged in, you can clean up the local and remote copies of the ticket branches.

  • git branch -d XXX-terse-description
  • git push remote_name --delete XXX-terse-description

Customize to Suit Your Needs

This process was generalised for the article. You may add a few extra steps, especially regarding who is responsible for merging in the code (an automated gatekeeper? the original developer?).

Please do copy this article into your local documentation! Hack it up and customise it to suit your needs.

[1] The Features module allows you to export database settings to code. This allows you to easily transfer configuration information for Views, Content Types, and more from one environment to another.

Image: "I scribble a lot" by Nic McPhee is licensed under CC BY 2.0

Oct 20 2015

Organizations of all types need individuals and teams to be able to create new online content and modify existing content, submit those changes for approval, and approve or reject that work. All participants in the workflow must be able to see the status of each piece of content in the system and be promptly notified when they must act on it, to keep the process flowing smoothly. Drupal 7 is ideally suited as a framework for building a website that supports such a workflow – with numerous pertinent modules that operate well together.

We will examine some key modules for building this sort of website, as well as how to configure them and set permissions for some basic types of roles invariably employed in such workflows. Our goal is to set up the system so that the mechanics of the workflow are as automated as possible, but with a minimum of complexity.

The Usual Suspects

There are innumerable core and contributed modules that can be employed for crafting an effective workflow, and they tend to be grouped into three categories corresponding to different strategies for implementing whatever type of workflow is desired: Revisioning, Workbench Moderation, and the venerable Workflow.

Choosing one strategy does not preclude using modules geared toward another strategy. Here we will be using the Workflow approach, but it is good to be aware of the Revisioning module – which allows permitted users to create, moderate, and publish content revisions – and Workbench Moderation – a popular alternative which permits moderation down to the revision level.

For any basic content editing and publication system, the following modules can be utilized in some combination:

  • Workflow, a critical component, allows you to define workflow states for each Drupal entity type, the transitions that each entity instance makes as it moves from one state to another, what transitions can be performed by each user role, and what optional actions occur for each type of transition.
  • Workflow Content Permissions implements permissions to view or edit the fields of any content type that is in a workflow.
  • Workflow Extensions makes possible a range of workflow user interface improvements.
  • Workflow Post-Install is invaluable when you add workflow functionality to a site with existing content, which consequently has no state. This module lets you set a state en masse.

Letters of Transit

We will demonstrate how to easily put together the essential elements for controlling the transition of content through a workflow. In our simple example, we envision a content writer submitting a new article, which must be vetted by a moderator to confirm that it is appropriate for the website. If approved, an editor will likely polish its content, prior to submitting it to a publisher for final approval before it goes live.

To follow this demonstration, install and enable the Workflow module, including its submodules Workflow Field and Workflow UI. Be sure to assign sufficient permissions for your chosen roles – in this case: writer, moderator, editor, and publisher.


Set the node permissions as you normally would, so that writers can create new content and edit their own, editors can edit all content, all participants can view unpublished content, etc. Also, for the content types of interest (in this case, articles), set the publishing options so new content is not published by default.

To create and manage workflows, we use the workflow configuration page (at the Drupal path admin/config/workflow/workflow). Every new workflow has a descriptive name and a machine name.

Once a workflow has been created, you can add an unlimited number of states, as well as transitions among them. Here we define a few basic states to accompany the built-in "(creation)" state, which is the starting point of any entity instance when it is first created.

With the states defined, we can specify which transitions are permitted. These include the obvious ones in which the piece of content moves forward in the flow; e.g., the writer submits it to the moderator by changing its state to "Moderate". You should also allow for backward movement, presumably for further revision of content; e.g., the moderator sets the node's state back to "Write" to indicate to the author that it needs further work before it is ready to be moderated again.

For the "(author)" pseudo-role, you must specify at least one state to which a new node can transition from the "(creation)" initial state. Consequently, no new content can be left in the "(creation)" state, making it effectively a pseudo-state.

Why do we have a separate and seemingly redundant "Write" state when we already have the "(creation)" state? Why not simply instruct each author, when a new piece of content is ready for moderation, to change its state from "(creation)" directly to "Moderate" and skip the "Write" state? The latter is needed so a moderator can send the article back to the author for further work, because one cannot set a node's state back to "(creation)".

For each content type you wish to manage with the workflow, add a new field to the content type with the field type of "Workflow".

Within the field settings, optionally enable the workflow history permissions for all relevant roles.
Users of the appropriate roles will now be able to see the current workflow state of each node, as well as change that state to whatever states you have permitted for that role. For instance, when creating a new article, the writer will see that the single state "Write" we specified earlier is indeed the default.

Here's Looking at You

Participants in the workflow naturally would like to see what content they can work on at the moment, without having to sift through individual nodes. Thus, they can benefit from views that display those nodes; e.g., a views block can show the moderator which articles need her attention. These additions to the website are facilitated by the Workflow Views submodule. After enabling it, set the "Access workflow summary views" permission for the relevant roles.

When creating such views, be sure to remove the "Content: Published (Yes)" filter criterion. For the workflow state, do not use the raw workflow state field, but instead the appropriate "New state name", so you can compare its value to an actual state name ("Moderate") and not its non-descriptive integer value in the workflow_states table.

You can also use the prebuilt view, "Workflow dashboard" (at workflow/summary, by default), which displays a summary of all of the nodes, their current states within the workflow, etc. If the page is reported to not exist, check the access setting in the view's page settings.

A dynamic list is fine, but more immediate notification could be valuable and can be implemented using the core Trigger module in conjunction with the Workflow Trigger submodule. (An alternative is the Rules module, which allows you to automate various tasks and actions when certain events occur, based on conditions and contextual information.) For example, on the Actions configuration page (admin/config/system/actions), you could define a new advanced action to send an email message to the moderator.

On the Triggers configuration page (admin/structure/trigger/node), you would then assign that action to the trigger that fires when an article node transitions from "Write" to "Moderate".

The Beginning of a Beautiful Friendship

There is much more that can be done with these modules, as well as others not mentioned here. Yet hopefully this introduction has demonstrated how these modules can work effectively together in order to help any organization set up and monitor the path of each piece of content from one stage in its development to another.

Image: "Sunset shot of Hassan II Mosque" by Marshallhenrie via Wikimedia Commons (Own work) is licensed under CC BY-SA 4.0

Oct 16 2015

Last year, we had a crisis in our company. We had been imploring our landlord to give us more space, but a few months after he finally did, our office emptied out. Some days we had just one or two people showing up. Not only was it an epic waste of money, but the office felt empty – and the solitude was hurting morale.

For almost six years, I have been working in a fairly unorthodox way. All of my company's employees are located in Szeged, Hungary, but I work from my home office in Belgium. When I tell people that I am a remote boss, one of the first things they ask is how often I travel to Hungary. They are puzzled when I tell them that I typically travel back to Hungary every three months or so. How can I lead the company when I’m not there? How do I make sure people do their jobs?

This was never really a problem. I started the company with my wife when we still lived in Hungary, and we were able to build a terrific enterprise with people we could trust. We also had the necessary tools in place to enable remote work. Between Skype, Google Docs, ticket management systems, and e-mail, we were able to run the business without any real issues. We had our office in Szeged, a great office manager who handled all the paperwork, and that was that.

But as I said, that changed abruptly last year, as soon as our office space expanded: several colleagues who had been with us for a long time left the company and, after a policy change that accommodated working from home, people started working remotely. Later, we merged with another Drupal agency that had never had an office. As a result, during the winter days, when snow was falling and the weather was freezing, few people felt inclined to come in.

There were some weird aspects to this. Often the same people who complained about the empty office and low morale were also working more from their homes. Even if several people preferred working centrally, there were not enough colleagues at any given time for the office to reach critical mass.

Empty rooms combined with the loss of longtime staff members made it feel like the apocalypse was upon us. Some people even urged that we clamp down on remote working and go back to an office-centric situation. But by then, that would have been a really big hit for morale, so, after some scary moments, we decided to counter the loss of cohesion we felt by initiating a series of tribal moments:

  • We instituted quarterly company meetups that our remote colleagues were required to attend.
  • We created a weekly company standup, a one-hour call in which each employee could talk about their week, and management could share strategic announcements and news. (This is not scalable but at 25 employees it still works.)
  • We started peer-to-peer one-on-ones in which, on a weekly basis, a remote employee could chat with a random office-based colleague, to strengthen bonds.
  • We initiated a #coming-to-the-office-is-rewarding campaign that, in our weekly newsletter, highlighted something interesting or amusing about office life.

Today, I am happy we didn’t return to a centralized office. Our company is now stronger than ever; I think the option to work remotely has become an important part of our lives. Giving our employees more freedom to choose makes us a better company to work for. I also believe that it has made our company more scalable.

Our transition is not yet final: in the end, we want to balance remote working with additional co-working places. After the summer, we will be opening a small office in Budapest where our colleagues can meet up to collaborate; we have plans to make it possible for people to sleep over when they attend a meetup there. In Belgium we are opening a similar facility, so that colleagues can join me for a retreat and buckle down for a sprint on a project. We would also like to open our office to freelancers.

I believe that most people are not built for a sedentary life. The ability to work in different places, with different people, is a great way to pull you out of your routine. And freedom from routine is crucial for inspiration and personal growth. My colleagues are like family to me; I am delighted to share that freedom with them.

These Are the Tools We Use:

  • Slack is a messaging system that provides a channel for our projects and teams. It lets us combine asynchronous interactions with notifications from our collaboration tools.
  • Google Docs enables us to work together on documents in real time or through a series of comments.
  • Google Calendar allows us to schedule meetings with colleagues and overseas clients. (The time zone app in Google Calendar Labs is super-handy.)
  • We use Jenkins for standardized automated deployments.
  • Redmine lets us document requirements, track issues, and log the time we spend on them.
  • Bitbucket is a code repository our developers can use to merge their efforts.
  • We use Skype and Google Hangout for small meetings where we want video chat or screen sharing.
  • Teamspeak is used for large meetings, like our weekly company calls.

Image: "Teliris VL Modula" by Fuelrefueld via Wikimedia Commons (Own work) is licensed under CC BY-SA 3.0


Oct 15 2015

This came up while manipulating taxonomy children and their children recursively, so it’s not as far from Drupal as you’d think. First, we will learn how to create recursive closures, which is in itself a bit tricky. Then, as an option, you can learn about getting rid of them, which is really tricky.

Let’s say for the sake of simplicity we have a factorial function:

function factorial($x) {
  return $x ? $x * factorial($x - 1) : 1;
}

This is a classic, horribly inefficient way to calculate a factorial (see http://cs.stackexchange.com/a/14476/10273 for a more efficient way, but that’s not our topic today). Now let’s say you wanted to do this in an anonymous function:

$factorial = function($x) use (&$factorial) {
  return $x ? $x * $factorial($x - 1) : 1;
};

Although the variable $factorial does not exist when use (&$factorial) is reached, the reference sign will force PHP to create the variable with a value of NULL and not throw a notice. However, when PHP finishes evaluating the function expression, it will assign the closure to $factorial, so that, inside the closure, the same function can be called. Neat trick.

However, this is somewhat ugly, requires a reference, and won’t work if you change the variable $factorial outside of the closure, which violates the principle of a closure being, well, a closed thing.

Instead, we could do this:

$factorial = function($func, $x) {
  return $x ? $x * $func($x - 1) : 1;
};

Now you need to call $factorial($factorial, $x), which is uglier but much more correct. Still, it’s quite error-prone: if you pass in something else as the first argument, or nothing at all, it breaks. We can fix this with the following function:

function fix($func) {
  return function() use ($func) {
    $args = func_get_args();
    array_unshift($args, fix($func));
    return call_user_func_array($func, $args);
  };
}

$original_factorial = function($func, $x) {
  return $x ? $x * $func($x - 1) : 1;
};
$factorial = fix($original_factorial);

What happens if you call $factorial(5)? Well, $func inside $factorial equals $original_factorial, so when the array_unshift runs, it puts fix($original_factorial), which is the same as $factorial, in as the first argument. It then calls $original_factorial with $func = fix($original_factorial) and $x = 5.

This is also called “fix” because it is (almost) a fixed point combinator. If that sounds familiar, perhaps it’s because you’ve heard of the most famous one: the Y combinator. Now, our example fails some of the fixed point combinator criteria, but it’s pretty close. And it does allow the chief practical programming task: recursion on a higher-order function!
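Putting the pieces together, here is a self-contained version of the technique described above (the variable names are mine):

```php
// Wrap $func so that every call receives a self-reference
// as its first argument.
function fix($func) {
  return function() use ($func) {
    $args = func_get_args();
    array_unshift($args, fix($func));
    return call_user_func_array($func, $args);
  };
}

// The recursive step, written as an ordinary higher-order function
// with no references and no free variables.
$factorial = fix(function($func, $x) {
  return $x ? $x * $func($x - 1) : 1;
});

echo $factorial(5); // 120
```

Note that each call to the wrapper re-invokes fix($func), so this trades a little overhead for purity.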

Oct 14 2015

Drupal 8 comes with plenty of new features: the high visibility ones, like CKEditor or Views in core, and those less obvious but equally pivotal to Drupal 8’s strength and flexibility, like the Entity Validation API.

Never heard of it? You're not alone – much of the fanfare around Drupal 8 is devoted to the shiny parts. But under the hood, rock solid developer APIs like the Entity Validation API are what will make Drupal 8 a pleasure to work with for client projects and contributed modules alike.

So what is this Entity Validation API and why should you care?

For Those Who Came in Late

In Drupal versions up to and including Drupal 7, any validation was done in the Form API. Consider the Comment entity, provided by the Comment module. There is a lot of validation relating to comments, such as:

  • If the comment is being updated, we confirm the timestamp is a valid date.
  • If the comment is being updated and the username is changed, we confirm the username is valid.
  • If the comment is anonymous, we validate that the name used for the comment doesn't match an existing username.
  • If the comment is anonymous, we confirm that the e-mail address entered is valid.
  • If the comment is anonymous, we confirm that the homepage is a valid URL.

In Drupal 7 and earlier all of this happens in the comment form validation logic, in comment_form_validate().

The issue here is that this validation is tied to a form submission. If you're saving a comment via some other method, then you have to duplicate all this logic to ensure you don’t end up with invalid comment entities. Common alternate methods include:

  1. using Rules;
  2. using a RESTful, Services, or RestWS endpoint;
  3. programmatically saving via custom code;
  4. using a custom comment form.

The same scenario is repeated for Nodes, Users, Taxonomy terms, and custom Blocks (which aren’t entities per se in Drupal 7, but the story is the same).

It’s Like an Onion, or Maybe a Layer Cake

But before we can talk about Drupal 8's Entity Validation API, we need to go over some background on the Entity API itself.

In Drupal 7, entities are \stdClass objects; accessing field values depends on the entity and the field. There is no real unification. For example, $node->title is a string, while $node->field_tags is an array.

And so, in Drupal 7 you might see things like this:
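For instance, something like the following sketch is typical (field_tags is an assumed term reference field; LANGUAGE_NONE is Drupal 7's language key):

```php
// Drupal 7: no unified access. The title is a plain string...
$title = $node->title;

// ...while a field is a nested array keyed by language, delta, and column.
$items = $node->field_tags[LANGUAGE_NONE];
$first_tid = $items[0]['tid'];
```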


In Drupal 8, the Entity Field API brings unified access to field properties and first-class objects for each entity type. So in Drupal 8, you see consistency like this:
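For instance (again a sketch, with the same assumed field_tags field):

```php
// Drupal 8: every field, including the title, is a FieldItemListInterface.
$title = $node->title->value;
$first_tid = $node->field_tags->target_id; // shorthand for the first item

// Iterating over the items of any field works the same way.
foreach ($node->field_tags as $item) {
  $tid = $item->target_id;
}
```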


When you work with fields and entities in Drupal 8, you’re likely to interact with a suite of interfaces that comprise the API.

Key Interfaces in Drupal 8 Entity Field API

Let’s start with ContentEntityInterface, the overarching interface that content entities in Drupal 8 implement. (Node, Comment, Taxonomy Term, and BlockContent, among others, implement this interface.)

Each field and each property on a content entity is an instance of FieldItemListInterface. Even fields like the node title are lists, which just contain a single value.

Each FieldItemListInterface consists of one or more FieldItemInterface objects.

At the lowest level of the API, each FieldItemInterface is composed of one or more DataDefinitionInterface objects that make up the properties, or columns, in each item value.

Perhaps a diagram might make this clearer. (See Diagram 1.)


Note that there are two broad types of fields: base fields and configurable fields. Base fields are largely fixed parts of the entity type, defined in code. Configurable fields are those added during site building using the Field API.

Determining if an Entity is Valid

In Drupal 8, to check if an entity is valid, simply call its validate method:

$violations = $node->validate();

That will give you an instance of EntityConstraintViolationListInterface, which helpfully implements \Countable and \ArrayAccess. If the resulting violation list is empty, the entity is valid.

There are some handy methods on EntityConstraintViolationListInterface that help you work with any violations, including these:

  • getEntityViolations() returns only the entity-level violations, i.e., those not specific to any field.
  • getByFields(array $field_names) filters the violations for a series of fields.
  • filterByFieldAccess() filters only those violations the current user has access to edit.

As the return value is traversable, you can loop over the results and work with each ConstraintViolationInterface item in turn:

  • getMessage() gets the reason for the violation.
  • getPropertyPath() gets the name of the field in error. For example, if the third tag in field_tags is in error, the property path might be field_tags.2.target_id. These align with the form structure if you're in the context of validating a form.
  • getInvalidValue() returns the value of the field that is in error.
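Putting those methods together, a validation routine might look something like this sketch (how you report the messages is up to you):

```php
$violations = $node->validate();
if (count($violations) > 0) {
  foreach ($violations as $violation) {
    // Collect which field failed and why.
    $errors[] = t('%field: @message', [
      '%field' => $violation->getPropertyPath(),
      '@message' => $violation->getMessage(),
    ]);
  }
}
else {
  // Only save entities that passed validation.
  $node->save();
}
```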

Interacting With the Entity Field Validation API

To enable an entity or field to be validated, Drupal needs to know which constraints apply to which field or entity type. This is done by using the API to add and modify constraints. Once you've gathered your requirements, you need to attach your validation constraints to your entity types and fields. How and where you do this depends on the type of constraint; whether you define the entity type in your code; the type of field; and the nature of the constraint. Before we address these, let's take a quick look at the anatomy of a constraint.

Constraints are Plugins

Continuing the learn once – apply everywhere paradigm that runs through much of Drupal 8, validation constraints are defined as plugins. Helpfully, core comes with a plethora of existing constraints such as:

  • NotNull;
  • Length (supporting minimum and maximum);
  • Count;
  • Range (for numeric values);
  • IsNull;
  • Email;
  • AllowedValues;
  • ComplexData (more on that later).

In many cases, you’ll be able to implement your validation logic by combining the existing constraint plugins in core. However, if you do need to create a custom constraint, all that is required is a new constraint plugin.

Adding Constraints to Base Fields

How you add constraints to base fields depends on whether or not your module defines the entity type in question.

If you're dealing with your own entity type, then you will have already implemented FieldableEntityInterface::baseFieldDefinitions(). In this method, you will already be defining your entity properties as an array using the BaseFieldDefinition::create() factory and its various builder methods. One such method is addConstraint. You call it passing the plugin ID of the required constraint as the first argument and any configuration as the second.

This example from the Aggregator module adds the FeedTitle constraint to the title property:

$fields['title'] = BaseFieldDefinition::create('string')
  ->setDescription(t('The name of the feed (or the name of the website providing the feed).'))
  ->setRequired(TRUE)
  ->setSetting('max_length', 255)
  ->setDisplayOptions('form', array(
    'type' => 'string_textfield',
    'weight' => -5,
  ))
  ->setDisplayConfigurable('form', TRUE)
  ->addConstraint('FeedTitle', []);

Here, the constraint plugin being added has an ID of FeedTitle. Note that this field definition also includes setRequired(TRUE) and setSetting('max_length', 255). Behind the scenes, those two calls are also calling addConstraint, once each for the NotNull and Length plugins.

Finding the Plugin ID

As with every other plugin in Drupal 8, each constraint plugin defines its ID in its annotation. So, in order to determine what to use in the first argument to addConstraint, it’s simply a matter of opening the required plugin class and inspecting the class-level docblock.

/**
 * Supports validating feed titles.
 *
 * @Constraint(
 *   id = "FeedTitle",
 *   label = @Translation("Feed title", context = "Validation")
 * )
 */

If your module doesn't define the entity type, but you wish to add a constraint to a base field, you need to implement either hook_entity_base_field_info() or hook_entity_base_field_info_alter(). Then, you can add or remove constraints from entity base fields, or even define new base fields – a little-known power feature of Drupal 8.

Adding Constraints to Configurable Fields

This functionality is also possible for configurable fields added via the Field API. For now, that can only be done in code, but if Drupal's history is anything to go by, a contributed module will soon debut that allows site builders to wire up validation in the UI.

To add a constraint to a configurable field, you need to implement hook_entity_bundle_field_info_alter(). At that point, you have an array of FieldConfigInterface objects, keyed by field name. For any of these, depending on both the entity type and bundle, you can call addConstraint or setConstraint to add or replace the field level constraints.
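As a minimal sketch, assuming an article node type with a hypothetical field_subtitle field:

```php
/**
 * Implements hook_entity_bundle_field_info_alter().
 */
function my_module_entity_bundle_field_info_alter(&$fields, \Drupal\Core\Entity\EntityTypeInterface $entity_type, $bundle) {
  if ($entity_type->id() == 'node' && $bundle == 'article' && isset($fields['field_subtitle'])) {
    // Cap the subtitle at 100 characters using the core Length constraint.
    $fields['field_subtitle']->addConstraint('Length', ['max' => 100]);
  }
}
```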

At the Data Type Level

Until now, we've dealt with constraints at the FieldItemInterface level, but what if you need to apply validation at the DataDefinitionInterface level? For example, the EntityReferenceItem field item (each item in the list) contains a number of properties, each of which is a DataDefinitionInterface; in this case, it has the target_id and entity properties. Similarly, TextLongItem, which is used for rich text fields, has two properties, value and format: one to hold the field value and another to track the input format.

If you want to add a constraint on a configurable field (FieldConfigInterface) directly to one of these properties, you can implement hook_entity_bundle_field_info_alter() and call setPropertyConstraints or addPropertyConstraints, nominating the constraints for the required property name.

If you need to do the same for a BaseFieldDefinition, you can add a new ComplexData constraint using one of the techniques detailed above. The ComplexData constraint takes a nested array of constraints keyed by each property name. For example:

$field->addConstraint('ComplexData', [
  'value' => [
    'Length' => [
      'max' => 150,
      'maxMessage' => t('Title may not be longer than 150 characters.'),
    ],
  ],
]);

Multiple Fields

In some cases, you may need a constraint that depends on the value of more than one field. For example, the Comment entity contains a name and an author field. Whether the entity, or either of these fields, is valid depends on who the comment author is and on the values of both fields. Attaching validation constraints at the entity level depends, again, on whether your module declares the entity type or you are adding the constraint to another module’s entity type.

For your own entity types, entity-level constraints are added using the entity type annotation, just like most of the other entity-level metadata. Nominate these in the constraints property of the annotation, which is an array of constraint configuration keyed by constraint plugin ID.
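For illustration, a hypothetical entity class might nominate an entity-level constraint in its annotation like so (the entity and constraint names are invented):

```php
/**
 * @ContentEntityType(
 *   id = "my_entity",
 *   label = @Translation("My entity"),
 *   constraints = {
 *     "MyCompositeConstraint" = {}
 *   }
 * )
 */
class MyEntity extends ContentEntityBase {
  // ...
}
```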

To add entity-level constraints for another module, implement hook_entity_type_build() to add a new constraint, or hook_entity_type_alter() to alter an existing one.

Creating Your Own Constraint Plugins

To create your own constraint plugin, as with every other plugin in Drupal 8, you need to place a class in the correct folder structure and add an annotation.

Your class needs to live in the Drupal\your_module\Plugin\Validation\Constraint namespace which corresponds to the src/Plugin/Validation/Constraint folder in your module.
Your class needs to include an @Constraint annotation, like that shown above for FeedTitle, and would normally extend from \Symfony\Component\Validator\Constraint. Your constraint plugin really only needs to provide a default value for the violation message, which is a public property called message. You can use the same placeholders as you would for string translations in place of dynamic text.

You then need a validator object. By default, each constraint is validated by a class in the same namespace as the constraint, with the word Validator appended. For example, CommentNameConstraint is validated by CommentNameConstraintValidator. If you don't want to use this pattern, or you’re reusing one validator for several constraints, you can override Constraint::validatedBy() in your constraint plugin to nominate the validator class.

The validator class needs to extend from ConstraintValidator and implement the validate() method, receiving the data to validate and the constraint itself as arguments. Depending on the level at which the constraint is attached, the data to validate may be a ContentEntityInterface, a FieldItemListInterface, or a primitive value.
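As a sketch of the two pieces side by side (the module name, plugin ID, and banned-word rule are invented; in a real module each class lives in its own file under src/Plugin/Validation/Constraint):

```php
namespace Drupal\my_module\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;

/**
 * Checks that a value does not contain a banned word.
 *
 * @Constraint(
 *   id = "NoBannedWord",
 *   label = @Translation("No banned word", context = "Validation")
 * )
 */
class NoBannedWordConstraint extends Constraint {
  public $word = 'blargh';
  public $message = 'The value may not contain the word %word.';
}

/**
 * Validates the NoBannedWord constraint.
 */
class NoBannedWordConstraintValidator extends ConstraintValidator {
  public function validate($value, Constraint $constraint) {
    if (stripos((string) $value, $constraint->word) !== FALSE) {
      $this->context->addViolation($constraint->message, ['%word' => $constraint->word]);
    }
  }
}
```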

If you're creating an entity-level constraint, your constraint plugin must extend from CompositeConstraintBase instead of Constraint, and implement the coversFields() method. (See CommentNameConstraint in core.)

Wrapping Up

Having a solid API means that when you approach client projects on Drupal 8, you can think about your domain model first. Normally, during the requirements-gathering process, business rules regarding validation of the client's domain are captured. The new validation API will allow you to think differently about how to implement this logic.

For the first time, you will be able to bake this validation into your entities, whether you use custom entity types or rely on the common building blocks provided by core (Node, Comments, Terms, etc). With validation that isn't tied to form submission, you can write unit tests for your model, which allows you to rapidly iterate and refactor, knowing you've not introduced regressions. Although it may not be one of the most visible features in Drupal 8, the new validation API will change the way we work with Drupal.

Bring it on!

Image: celebration of light 2007 by Jon Rawlinson is licensed under CC BY 2.0

Oct 12 2015

At DrupalCon Los Angeles, the question on everyone’s lips was: “When will Drupal 8 be released?”

Each time it was asked, it felt more and more like a recurring dream. Is this thing real, or did some Belgian guy start a rumor?


Views in core? Really?

Drupal configuration in code?

Ha! Now I know you're kidding!

Well guess what: Drupal 8 is not just some intangible vision that Drupalistas keep dreaming about. It exists and, ready or not, here it comes, barreling down the road. Very soon, my friend, the dream will become reality.

But bear in mind that the Drupal 8 dream is a lot like Disneyland:

  • Entry to the park ain’t free. (You're going to have to acquire some new skills.)
  • There will be long lines for your favorite rides. (Not all modules will be stable right away.)
  • At the end of the day, you’ll be pooped. (Your first Drupal 8 project will exhaust you.)
  • You might be too small for some attractions. (Neither you nor your clients will be quite ready-to-go on Day One of Drupal 8’s release.)

Putting skepticism aside, I am truly excited about the things coming in Drupal 8. The one new feature that I’m particularly excited about – and I think many others are, too – is Configuration Management. Drupal 8 will finally give us the ability to store site settings and configurations in code instead of mingled with content in the database. For me, and for your DevOps person, that’s the dream: Finally, no more scribbling notes on the back of an envelope for what we changed so we can remember to do it live in production.

If you’re as excited about configuration management as I am, you may stop reading now and just jump in and play with Drupal 8. Or maybe take a look at this article from a couple of Drupal Watchdog issues ago.

But if you're not ready to take the plunge, or know that you’ll be using Drupal 7 for a while longer, then please read on.

Let’s start with a preface: There is no single solution, no silver bullet that will solve all your configuration problems. But a lot of work has gone into making them less painful, via some contributed modules that are available today for Drupal 7: the Features module and the Configuration module. With a bit of time, patience, and perhaps some creative thinking, you'll find that these two modules will put you in a good place – and even prepare you for the Drupal 8 way of thinking as it relates to configuration in code.

Features Module

You’ve probably heard about the Features module already; it’s been around for a while. If you haven't used it since the 1.x days, it's time to download it and give it another try.

Features basically captures a set of configurations in a single package that can be exported and moved to another site. Once there, it can be enabled just like any other module. In other words, it's a module that makes modules. Features provides you with a module that you can give a version number to and commit to your favorite version control system. It's great!

Let's give it a try. For this example, I’ve created a blog on a development site. This blog consists of a content type with some fields and a view named Blog. From here, get and enable Features.

$ drush dl features && drush en features

We can export our work by telling Features which components of the site need to be captured. You only need to know the machine names; I use drush features-components (or drush fc for short) to get a list of the components and their machine names. For this blog, we know that we used the Article content type and that the view name is Blog, so our machine names are article and blog, respectively.

Our export command will look like this:

$ drush features-export mycoolblog node:article views_view:blog --version-set=7.x-1.0

This will create a feature module named mycoolblog. Even though a version isn't required, I set one in this example (7.x-1.0) because it's good practice. Drush will then save the module in the sites/all/modules folder for you. From there you can run your version control magic or move it manually to another site. Simply enable your feature like you would a regular module. I bet you’re starting to see the possibilities already.

What about changes?

They’re pretty easy: make changes to your development site just like you would before you started using Features. When you’re done, run the export command from above and be sure to change your version number accordingly. Alternatively, if you’re working on a site that has the feature module already enabled, use the features-update command, fu for short.

$ drush fu mycoolblog --version-increment

This command will export the changes you made from the database to your features code and bump your version number up by one. From there, you can jump into your normal deployment workflow.

Keep in mind that if someone else overrides your feature on the site you are deploying to, your feature won’t fully enable; it will hold back the overridden part(s). You can tell which features on a site have been overridden using drush features-list, which lists all the available features and flags the overridden ones. Once you know which features have conflicts, you can dive in and start resolving them. I'll leave conflict resolution to you, with this bit of help: you’ll find extra Drush commands for Features in the documentation section of the project.

If you do choose to use Features to manage your site's configuration, be sure to test everything first. It’s easy to miss a component and deploy a half-functional feature. You’ll also want to break down your features into common components; for example, keep all the components related to the blog in their own distinct feature. This will require planning, especially on larger sites, but it’s completely feasible. On some of the more complex sites, I tend to use the Features UI instead, and make sure I get the feature right before I export. But don't tell anyone I admitted to that.

Configuration Module

The Configuration module takes a different approach, one that is actually closer to what you'll get in Drupal 8. Features is built around the concept of a packaged module, whereas Configuration exports the config as PHP arrays to files. It doesn't try to make modules or anything like that; it just dumps out code.

When you start using this module, there are two new terms to learn: activestore and datastore. The activestore is another way of referring to your live configuration, or what's in the database. The datastore is the exported configuration stored on the server's filesystem. If you end up using the Configuration module, make sure you understand the difference between these two terms.

Out of the box, this module doesn't assume anything for you; it only creates a folder named “config” in your public files folder. To get started with the module, you need to explicitly tell it what you want to track. Don't worry, you can change that config folder location to anywhere the web server has write permissions. For simplicity, as we proceed I'm going to leave that config folder right where it is. Like Features, the Configuration module has Drush commands as well as an administrative UI.

Now I’ll get you up to speed with the Drush commands.

Using our previous example, let's export the Blog to code. You’ll need to figure out the machine names of each component you want to track. Start by listing them with the command config-get-components.
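Listing the components looks like this. The sample output below is abbreviated and hypothetical; the actual rows depend on which modules are enabled on your site.

```shell
$ drush config-get-components
 Component            Machine name
 Content types        content_type
 Views                views_view
 Variables            variable
 Permissions          permission
```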

What's nice about this output is that it shows both the human readable and the machine name. From this list, we can ask Drush to show us all the identifiers of a component. (An identifier is the same as the source from Features.) Using the component content_type we can get its identifiers with the following command.

$ drush config-get-identifiers content_type

For simple sites, assembling your list of components and identifiers should be pretty easy. In most cases you won't need to look anything up, particularly if you are the one who built the site. To export the Blog you would use a command similar to this:

$ drush config-start-tracking content_type.article views_view.blog

The above command, config-start-tracking (csta for short), will export the configuration for the view and content type, plus all the dependencies associated with them, such as permissions and image styles. Don't believe me? Try it! I'll wait…

Now for changes. Let's say you need to add a block display to the view. Just as you did with Features, make your changes to your development site like you always have. Then, you can export those changes to the datastore with the config-export command. Assuming you only changed the view, your command would look like this:

$ drush config-export views_view.blog

That should export all the items from the activestore (the stuff in the database) that differ from what it finds in the datastore (the stuff in code). Once your configuration has been exported to the datastore, you can move it to another site and activate it. You can do this via some kind of version control or the old-fashioned way, by moving it manually yourself.

Getting this tracked code to import into another site is pretty easy. First, turn on the configuration module, then move the new files into the config folder that the module generates. Once the files are in place, you can run this sync command.

$ drush config-sync

This will look in your default config folder and import anything that exists in the datastore but not in the activestore. It will turn on modules, create views, and set variables that weren't there before.

If you’re moving changes into a site that is already tracking configuration, the method is the same. Copy your new or changed configuration files into the config folder and run the sync command.
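Put together, a typical deployment pass looks something like the sketch below. It assumes git as the transport and the default config location in the public files folder; adjust the paths if you've moved your config folder.

```shell
# On the development site: export the changed config to the datastore...
$ drush config-export views_view.blog

# ...and ship the exported files with version control.
$ git add sites/default/files/config
$ git commit -m "Add block display to blog view"
$ git push

# On the target site: pull the new files, then activate them.
$ git pull
$ drush config-sync
```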

BONUS LEVEL: If you want to be more like Drupal 8, you can create a second folder called “staging” and place your new configuration files there. Run the same sync command but add a --source like this:

$ drush config-sync --source=path/to/staging

This command will read the configuration from the staging folder and import it into the default config folder and activate all your new config at the same time.

Should you choose to use this module, remember: If you remove something, say a field, you will need to manually stop tracking that field and then delete the include file from the datastore on both the development site and your target. Otherwise your deleted items will just keep getting imported, even though they were removed in development.
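Under those assumptions, removing a deleted field cleanly looks roughly like this. The identifier field.article.field_subtitle is made up for illustration, the file path assumes the default config folder, and the stop-tracking command name follows the module's start-tracking convention; check the module's Drush help for the exact names on your version.

```shell
# Stop tracking the deleted field on the development site...
$ drush config-stop-tracking field.article.field_subtitle

# ...then remove its exported file from the datastore.
$ rm sites/default/files/config/field.article.field_subtitle.inc

# Repeat both steps on the target site so the field isn't re-imported.
```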

For a few more commands that I didn't cover here, refer to the module documentation.


Both of these modules are powerful. They overlap in a lot of areas but each has its own strengths.

The great thing about Features is that it's easier to start using, even if you are not a power user. The admin interface is a little easier to work with than the Configuration module and there isn't confusing verbiage like datastore and activestore. Plus, I feel that moving code to another site in the form of a module is much easier to wrap your mind around.

As for the Configuration module, it's a little less friendly to the non-power user. However, with some careful trial runs and testing, anyone can get the hang of it. Ultimately, it's a little unfair to compare these two since it’s not apples to apples. You would likely want to use this module in combination with Features, tracking certain variables and settings in one place while creating feature modules elsewhere. For example, a simple blog makes a great feature and perhaps is unnecessary to track with the Configuration module, since all its settings are already stored in code as a module.

How you end up using these two modules will depend largely on your project. Plan ahead and always run tests, even if it's a manual test where you're the one clicking and loading pages. Make sure you obtain the exact result expected before you give either of these a go on any production site.

Once you find the sweet spot, I'm pretty sure you will be living the dream, the Dream of Drupal 8.

On that note, I leave you with one last thought. It doesn't come in my words but rather John Lennon’s:

“You may say I'm a dreamer, but I'm not the only one. I hope someday you'll join us. And the world will be as one.”

Image: "Far Away Thoughts" by John William Godward is Public Domain

Oct 09 2015



In the 6 years that the Penn State University Libraries has been using its current enterprise Content Management System, the number of author/publishers has grown to over 200, the number of pages has increased from 5,000 to over 10,000 and the number of media files now sits at several thousand. When the libraries made the decision to switch to a Drupal platform, the time was right for a shift: to fewer editors, to a surefire way to keep content fresh, and to a dead simple workflow that won’t get in the way.


The primary goal in redefining the workflow is to avoid content “ROT” (Redundant, Outdated and Trivial) on the site. Hundreds of authors, contributing content with no oversight, has naturally led to a bloated site with lots of unnecessary, duplicate, and obsolete content. We conducted a full content inventory prior to our move to Drupal, and after having analyzed 10,000 pages of content, page by page, we clearly understood the importance of managing and maintaining content control. It is critical that every piece of content on the site has an owner, is easily reusable across the site, meets all copyright guidelines, and is fully accessible to users with any disability or learning difference. Many areas of the site have content that changes regularly; therefore, minor edits to the site need to be taken care of without delay. And, if there’s a problem identified with the site, it needs to be easily reportable and quickly addressed.


So what's the key to making all of this happen? A simple, streamlined workflow that doesn’t get in the way or allow changes to get stuck in limbo. Our new workflow involves a small group of frequent users who are empowered to make regular content additions and changes, and a content board led by a content strategist that will review these changes after-the-fact, in batches.

Sure, Drupal has a slew of options (see, e.g., Workbench, Workflow, and Maestro modules) for managing editorial workflows. And these workflow solutions might work great in other environments - environments that are smaller or environments that are geared for publishing (like a newspaper or web magazine). But for a large (500+ FTE staff) organization with a number of autonomous authors, establishing a traditional publishing workflow could be a serious challenge. The addition of editorial layers is doomed to fail here because changes won’t ever be reviewed quickly enough (or at all), and “draft only” editors will end up with full publishing access, just to keep things moving, defeating the purpose of the workflow entirely. So, while content is incredibly important and must be vetted, having an after-the-fact review by a small and highly trained group of regular editors allows for speedy content updates and less ROT. Once we get in the swing of things, we anticipate that problems discovered during editorial review will become less and less frequent.

Convincing librarians that our libraries' web content is a valuable library resource that should be curated, reviewed, and weeded like any other important library collection was an easy sell. This understanding and acknowledgement encouraged buy-in from administration, enabling an overhaul of the old author model and workflow and paving the way for acceptance of the proposed editorial review and oversight.



The proposed workflow begins with a small cadre of editors who have permissions to create and change libraries web site content. These editors will be regularly trained in writing for the web, accessibility, and best practices. Each editor will create and edit content for their own department or library and will act as backup for other units when needed. Each node on the site will have a specific single editor listed as owner, and it will be social convention and that after-the-fact review that will keep our editors working within their own realms. Ultimately, anything that belongs to an owner is that owner’s responsibility to keep up-to-date. And while permissions-wise we won’t be adding restrictions, we will be providing editors with a view of the content they own. And finally, most importantly, although web editing and content creation may only be a part of an editor’s day-to-day responsibilities, these duties are written into their job description, recognizing the effort and importance of this valuable work.

Content Board

The content board is made up of a small group of highly-engaged editors who regularly review changes made to the Libraries’ web site. They ensure that content remains valuable and relevant by working closely with editors on all web content, making changes or corrections where necessary. Membership on the board is voluntary, with the exception of the content strategist, a user experience librarian, and an IT representative. An editor on the board cannot review her own content, and a module is under development to aid in the tracking and review of the content changes. The module will supply a simple table (provided by Views) of recently updated content that has not yet undergone review. The table has a “Review” link that will open the node referenced for that row; when the reviewer looks at the node in edit mode, he will see an extra field based on his role as Reviewer that allows him to “mark as reviewed.” When that is saved, the node disappears from the aforementioned View.

Content Strategist

The content strategist heads up the content board and is ultimately responsible for ensuring compelling and sustainable content through the content life cycle. Working with the content board and web site stakeholders, the strategist defines the structures of the libraries’ web site and associated sites within Drupal to meet best practices and ensure clear and concise engagement with and communication to users. The strategist will work with the content board and editors to develop content that reflects the libraries’ goals and user needs – content that is up-to-date, factual, consistent, and accessible to all. The content strategist, along with the content board, will review and edit content for accuracy, grammar, clarity, style and messaging and assist editors in creating attractive and user-friendly web pages.

The content strategist will meet regularly with the cross-departmental stakeholders to report on content strategies, progress and any other issues. Additionally, significant changes, including page creation, changes to the information architecture, or changes in navigation, will be approved or initiated by the content board, content strategist and stakeholders. Content issues will be brought to the attention of the content board and the content strategist for review and resolution.


The quality of the libraries’ web site content is critical to serving our users and connecting them with the resources they need. As is creating and maintaining this valuable content without getting bogged down by approval paths and confusing permissions. We believe our simple, effective workflow, involving a dedicated group of editors, an engaged content board and content strategist, and the understanding of the need for after-the-fact reviews, will allow us to meet our web site goals without getting in the way. Wish us luck!

Image: "Pattee-Paterno Library-Tulips_5" by Penn State is licensed under CC BY-NC 2.0

Oct 07 2015

Writing tests for a project is a terrific way to learn about its underlying code. In the case of Drupal 8, nearly all the code is a complete overhaul from Drupal 7. For folks familiar with the inner workings of Drupal 7, writing PHPUnit tests for a given part of the codebase is a perfect way to understand what has changed.

As of June 2015, there are 566 active issues in the Drupal 8.0.x queue tagged with Needs tests. There’s lots of room for contribution for folks who can write a test. Since this column is dedicated to testing, let's look at the benefits of pushing Drupal 8 forward through test writing.

There are three different types of tests in Drupal 8:

  • Unit tests (PHPUnit) are used to test that the individual classes function as expected in relative isolation.
  • Integration tests (KernelTestBase) expand from this concept, but still only pull in code needed to test.
  • Functional tests (WebTestBase and BrowserTestBase) bring everything together, perform a full site install, and then assert the expected behavior.

Note that since the functional tests perform a full site install, they are quite slow, and should only be used when integration or unit tests are insufficient.

To get started, find an issue in the “Needs work” or “Needs review” state that already has a patch and is tagged “Needs tests.” If the issue already has tests, update the issue to remove that tag and move along, or review the issue, give feedback, and then move along.

When a patch that only contains the fix is found, the next step is to backward-engineer the bug. The test must fail without the patch – meaning it shows that this is a bug – and then, with the patch, the test must pass. This pattern makes it easy for overworked core committers to see that the problem has been demonstrated and fixed.

For folks writing their first test, the ideal issue will have clearly defined “steps to reproduce” in the issue summary. The test then is essentially an automated way of reproducing those.

Let's use Issue #2482857: Cannot delete a book parent as an example.
To reproduce:

  1. Enable the Book module.
  2. Add a top-level book node.
  3. Add one or more child book nodes.
  4. Attempt to delete the parent node, note the exception.
  5. Try again and it succeeds.
  6. Visit one of the child nodes and note the fatal error.

The bug can be verified by manually performing those steps, but our test will ensure the fix doesn't regress later. Since this is testing the book module, tests will be found in core/modules/book/src/Tests or in core/modules/book/tests for unit tests. This test will be functional since it cannot be easily reproduced by a unit or integration test.

BookTest looks like a good starting point; it already enables the book module.

  /**
   * Modules to install.
   *
   * @var array
   */
  public static $modules = array('book', 'block', 'node_access_test');

Even better, it already has a helper function to build out a book with child nodes, BookTest::createBook(). The test also already includes one to test book deletion: BookTest::testBookDelete(). This means steps 1-3 above are essentially completed, so we just need to do 4-6:

    // Tests directly deleting a book parent.
    $nodes = $this->createBook();
    $this->drupalGet('node/' . $this->book->id() . '/delete');
    $this->assertRaw(t('%title is part of a book outline, and has associated child pages. If you proceed with deletion, the child pages will be relocated automatically.', ['%title' => $this->book->label()]));
    // Delete the parent, then visit a child page; without the fix this fatals.
    $this->drupalPostForm('node/' . $this->book->id() . '/delete', [], t('Delete'));
    $this->drupalGet('node/' . $nodes[0]->id());
    // The child page should still load normally after the parent is gone.
    $this->assertResponse(200);

The drupalGet and drupalPostForm methods are responsible for actually sending GET and POST requests to the test installation of Drupal, so we're really just clicking around the site here to achieve the steps to reproduce from above.
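To watch the test fail before the fix and pass after it, you can run just the book tests from the command line with core's test runner. This sketch assumes a local site reachable at http://localhost; adjust the URL for your environment.

```shell
# Run every test in the Book module against a local install.
$ php core/scripts/run-tests.sh --url http://localhost --module book

# Or target the one test class while iterating.
$ php core/scripts/run-tests.sh --url http://localhost --class 'Drupal\book\Tests\BookTest'
```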

Once you have a failing test that illustrates the issue, this is posted as a test-only patch, meaning it is expected to fail. It is then combined with the fix for a patch that will pass. If you file an issue – or come across one without a patch – making the test patch to show the failure will greatly help moving the issue along since the person that eventually fixes it won't have to write that test, and it eliminates the need for manual testing as the patch is being written.

Whether you're just looking to learn more about the underlying code of Drupal 8, or want to really help push it out the door, writing tests is a great way to do both!

In future columns, I plan to dive deeper into the three test types mentioned above, as there is much to discover within.

Image: "Equilibrium 8_8_8" by Felipe Gabaldón is licensed under CC BY 2.0


Oct 05 2015

Drupal has long been described as a content management system for developers. It’s been criticized for its Drupal-centric jargon and characterized as unfriendly to inexperienced and experienced web site creators alike. In the DrupalCon Barcelona 2007 State of Drupal address, project creator Dries Buytaert stressed the need to focus on Drupal’s usability.

Not long afterward, the first formal usability study took place at the University of Minnesota, just after the release of Drupal 6 in February, 2008. Several studies of Drupal 7 were conducted in subsequent years. In June, 2015, community members returned to the university for Drupal 8’s first formal evaluation.

These formal usability tests are just one metric about Drupal’s user experience. Anyone who has introduced a new site builder to Drupal, or tried to help a Dreamweaver-savvy friend get started, has a pretty good idea where existing major challenges lie. Drupal.org has methodology suggestions to empower anyone to conduct their own studies, which can take place any time. New features in Drupal 8 are evaluated as they’re introduced, as well. For example, the Drupal User Experience team has conducted more than 70 informal sessions on Drupal 8-specific changes. The formal studies, however, lend a certain gravitas to recommendations for improvements; as we return to Barcelona for DrupalCon 2015, the history from formal evaluations provides a valuable metric to reflect on how far the project has come.

When I was invited to attend Drupal 8’s study, I was eager and hesitant. Eager, because who doesn't want to geek out on eye tracking feedback and all the experience-capturing equipment while spending focused time with key players who are working toward sorely needed improvements? Hesitant, because four years into the development of Drupal 8 seemed like a difficult time in the cycle to introduce meaningful change.

The 2015 Study

The usability team observed study participants through one-way glass as they approached a series of scenario-based tasks. In addition, audio of the participants’ think-aloud monologue played in the observation room along with dual-screen output. One screen showed camera footage of the participant working through the tasks. The other screen displayed the participant’s computer screen, overlaid with red lines from eye-tracking software.

Each session followed a single person for seventy-five minutes. In all, seven experienced website creators participated. Some had prior Drupal experience, others were experienced with different tools. All were new to Drupal 8.

Usability scenarios deliberately avoid product-specific language (“content type,” for example); the ones for this study were intended to prompt testers to:

  • Create a basic page;
  • Add it to the main menu;
  • Create a content type with fields, likely referencing users and taxonomy terms;
  • Author and edit content on a mobile device;
  • Place the preconfigured “Who’s New” block on the home page.

There were, of course, other valid ways to accomplish the scenarios, as the participants quickly demonstrated.

What Was it Like?

The sessions corroborated the perspective I've gained over the years as a teacher and trainer: Drupal's user interface supplies inadequate support for site builders and in places feels like it’s actively working against them. People can learn to use it, but for many, initial self-discovery is tough and even with one-on-one guidance and all of the internet at their disposal, it can still be incredibly exasperating.

I’m used to working with frustrated Drupal users, but I never anticipated how difficult it would be to watch users try to do something and not be able to help them. By the end, I felt like I was in a milder software variation of a Milgram experiment, where the real test was how much frustration we, the observers, could tolerate before we were willing to end the exercise.

It’s hard to take comfort in the realization that the intensity of their annoyance was mainly because, being participants in a study, they persevered beyond the point where they would have chosen a different approach: an external form creation tool, another CMS, or the long-hand creation of HTML pages and navigation.

What’s Different About a Formal Study

Thorough and nuanced descriptions of what participants encountered are available online. Odds are, if you’ve built a Drupal 6, 7, or 8 site with the admin interface, you’ll see very familiar concerns. You might wonder, as I used to, if it really requires a formal usability study to identify most issues. It does not. By being in the room, though, I learned that uncovering issues is just one part of the value.

There were two major benefits to the formal study I’d not considered:

First, observing people as they encountered Drupal 8 for the first time with other people from the Drupal project – people empowered to lead change – mattered. I learned how differently we can interpret the same observations. Sometimes, those interpretations were just different, but often, one or the other of us made assumptions that weren’t warranted or missed something important that happened. In those cases, someone else would question the assumption, point out a detail, or offer a counter interpretation, and adjustment was easy. What could take years in an issue queue and still not be clarified often amounted to a ten second blip in the conversation.

Second, having skilled people (with distance from the project, and using an established methodology) in charge (recruiting participants, scheduling the sessions, facilitating attendance – both physical and remote – providing and tending to equipment, and dealing with the unexpected), allowed the Drupal community observers to stay focused.

In the observation room, having a highly skilled, neutral facilitator to keep conversation on track, and cut off premature discussions of solutions or long digressions about past history, was likewise incredibly helpful. Our facilitator was also masterful at interacting with the testers to reduce bias, and at eliciting their feedback and expectations.

You can watch the sessions online. If you do, I recommend you make it a group activity, with other Drupal people in the same room. It changes everything.

So How Did Drupal 8 Test?

Mobile Was a Highlight

Participants tested mobile functionality on either an iOS tablet or an iPod at the lab. Minor issues cropped up, but overall they created and edited content on the mobile devices with only occasional inconvenience.

Creating a Basic Page for the First Time? Less So.

Over half the participants struggled to find where to add a page – even when they followed the main Content link – because they were looking for the word “page” somewhere in the interface. But page was also more than a label. The idea of a web page as the entire thing you're viewing in your browser, or even as something whose underlying markup and code you can edit directly (for both content and navigation) – permeated users' struggles with Drupal 8.

Other Trouble Spots

The Field UI is one particularly troublesome area that illustrates tension between the mental models of a developer and a site builder. Even as an experienced user, I find it unfriendly, and it certainly was for participants, who were searching for “Dropdown” and had generally unhappy remarks about a list that starts with “Boolean.”

Overall, participants focused on their users and how those users would interact with the site, not on the database and how data would be stored. Compare the widget-first interface from Typeform with the Drupal interface in Figure 1.

Figure 1: Comparing Typeform and Drupal.

Adding fields was tough in Drupal 7, too, but at least the widget select list was nearby to give faster confirmation, if you were on the right track. We can talk about whether it would be better to restore that dependent Widget field, put the most-used choices at the top, remove the data-type sub-headings, better label Text (formatted) and Text (formatted, long), but that won’t adequately address the difference between an outside/in and an inside/out perspective.

If the interface isn’t for site builders who don’t write code, then I’m not sure who it is for. Site builders who do write code? They seem to avoid the interface at every turn, favoring tools like Features, Drush, Git, Drush Make files, installation profiles, and next, CMI. Without agreement on who Drupal’s admin interface serves and respect for those users, it seems hard to bridge the usability gaps.

What Impact Will This Study Have?

As with many things in Drupal core development, it's hard to say what will change. We can look to the impact of the Drupal 7 research for an idea. Conducted in 2011, the Drupal 7 usability study prompted the creation of 80 issues. Four years later, over half of them remain unresolved.[1]

Many changes are frozen in the beta phase. UX changes, however, are prioritized during this period and, according to product manager Angela Byron, nearly all of the 145 issues from this year's study could be addressed in some fashion prior to Drupal 8’s release.

What Gets in the Way?

A number of things can hamper change:

  • Some people blame site builders for a lack of education, skill, or intelligence, and see no need to change.
  • Others disregard difficult user experiences as anecdotal or dismiss formal participants as outliers.
  • Some believe that Unix command line skills, debugging with PHPStorm, and confidence with Git are requisite site building skills.
  • Varying perspectives can make it difficult to agree on whether an issue is actually a problem.
  • An issue may lack a champion to lead it through the community discussion in order to get agreement on the nature of the problem.
  • Even with agreement about the problem, people cannot always agree on the appropriate course of action.
  • When both problem and solution have enough support, there still may be no one willing to do the work or to shepherd the code from its initial patch through the requirements that make it committable.
  • A subsystem maintainer may still decline to maintain the code and reject it.

What’s Next, Then?

It’s traditional at this point to invoke the do-ocracy and tell folks to roll up their sleeves, get in there, and give back!

It’s hard, though, to confront fundamental usability issues that were identified seven years and a major release ago. Recent core governance clarifications may provide a more effective path for addressing usability, but without a cultural commitment to site builders, it seems that the disconnect between developers and site builders will likely remain unresolved.

Issues may be closed as time goes on. Closed issues also include duplicate, won't fix, works as designed, and other states that could indicate usability issues weren't sufficiently addressed.

The Elephant

While usability has improved in non-trivial ways since Drupal 6, I don’t see much progress on the foundational problems.

Here are some conceptual barriers in Drupal 6, from the first formal study in 2008:

  • Where do I start? Missing are step-by-step, task-based, conceptual help, tutorials, and example content.
  • Where did my page go? Users often lose all sense of context.
  • What is “content”? The word “content” is used ambiguously throughout the user interface: content type, content management, create content, etc.
  • How do I add a form to my page? Drupal doesn't communicate its mental models well. Users thought content types were fields, content types were content, etc.
  • Where's that key word? Words like "form" and "field" are hardly used in the interface, so users resort to guesswork.
  • What do my users see? There’s no clear distinction between admin and user-level views, and no way to preview things like node add forms as you're creating them.

The University of Minnesota, Usability, and Drupal

The University of Minnesota selected Drupal 7 in November, 2012 as a replacement for their end-of-life Oracle-based content management system and began rolling out sites in March, 2014. According to Steve Nguyen, Service Director for Collaboration & Web Content Services in the Office of Information Technology, accessibility and usability were two of many factors in the university's choice of Drupal. The project's commitment to accessibility in the form of WCAG 2.0 AA compliance means programs using Drupal are assured they are adhering to federal guidelines. In addition, they have been developing a tool to tailor site and content creation to the university’s needs and reduce the steep site builder’s learning curve.

Hosting the Drupal 8 Usability session, Nguyen says, is one of their ways of giving back to the Drupal community. The university’s centrally-funded usability lab plays a key role in evaluating their customizations and site designs. Outside organizations using the lab are rare, said User Experience Analyst Nick Rosencrans, because of the high internal demand for its services.

The Drupal 8 usability team's desire to live stream the sessions required two significant changes to their usual process. First, Rosencrans said, they worked closely with the Office of the General Counsel to develop a talent release form that allowed participants to consent to the remote presentation of the lab’s composite video of them, rather than just the video of their screens.

Live streaming required technical adaptations as well. Joe Finnegan, Technology Support Staff, said the most significant challenge was integrating with the existing lab technology. The key was to treat the audio and visual input from the existing system as USB devices so they were selectable as “presenters” in the conferencing software in order to display video full-screen to remote observers. He says that with the success of this session, they plan to begin design of a more permanent approach.

The Drupal project may continue to see long-lasting benefits from these changes. Bringing people together physically is logistically and financially challenging. Remote access to sessions makes it more feasible to identify issues in user experience before major version releases.

Image: "House of Blues" by Stephen Dwyer is Public Domain

Sep 16 2015

Angie Byron (a/k/a “webchick,” Director of Community Development, Acquia) and Addison Berry (Lullabot Director of Education) recall Drupal-blue beverages they’ve guzzled, and argue heatedly over their favorite emoji to use in a GitHub pull request. (You’ll never guess!)
Oh, and they also talk about their book, Using Drupal, which is geared towards familiarizing the reader with all the tools one can find in Drupal core and modules without requiring custom development.
For more Angie, read my interview with her in Drupal Watchdog 5.01 – and subscribe! https://drupalwatchdog.com/subscribe/2015

Sep 11 2015


This article was written while Drupal 8 was still under very active development; if something doesn’t work… maybe you can fix it.

With the release of Drupal 8, themers will be showered with pleasant surprises.

For starters, the theming engine has been changed from PHPTemplate to Twig. And while we were fixing that little issue, we also modernized the CSS architecture, removed all theme functions, shuttled control of markup and CSS into the theme, and added image styles, breakpoints, and responsive images.

Were you ever confused about where certain markup came from? Guess what: it’s now visible by viewing the page source.

Welcome to the epic world of Drupal 8 Theming: powered by a fearless horde of Norsemen and Shield Maidens! riding on mighty Unicorns! in the full moon! under the banner of Drupal 8! onwards to victory!
Or just a really delightful theming experience.

First Blood

Let’s start with a Drupal 8 theme. Create a new folder in [root]/themes/[themename]. This makes the theme available to all sites in your installation. For a site-specific theme, you can place it in sites/[sitename]/themes, as in Drupal 7.
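As a minimal sketch of that first step, using the "yggdrasil" theme name from the example below (run from the Drupal root; the css and js subfolders are just a common convention, not a requirement):

```shell
# Scaffold a theme named "yggdrasil" with folders for CSS and JS assets
mkdir -p themes/yggdrasil/css themes/yggdrasil/js
ls themes/yggdrasil
```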

To register the theme, Drupal needs a .info file. That file must be placed inside the theme folder, themes/[themename]/[themename].info.yml, and have the following contents:

name: yggdrasil
type: theme
description: A mighty theme
package: themes
core: 8.x
screenshot: yggdrasil.gif

base theme: classy

libraries:
  - yggdrasil/global

stylesheets-remove: stylesheet-i-dont-need.css
stylesheets-overwrite: stylesheet-i-overwrite.css

regions:
  header: Header
  content: Main Content
  footer: Footer

Let’s break down each part of the .info file.

Theme Definition

The first part of the info file defines the theme’s basic information: its name, core compatibility, and a screenshot.

Base Theme

This is one of the “small” things that has changed with Drupal 8. It defines how Drupal gets its markup and classes. In Drupal 7, core came with the templates: if you wanted to change those, you would overwrite a template file or a theme function. Sometimes, the markup even came pre-rendered.

Drupal 8 has been built from the ground up with the bare minimum of CSS and markup. Only what’s needed for core functionality is added to the core template files; everything else is added to the new Classy theme. (If you don’t add a base theme, your site will still work with core functionality, but with a bare minimum of markup.)

The Classiest Theme Ever Created

The role of the Classy theme is to provide default markup and CSS for Drupal. Classy provides a rock-solid markup and CSS combination built on modern concepts; BEM and SMACSS devotees will feel right at home.

That gives themers and module developers a common ground to build on, while not locking down those who want to develop new frontend frameworks (which can now be done without having to battle with Drupal’s default markup).

So fans of insert-your-favorite-css-framework can create a theme with little or no headache.


Libraries

A new concept for a theme is “libraries.” Without getting too technical, this is where you add CSS and JS files.
Libraries can be activated for specific pages, and you can enable or disable Javascript files as needed. Don’t get confused by the name “library”: your CSS is still just a CSS file; it didn’t go all Dewey-Decimal wimp, with horn-rimmed glasses and a hankering for claret and beat poetry.

In the theme’s .info file, we define the libraries we’re going to use. In this case we’re setting a “global” library: yggdrasil/global.

Now our theme needs a library file: [themename].libraries.yml. We’ll create one with the following contents:

global:
  version: 1.x
  css:
    theme:
      css/style.css: {}
      css/print.css: { media: print }
  js:
    js/style.js: {}
  dependencies:
    - core/jquery

Let’s take a closer look at what’s being defined here.


CSS

The “global” library will add the CSS files css/style.css and css/print.css. We can optionally add media queries like { media: print }, so if you fancy writing media queries at the file level, this is where you do it.


JS

Javascript gets added the same way as CSS. And as a bonus, there are dependencies: if your theme requires jQuery, you have to define it. If jQuery isn’t defined, it won’t be in your theme (until a module needs it). Yes, I know you will miss the 32K of love that was added to every page. Sorry.

For more information on including CSS and JS in a theme, see http://wdog.it/5/1/add
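As a small aside (a sketch, not part of the example theme above): a library can also be attached from within a specific Twig template, which keeps page-specific CSS and JS off every other page:

```twig
{# Attach the theme's global library only where this template is used #}
{{ attach_library('yggdrasil/global') }}
```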

Removing and Overwriting CSS Files

If a module provides a CSS file that you don’t have any need for, it can easily be removed by using the stylesheets-remove configuration setting. (Some of my loyal fans might remember this as the FOAD method, as described in my “Angry Themer” column from Drupal Watchdog Vol. 3, Issue 1: http://wdog.it/5/1/foad).

For example, you can remove a CSS file by adding the following to your [themename].info.yml file:

stylesheets-remove: stylesheet-i-dont-need.css

Stylesheets provided by modules in Drupal 8 follow the “Module-Admin-Theme” (“MAT”) naming pattern, which makes it much easier to figure out what is contained in the various CSS files:

[modulename].module.css - CSS essential to the module’s functionality
[modulename].admin.css - CSS for the admin interface
[modulename].theme.css - the colors, padding, etc.

That makes it child’s play for the themer to remove all the colors and padding in *.theme.css from a module without the risk of breaking functionality, which is now separated out in the *.module.css file.

To keep a module’s CSS file but overwrite it with the theme’s local CSS file, you can use stylesheets-overwrite:

stylesheets-overwrite: node.theme.css

Debugging is Fun

Now we’re ready to enable the theme. Load up your Drupal 8 site, find a template and let’s make some magic!

Trying to figure out where the markup comes from has always been a thorny issue: was it a function, a template, or an act of the Drupal gods? Sure, the Devel module solved some of these problems, but it was still tedious work.

In Drupal 8, we have theme debugging built into core. To enable debugging, go to the services file (sites/default/services.yml). There you’ll find the Twig settings, which make it a snap to enable debug mode:

parameters:
  twig.config:
    debug: true
    auto_reload: true
    cache: false

After setting debug: true, clear the cache and do a view-source on your site; you should see comments, as shown in the following screenshot:

View source of a Drupal 8 generated page.

Debugging comments include:

  • theme hook: which theme hooks are in use.
  • file name suggestions: the little x next to a suggestion tells you which template is in use, and the list gives filenames that can overwrite this template.
  • path: where on the filesystem the template is located. For example: core/themes/classy/templates/node/field--node--title.html.twig
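Put together, the debug output in the page source looks roughly like this (a sketch; the hook, suggestions, and path will vary per page):

```html
<!-- THEME DEBUG -->
<!-- THEME HOOK: 'node' -->
<!-- FILE NAME SUGGESTIONS:
   * node--1--full.html.twig
   * node--1.html.twig
   x node.html.twig
-->
<!-- BEGIN OUTPUT from 'core/themes/classy/templates/node/node.html.twig' -->
```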

If you want to take debugging to the next level, download the Devel module and use kint on the variables in the template files to get more info: {{ kint(variablename) }}


Classes

CSS has changed a lot over the last couple of years. One of the hard truths in a CMS is that it’s almost impossible to keep up with what happens in the frontend. (Responsive, anyone?)

We changed the Drupal 8 CSS documentation to build on SMACSS and BEM principles, but we can’t pretend that those principles will live on forever. So, instead of Drupal assuming what a theme(r) needs from CSS classes and hiding that information deep inside a function, control is now flipped around to the theme.

In Drupal 8, it’s the template that defines what class names are used. A module can still provide suggestions, but the classes are controlled from the template file.

Let’s take a look at the node template, node.html.twig:

{%
  set classes = [
    'node',
    'node--type-' ~ node.bundle|clean_class,
    node.isPromoted() ? 'node--promoted',
    node.isSticky() ? 'node--sticky',
    not node.isPublished() ? 'node--unpublished',
    view_mode ? 'node--view-mode-' ~ view_mode|clean_class,
  ]
%}


At the top of the file is a Twig variable called classes, which is then added to the attributes of the article tag with addClass().

When the template is rendered, the classes are added to the attributes and printed out:

<article{{ attributes.addClass(classes) }}>

We can manipulate the default classes and only print out what’s needed with the information that Drupal gives us.

  • string: 'node' (sets ‘node’ as a class)
  • tilde: 'node--type-' ~ node.bundle|clean_class The tilde (~) concatenates to the string -- in this case, the node.bundle (a.k.a. content type) -- and the |clean_class filter makes sure it’s correctly formatted.
  • if: node.isSticky() ? 'node--sticky', If the node has isSticky set to true, we add node--sticky to classes.
  • not: not node.isPublished() ? 'node--unpublished', Now we ask the node if it’s published; if it’s NOT, we add the node--unpublished class.
We can now do all sorts of cool stuff. For example, if a module adds a class red, you can remove it from your classes by using removeClass('red'), like this:

<article{{ attributes.addClass(classes).removeClass('red') }}>

Remember, if you’re adding classes that are used for Javascript functionality, prefix them with js-. That way, it’s easy to understand what a class is used for. If a class is named js-something, then we know it’s not used for colors but for some magical Javascript stuff. (You should really use data-attributes, but I’ll save that for another article.)

Images — Now Also Responsive

With the help of configuration management, we can now pack the theme with predefined image styles: first, configure the image styles in normal fashion; then, export the configurations to the theme (found at Configuration » Configuration Management » Export).
Image styles in Drupal 8.

To export your image styles, create a file with its correct style name (image.style.[stylename].yml) and paste in the content from the export form.

The file should then be saved in your theme’s configuration directory, [themename]/config/install/.
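As a hedged sketch of what such an export looks like (paste the actual content from your own export form; the style name "epic", the effect UUID, and the dimensions here are placeholders):

```yaml
# image.style.epic.yml -- sketch only; use your real export output
langcode: en
status: true
dependencies: {}
name: epic
label: Epic
effects:
  a1b2c3d4-0000-0000-0000-000000000000:
    id: image_scale
    weight: 1
    data:
      width: 1200
      height: null
      upscale: false
```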

Configuration Management file export.

Breakpoint and Responsive Images

Okay, so now the theme has images, but let’s make them responsive. First, we need to define the breakpoints for our theme, and that goes in the file [themename].breakpoints.yml:

yggdrasil.tiny:
  label: Very tiny style
  mediaQuery: '(min-width: 0em)'
  weight: 0
  multipliers:
    - 1x

yggdrasil.epic:
  label: Epic sized
  mediaQuery: 'screen and (min-width: 120em)'
  weight: 2
  multipliers:
    - 1x

For more information on breakpoints, see http://wdog.it/5/1/bp

Now we can map the breakpoints to image styles by creating a new responsive image style at Admin » Config » Responsive Image.

Responsive image styles
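Like image styles, a responsive image style can be exported as configuration that maps breakpoints to image styles. A rough sketch, assuming breakpoints named yggdrasil.tiny and yggdrasil.epic and image styles named thumbnail and epic (adapt the keys from your own export):

```yaml
# responsive_image.styles.epic_responsive.yml -- hedged sketch
id: epic_responsive
label: 'Epic responsive'
breakpoint_group: yggdrasil
image_style_mappings:
  - breakpoint_id: yggdrasil.tiny
    multiplier: 1x
    image_mapping_type: image_style
    image_mapping: thumbnail
  - breakpoint_id: yggdrasil.epic
    multiplier: 1x
    image_mapping_type: image_style
    image_mapping: epic
```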

Awesome Level 9000

If all this wasn’t enough to make you smile, then remember those questions that once induced throes of terror in the spine of every themer: “Can we change the markup of the main menu, but not the other menus?” Or, “Can you fix the pager’s markup?”

Now you know how to find them and, yes, there’s just one template file for each.

And so, this fearless Viking mounts his mighty Unicorn, in the full moon, as his Shield Maiden smashes her Battle Ax against some hapless template files.

See you in the sourcecode.

... a much-less-angry themer

Twig Crash Course in Four Lines!

{{ prints a variable }}
{% set variable_that_we_can_use = 'value' %}
{# comments #}
{% if something %}

For more, see my past article, “Gettin' Twiggy With It,” where I covered the basics of Twig.

Image: "Unicorn" by Emraya is licensed under CC BY 2.0

Sep 09 2015

Dani Nordin is at our “Meet the Authors” booth (courtesy of Drupal Watchdog, so subscribe now: https://drupalwatchdog.com/subscribe/2015) to autograph her new book, Drupal for Designers, and to promote Design for Drupal, a summer get-away in Cambridge, MA.
Geeks, send your kids to camp: hiking (virtual), campfires (Yule log), and coding.
Lovely views.

Sep 08 2015

Few developers in the startup community would recommend making a product with Drupal. If you are building a web or e-commerce site, sure; but if you want to build a SaaS product, there are plenty of technologies that are easier to productise. Installation profiles and features were added to Drupal 7 as an afterthought, not as a central design principle, and, as a result, have plenty of shortcomings. It looks like this will get better with Drupal 8 but, for now, by design, Drupal is not architected for easy redeployability.

Sounds pretty damning? It may be, but a lot of work has gone into remediating this problem, and Drupal is still better at redeployability than most other CMSs. CMSs are designed to be products themselves, not to be used to build products. Drupal, however, is hyper-configurable. As a tool, it is made for infinite extensibility, to keep as many options open as possible. Compared to other CMSs, it’s easier to extend even if you don’t yet know what you will want to do in the future – which makes it great for one-off tailor-made projects. And this is a strength for some types of products and for certain parts of the development cycle.

In our consultancy, we started building “products” with Drupal because it was the right thing to do: all our products had some technically challenging component that we could contribute back to the community. Our developers were able to grow their skills significantly through the challenges they overcame and get an opportunity to build their reputation in the community. But these initial products didn’t make much business sense, except maybe as cost-leaders to promote our services. Most were not really sustainable as standalone products.

It has taken a long time, but after many iterations, we’ve learned a lot from these experiments. The projects we launched in the past two years have been much more successful. We still need to keep on fighting to get through what Seth Godin calls “the dip,” but we’ve gained a key insight: we now know the types of products we should be using Drupal for. In this article, I want to share with you the most important of these insights.

In his book, The Dip, Seth Godin argues that anything worthwhile requires you to go through “the dip,” a moment when you have to struggle with all your might to succeed. Most people give up in the dip, right when they should push hardest. The dip is what keeps services and products valuable; without it, they become commodities.

Drupal as a Prototype

Creativity requires boundaries. When you can do anything you want, it is really hard to be creative, because you get paralyzed by choice or, worse, you try to build everything perfectly the first time. It is now generally accepted in the startup world that the only good way to build a product is through an iterative, lean process, in which you create as many learning points as you can, using Minimum Viable Products and customer development to shape a product people actually want.

I’ve found Drupal to be an amazing tool for prototyping: content types, configurable fields, rules, triggers, profiles and a whole string of contributed modules make it really easy to throw together something fast. In a few hours of clicking you can go from an idea to a product your customers can interact with.

For example, the last week of January I built a first prototype for LinkForward, a web application that makes it easy to ask for introductions. The first iteration took me four hours. It had a user profile, actions that send e-mail introductions, a view that shows a single profile at a time – to reduce cognitive load – and a system of flags to keep track of the things you’ve already checked. The second prototype, after four more hours of work, allowed users to follow each other, so that they could prioritize asks from their acquaintances. After only 12 hours of development work, I was inviting people from my network to check it out.

As a result, I believe that Drupal is a really great tool for building what Alberto Savoia calls pretotypes: first prototypes of services that might fake most of the functionality a full prototype would require. It enables a focus on the user interaction, to discover if people will actually use a product, not just say they want to use it.

Minimum Viable Product (MVP) is a term coined by Frank Robinson, and popularized by Steve Blank and Eric Ries. It “describes a version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

A pretotype, as defined by Alberto Savoia, is very similar to the concept of an MVP, but is more specific: it focuses on “making sure that you are building the right it before you build it”. The weakness of MVP as a concept is that it’s too broadly used. Sure, it’s easy to say that a prototype will not be viable without a certain feature (e.g., design, a mobile app, LinkedIn integration, etc.), but pretotyping helps founders create something barbarically minimalistic.

Great! It’s easy to build a pretotype that helps you make sure you are building the right “it,” and it’s especially suited for startups, where the majority of the innovation is in the business model, not in technical development.

But... Be careful you don’t overdo the feature side. (A tool that limits the modules you can use, like Drupal Gardens, can be a great box to think in.)

And don’t expect to keep all the work you’ve done; it’s important to learn and rebuild from scratch.

Drupal for Non-Technical Founders

Drupal is an amazing tool for non-technical founders because there are so many options to create truly complex behavior – without writing a single line of code.

Actions, triggers, and rules let you create events that your product will respond to. A developer in Belgium told me about an enterprise company where a product owner creates prototypes of products in Drupal, before his developers build it out in Django. I don’t know why his team doesn’t use Drupal, maybe because they like Python better than PHP. Whatever the case, it is a fascinating example of the power of Drupal for non-technical entre/intrapreneurs.

And that’s the reason that I fell in love with Drupal, back in 2006. My first website was a technology listing page for a biotech research center that I quickly “clicked together” in Drupal. In all the years that I’ve been leading a Drupal company, I never needed to learn to code to be able to architect or even build projects. I’m grateful for Drupal; it gave me superpowers on the Internet and allowed me to build the foundations of some really complex systems.

Great! You don’t need to be a developer to build a prototype; Drupal allows you to build complex websites without a single line of code.

But... If you are not familiar with Drupal, you might make architectural decisions that cause problems down the road.

Drupal as a Platform

Most people start using Drupal because it’s such an amazing platform:

  • If you build a Drupal product, there is a range of modules that can be used to extend it. A vibrant community works on thousands of contributed modules that provide a large range of functionality.
  • Drupal has a whole ecosystem of consultancy companies, hosting and support solutions, and a plethora of integrations with third party services. As a result, it is much easier to build a whole product.
  • Drupal has a track record of large enterprise scale projects at some of the most prestigious organizations. When you sell a Drupal product, you are selling a pre-configured instance of Drupal. This makes it much easier to answer questions about security and scalability than with a product that you have developed from scratch.

All of these reasons make Drupal terrific for “me too” products. As a startup, you can step into an already proven market niche with an open source version of a software solution. You can use your product as an open source cost-leader to gain market share in the enterprise service market. Acquia’s OpenSaaS program is built around that concept: working together with teams in the community who have developed a Drupal version of a certain product category, which Acquia can then sell with its sales force.

“Cost leadership” is a term developed by Michael Porter. It describes a business strategy in which a company becomes the leader in a market through a focus on cost control. In the open source market, with a free product, the product itself has – by definition – the cheapest purchase cost. When you give away a free product as a commercial entity, you need to derive value from complementary services. Ideally, these provide a subscription-style component that generates recurring revenue. The most economically successful open source software products use a razorblade business model. They give away their product and recoup their investment on hosting, support, and other complementary services. (“Razorblade model” refers to the shaving industry, where handheld razors, like Schick, for example, are sold at a low price, but are only compatible with replaceable Schick blades.)

NuCivic’s DKAN is a tool that helps organizations publish open data sets. Through the OpenSaaS program, they were able to leverage Drupal’s reputation and familiarity to rapidly gain market share in the Open Data solutions market.

Another example of a product that leverages Drupal as a platform is, of course, Drupal Commerce – The Commerce Guys’ distribution that packages the functionality you need to make e-commerce sites. The Commerce Guys monetize their development through a combination of application support, professional services and development, and production hosting; in most cases, they leave the actual development of individual projects to their network of partners.

Phase2 created OpenPublic as a platform on which they can build websites for government and public policy organizations. In contrast with the Commerce Guys’ partner strategy, Phase2 actively seeks to implement projects themselves. Even when they share OpenPublic with the community, it gives them a marketing advantage when companies in the public sector go looking for a Drupal partner.

Great! Drupal is a mature platform that makes it easier to create a whole product.

But... As a Drupal consulting company it’s easy to start building “a product” without properly validating the opportunity for generating sufficient revenue from add-on services.

Drupal as a Frontend

Recently, we were asked to help a Silicon Valley bio-informatics startup with the development of the frontend for their web platform. They were developing their API backend in .Net and wanted to use Drupal as the frontend for their customers. At first, we were surprised that they wanted to use Drupal in this way, rather than building a frontend application in Angular or another Javascript library. Their choice made a lot more sense once it turned out that the frontend also needed to handle the e-commerce component of their application.

Other products have been built this way: Apigee’s developer portal is built on Drupal and lets you create API documentation. It also supports blogging and provides threaded forums. The main advantage of using Drupal as a frontend for their services is that their customers can customize the developer portal to meet their specific requirements.

Great! Plenty of developers know how to customize Drupal; that makes it easier to customize the interface for your services.

But... The theming system in Drupal has its quirks.

Drupal as a Market

Drupal has a worldwide community that is very well interconnected. It is easily addressable through community events, Planet Drupal, and other community communication channels. Two years ago, we completed a successful crowdfunding campaign in the Drupal community for WalkHub, a Drupal product that lets you record Walkthrough tutorials on top of web applications and websites. Targeting your product at the Drupal market, however, is not for the faint-hearted. Drupal is a micro-market; but it’s a great beachhead from which to cross the chasm to mainstream market adoption.

Typical examples of productised services targeted at the Drupal market are, of course, Pantheon’s and Acquia’s hosting solutions.

Great! The Drupal community has a lot of early adopters and technical innovators who welcome anybody who wants to extend the capabilities of Drupal. This makes it easy to get feedback on your product.

But... Over a million reported Drupal sites might sound like a big market, but it really isn’t. To be successful, a product will eventually need to expand beyond the Drupal community or use an aggressive sales and premium pricing strategy to generate sufficient revenue for significant growth.


I hope I’ve convinced you that there are significant opportunities for building products with Drupal. Even if you are not yet a Drupal geek, I believe it can be a valid strategic choice.

Image: © by Ondine Corewign

Sep 04 2015

When it comes to installing and maintaining Drupal, there is no need to dig far. Installing Drupal at any hosting provider is simply a matter of decompressing a tarball; Drush on the command line gives the seasoned sysadmin a wealth of administrative aids. However, when considering integration with the system as a whole, there is ample room for improvement. Drupal cannot exist outside of a given environment: the PHP version used, the way it is integrated into its host operating system, the modules for connecting to the database, and the database engine itself. Ideally, they will all make up a coherent entity with a single, unified administrative logic. Looking at each component individually would quickly lead to madness, especially for the sysadmin, who has to keep the whole stack secure and updated.


Debian is one of the earliest free software distributions built around Linux. By the time this article is printed, Debian 8 (code-named Jessie) should have been released with over 35,000 binary packages (that is, independent programs), and it lives up to the distribution's motto, the universal operating system. It runs on everything from the smallest embedded devices to the largest workstations, across many different hardware architectures.

System-wide Integration

One of Debian's strongest merits – what has made it a lively, energetic community with a sound technological platform and projection into the future – is its policy. Despite the number of available packages, they are all standardized: they are all configured in the same location, follow the same layout logic, and are guaranteed not to clash with one another. But where this policy shines most brightly is when it is applied to keeping things optimally administered. Debian provides security support throughout the stable release cycle, not just easing the system’s installation. As was already stated, our Drupal installs involve quite a bit beyond just Drupal itself. So, on a freshly installed system, the task of installing Drupal is just a matter of running:

# apt-get install drupal7

And all of the necessary software (that is, an HTTP server, the PHP programming language and execution environment, a relational database engine, and the needed glue between them) will be installed. To apply all of the pending security and reliability fixes across the entire system, run the command:

# apt-get update && apt-get upgrade

That’s all that's needed to get every component in the system up to date.

The Debian Drupal installation is multisite-aware. This means that all of the sites Drupal will respond to should be configured from the same location. If your host will serve both example.com and anotherexample.org, you only need to create the /etc/drupal/7/sites/example.com and /etc/drupal/7/sites/anotherexample.org directories and put the sites’ configuration files there. The codebase is shared, installed only once in the /usr/share/drupal7 directory.
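The layout above can be sketched as follows (example.com and anotherexample.org as in the text; ROOT is a scratch prefix so you can dry-run the layout without touching /etc):

```shell
# Dry-run sketch of the Debian drupal7 multisite layout.
# On a real system the directories live directly under /etc/drupal/7/sites.
ROOT="${ROOT:-$(mktemp -d)}"
CONF="$ROOT/etc/drupal/7/sites"

# One configuration directory per site; the codebase stays shared.
mkdir -p "$CONF/example.com" "$CONF/anotherexample.org"
touch "$CONF/example.com/settings.php" "$CONF/anotherexample.org/settings.php"

ls "$CONF"
```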

This has an interesting advantage security-wise when compared with what I have seen at most shared hosting providers. As all the Drupal code belongs to root, any attacker who manages to subvert Drupal's security – or the security of any of the installed modules – will not have enough privileges to modify your site’s code, and will thus have a harder time leaving a backdoor or modifying your site's behavior for their own interests. Even if they get hold of a local privilege escalation exploit, finding their misdeed will be easier: Debian ships cryptographic signatures for all of its files. By simply running the following command, any file that was modified will be reported:

$ debsums drupal7

Handling Modules

The Drupal ecosystem is very rich in third-party code: almost 30,000 modules and over 2,000 themes. Packaging them all for Debian is plainly unfeasible, but we have a tool – dh-make-drupal – that not only packages modules and themes pointed at it, but handles their dependencies as well. This way, even with a complex, multi-site and multi-server deployment, it’s easy to deliver code with all of the characteristics mentioned in the previous section.

Version Stability

Just as, within Drupal, all of the related PHP and Javascript libraries are frozen prior to a release and not changed at that point (to avoid breaking internal stability throughout the life cycle of a stable release), packages in Debian are frozen and do not get updated when new versions come out. But in order to keep security tracking at the level required by our users, all important fixes get backported to the corresponding release. For example, Debian 7 (frozen gradually from June 2012 until its release in May 2013) shipped with Drupal 7.14. But that does not mean that Debian’s Drupal package went unpatched for its three years of life: while feature releases were not incorporated, as you can see in our public Git repository, all bugfix releases were.

What About Drupal 8?

Now... With all the hype about the future, it might seem striking that throughout this article I have only talked about Drupal 7. This is because Debian seeks to ship only production-ready software: as of this writing, Drupal 8 is still in beta. Not too long ago, we still saw internal reorganizations of its directory structure.

Once Drupal 8 is released, of course, we will take care to package it. And although Drupal 8 will not be a part of Debian 8, it will be offered through the Backports archive, following all of Debian's security and stability requirements.

Wrapping Up

Of course, I understand the workflow and tools mentioned here are not for every person and situation. Drupal development is clearly better served by the “traditional” way of installing Drupal. However, for administering a large-scale Drupal installation, operating-system integration through Debian might be just what you need!

Image: "X-ray of computer motherboard" by tpmartins is licensed under CC BY 2.0

Sep 04 2015

When faced with the task of managing videos in Drupal, the number of available solutions can seem overwhelming. Where should you store the videos? What is the difference between CDNs (content delivery networks), cloud storage services, and hosted video solutions? And which Drupal modules should you use with which service?

Drupal Modules

By using some of the most popular modules for video handling, you can quickly set up a reliable video solution:

  • Media module – Although not specialized for video, this widely used module can be combined with others; some cloud services also provide their own modules that integrate with the Media module.
  • MediaElement module – Provides an HTML5 player (with Flash fallback). With the MediaElement module you can stream video to mobile devices as well, and the player can integrate with your responsive theme.
  • Video module – Handles video upload, transcoding, and playback; generates thumbnails; and can deliver videos from cloud systems.

Content Delivery Networks

CDNs are large, geographically distributed systems optimized for delivering content to end users over the Internet with high availability and performance. For video content, they are often coupled with a transcoding server.
They can be expensive, but they are a good choice for improving performance and delivering content on high-traffic websites. Because their data centers are distributed, they will be faster than a typical hosting provider for most visitors to your site. Also consider a CDN if you already have a transcoding server.

The CDN module provides easy Content Delivery Network integration for Drupal sites. It alters file URLs so that files are downloaded from a CDN instead of your web server.

Cloud Storage Services

Cloud storage services aren’t optimized for delivering rich media content on high traffic sites, but can be a cheaper alternative to CDNs – if you don’t have a huge number of videos and your site traffic isn’t very high.

The Media module alone doesn't provide full integration with cloud storage services (like Amazon S3, Rackspace Cloud, and Google Cloud Platform), but together with the Remote Stream Wrapper module, you can reference external files from Drupal.

Some modules for full cloud storage service integration:

  • Storage API is a low-level framework that can be extended to work with any storage service.
  • Google Cloud Storage allows you to replace the local file system with Google Storage. Files will still be managed by Drupal, but instead of being stored on the local server, they will be stored on the Google Cloud Platform.

Cloud-hosted Video Solutions

Hosted video platforms are specialized for video storage, transcoding, and playback. They already include the transcoding software, and additional features like players (for different devices), playlists, ads, live streaming or DRM (Digital Rights Management).

DRM is a robust content protection program that enables publishers to enforce policies and rules for usage of their content.

  • Brightcove offers a highly-scalable secure video hosting platform for delivering and monetizing video across connected devices.

    The Brightcove integration module adds Brightcove videos or playlists to your content, and accesses information from the Video Cloud from within your site. Brightcove actively maintains their Drupal module by paying a company from the community – Pronovix (where I work) – for development and support.

  • thePlatform offers different online video packages, from an enterprise-class video management system down to offerings for businesses with smaller video libraries. They also provide a transcoding engine. Their Drupal module, Media: thePlatform mpx, integrates with the Media module.
  • Wistia is a professional video hosting solution with analytics and video marketing tools. In Drupal, you can either use it with the Media:Wistia module or the Video Filter module that has extended Wistia support.
  • Vzaar is a video hosting solution offering quick, secure, and reliable services with much-praised support. Use it with the Media:vzaar module for Drupal integration.
  • Viddler offers Overture, a video hosting platform for enterprise corporations, and Arpeggio, a responsive HTML5 video player that enables you to create interactive video content and timeline commenting. Use the Viddler module for Drupal integration. (Note: The Drupal 7 version is in development, currently for testing only with limited functionality.)

Self-hosted Video Solutions

For an open source solution, you can opt for Kaltura – the world's first open source online video platform – or set up your own hosted video platform with OctopusVideo.

  • Kaltura provides enterprise level commercial software and services, as well as free open-source community-supported solutions for video publishing, management, syndication, and monetization. You can use it as a one-stop solution for all your Rich Media, including images, audio, and documents.
    Using the Kaltura module, you can either leverage the Kaltura hosted platform or grab the free source code and self-host the entire platform.
  • For your own hosted video platform, try OctopusVideo, an open source video transcoding and delivery platform built on Drupal.

Video Sharing Sites

If simply embedding YouTube or Vimeo videos in your Drupal site is enough for your project, you have more options to choose from:

  • Use the Embedded Media Field module and the YouTube integration for the Media module in both Drupal 6 and 7.
  • Use the Media module to embed videos from YouTube or Vimeo, by adding the URL to a field or through an editor.
  • In Drupal 7, use the Video Embed Field module to create a simple field type that allows you to embed videos from YouTube and Vimeo or show their thumbnail previews by entering the video's URL.
  • Use the Media Embed plugin for CKEditor that opens a Dialog where you can paste an embed code from YouTube or Vimeo.


As you can see, there is a wide variety of available solutions, differing greatly in price and performance: choose the approach that best suits your needs. This overview should help you get started!

Image: "Networking" by npobre is licensed under CC BY 2.0


Sep 02 2015

Jen Lampton and Nate Haug (co-founders, BackDrop CMS) explain their fork of Drupal 7: why they felt the move was necessary; who benefits from it; and the Drupal community’s reaction.
There’s also a blatant plug for Drupal Watchdog. (Which you should subscribe to, right now, before you forget: https://drupalwatchdog.com/subscribe/2015)

Aug 27 2015

Photo by Myles Brawer

As it says on the t-shirt, I’M NOT HIM.

Okay, I know I look a lot like Howard Stern.

And yes, I spent a pleasant hour chatting with him and Robin on his show that one time. (The video is somewhere on YouTube, but don’t ask.)

And yes, I auditioned for America’s Got Talent. (Three thumbs-up votes, one thumbs-down.)

And okay, yes, I’ve obligingly posed for thousands of selfies with Stern-fans.

But I’M NOT HIM! I’m not leading a double-life as Drupal Watchdog editor and the King of All Media.

Yes, but what if...?

So here’s a spoof Bob Williams and I made during DrupalCon Los Angeles. Yeah, I know, the audio on the elevator kinda sucks, but the acting!

The acting – and Ronnie Ray’s Drupal expertise.


Aug 26 2015

“People like the beard,” quips Jeremy Rasmussen (Director of Web Development, Lever Pulley), who writes a frequent column in the magazine (Subscribe! https://drupalwatchdog.com/subscribe/2015) about Drush, that Swiss Army knife that “makes Drupal bend to your will.”

Aug 24 2015

As web sites and applications have become more complex, the need for auditing – at multiple points in the lifecycle of a project – has become ever more important.

Before delivery, a web project can be audited to ensure the ability to meet business goals or compliance with regulations. After delivery, an audit can identify problems and propose remedies. In a possible merger or acquisition, an audit can help evaluate the project’s relative benefits and liabilities.

Website auditing has become similar to financial auditing (which is separate and distinct from accounting and financial activities), and to the practices applied in auditing management systems (see the “There’s a Module Standard for That” sidebar).

Website auditors must apply these four principles:

  • Judgment They must be able to choose the scope and granularity of the audit, without wasting effort on discovering problems with no meaningful impact on the behavior and performance of the site; hence, a need for business acumen.
  • Expertise In order to determine whether or not best practices were followed by the original site developers, auditors must achieve a level of proficiency beyond that with which the site was delivered.
  • Objectivity Auditors cannot audit a site they themselves produced, or else risk selective blindness – the inability to see problems they missed the first time around.
  • Distance Auditors cannot operate on a website developed by a company – especially their own – with which they have any kind of commercial or personal involvement.

The Real World

Market studies show that site audits are often used as a loss leader by generalist Drupal agencies. Their objective: to set the stage for redevelopment and third-party maintenance work, where the main volume of business is done, by using “findings” from a short, low-cost audit to give the developer a technical advantage over competitors.

Aside from preventing efficient market competition, this practice applies strong pressure on tailoring audit results to meet the specific abilities of the target company; i.e., it questions the trustworthiness of audits. Such “audits” neither meet neutrality requirements – because of the expected gain – nor meet proficiency requirements – because no company can operate above its own level.

In 2014, for the first time, many customers mentioned that the very existence of a code of ethics – albeit minimal – was an important factor in selecting one auditing practice over others.

In order to provide businesses and organizations, and even certifying bodies, with reliable audit reports, our sector must continue to mature and establish auditing as a separate activity – with strict ethical requirements.

Unlike the financial profession and general management systems, there is currently little regulation in place for web/app development vs. auditing (with a few exceptions like PCI compliance); the onus is on us practitioners to establish a code of ethics, based on the best practices identified in other sectors, but adapted to the specifics of our line of work.

Existing Texts and Drupal Work

The general standards community in ISO/IEC has already produced a significant amount of work, mostly targeted to business management systems, as in the ISO 19011 standard, by Technical Committee 176 on quality; the ISO/IEC 17021 standard on auditing bodies, by the Committee on Conformity Assessment; and the ISO/IEC 27006/27007/27008 series of standards established around security auditing.

The IEEE/ACM Software Engineering Code of Ethics and Professional Practice is also relevant to both auditors and system implementers.

Auditor Constraints

Am I able to audit this?
  • Technical proficiency consists of:
    • experience in delivering the technology;
    • “horizontal” cross-cutting knowledge of all technical aspects of websites;
    • constant training on the latest technical trends applied to Drupal projects;
    • acknowledging gaps in knowledge, and subcontracting accordingly.
  • Business proficiency entails “vertical” knowledge of the business in which the website under audit is deployed.
Am I free to audit this?
  • To avoid conflicts of interest:
    • never provide maintenance or other implementation services directly after you have performed a website audit;
    • never audit code delivered by a company with which you have a personal or commercial tie.
  • Avoid selective blindness by never auditing code you have previously delivered.

Inspiration and references can be found in national and international legal and regulatory texts on ethics in the financial auditing field, although some practices vary from country to country.

A fine example of a Drupal-specific audit code was created by the French auditing practice OSInet. It can provide a starting point for a common code of practice for the Drupal community, which auditors in all countries should be able to adapt to the specifics of their national environment.

If this topic is of interest, get in touch – or attend the sessions at the 2015 DrupalCon editions and other community events like Drupal DevDays 2015.


General auditing standards

  • Relevant body: ISO/TC 176/SC 3 – Quality management and quality assurance, and supporting technologies
  • Main standard: ISO 19011-2011 – Guidelines for auditing management systems

Security auditing

  • Relevant body: ISO/IEC JTC 1/SC 27 – IT Security techniques
  • Main standard series:
    • ISO/IEC 27006:2011 – Requirements for bodies providing audit and certification of information security management systems
    • ISO/IEC 27007:2011 – Guidelines for information security management systems auditing
    • ISO/IEC 27008:2011 – Guidelines for auditors on information security controls
Certification auditing

  • Relevant body: ISO/CASCO – Committee on conformity assessment
  • Main standards series: ISO/IEC 17021 – Conformity assessment and requirements for bodies providing audit and certification of management systems


Image: © richardlpaul on iStockphoto


Aug 20 2015

By now, if you have been using Drush for a while, I assume you are comfortable with the basics – things like downloading modules or updating a site. I also assume you know that Drush has plenty of other features built in, but that you probably don't take advantage of them. In this article I want to show you a few things that are just as easy to use as the basics and require only a little upfront setup. Once you learn them, they will quickly find their way into your daily workflow.

Shell Aliases

Consider how much time you spend typing out commands. Now think of all the commands you type over and over again. Next, think about all the commands that have a lot of options and how often you refer to help resources to remember which options you need to use. Wouldn’t it be better if we simplified those things? Conveniently, Drush allows you to do exactly that: create shortcuts or aliases in a file called drushrc.php. I'll refer to it as the command file later in this article.

Let's start with an easy example: the cache-clear command. This command isn't very long, but we can still improve on it – and it's probably one of the most frequently used.

Edit your drushrc.php file; if it doesn't exist, create it. It's typically in your home folder at:


Add this line to the bottom of this file and save it:

$options['shell-aliases']['ca'] = 'cache-clear all';

We just added a shell alias. Now instead of typing this:

$ drush cache-clear all

You only need to type:

$ drush ca

That wasn't too hard, right? That was one line of code, and you just improved on an already very simple command.

Let's take a closer look at the line we added to the command file. The first part, $options['shell-aliases'], tells Drush we are adding an alias. The second part, ['ca'] =, declares what the name of the alias should be. Last, we tell it what our named alias should do: 'cache-clear all';. Replicate this pattern, replacing the name and the command, to create your own aliases.

What if I told you that shell commands can have a Drush alias too?

Well, they can, and it's just as easy as the cache-clear example. They are structured the same way, with one minor difference: you need to prefix the command with an exclamation mark. I'll use git clone as an example. Let's say you want to create a shortcut for cloning the latest Drupal 8 from Git; I will name my alias “gcd”, for Git Clone Drupal.

Edit the same drushrc.php file and add the line below:

$options['shell-aliases']['gcd'] = '!git clone --branch 8.0.x http://git.drupal.org/project/drupal.git';

Now let’s give our alias a try by issuing this command from an empty folder:

$ drush gcd

Drush will execute that git clone command on your behalf and download the project into a folder named “drupal”. Seeing how easy it is to add aliases, I hope you are now thinking about all the possible aliases that can improve your development.

Smart Defaults

The command file also allows us to provide defaults for commands. This is particularly helpful for the commands that get your fingers in a tangle when you type them out. A good example of this might be the SQL dump command. Here's what I mean:

drush sql-dump --result-file=~/backups/myfile.sql --structure-tables-list=cache,watchdog

The above command dumps the database to a specific file, omitting the data (but not the structure) of the cache and watchdog tables. That's a lot of unnecessary typing, since we can set these options as defaults. Edit the same drushrc.php from before and insert the following line:

$options['result-file'] = '~/backups/@DATABASE_@DATE.sql';

That line tells Drush to always save dump files in a folder named backups. Thanks to the @DATABASE and @DATE tokens, the filename will also contain the database name and the date of the dump.

But what about the second part of the command? In the example above, I explicitly asked Drush to skip the data from two tables. Often it's unnecessary to export the data from tables like the caches. Since I routinely want to skip certain tables, I will create a list of common tables to ignore; using wildcards, I'll expand the list to cover every cache table and a few other things.

Add this line to the drushrc.php file:

$options['structure-tables']['common'] = array('cache', 'cache_*', 'history', 'search_*', 'sessions', 'watchdog');

Finally, we're going to add a line that will combine the smart defaults into an alias. Insert this line into your drushrc.php file:

$options['shell-aliases']['dump'] = 'sql-dump --structure-tables-key=common';

With three lines of code we have converted that long sql-dump command into two words:

$ drush dump

Think of all the commands you can create to make your daily development easier.
The bonus for taking the time to create your own custom commands is that you only need to do it once. Keep a copy of your commands handy in a text file somewhere; then, whenever you enter a new Drush-enabled environment, paste your pre-made commands into its drushrc.php file. You'll be up and running with all your familiar commands right away.
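
To pull the pieces together, here is what a consolidated command file might look like. This is just a sketch based on the snippets above: the alias names and the backups folder are my own choices, so adapt them to your setup.

```php
<?php
// drushrc.php – a hypothetical consolidated command file.

// Shortcuts: 'drush ca' clears all caches, 'drush gcd' clones Drupal 8.
$options['shell-aliases']['ca'] = 'cache-clear all';
$options['shell-aliases']['gcd'] = '!git clone --branch 8.0.x http://git.drupal.org/project/drupal.git';

// Smart default: name SQL dump files after the database and the date.
$options['result-file'] = '~/backups/@DATABASE_@DATE.sql';

// Tables whose structure, but not data, should be exported.
$options['structure-tables']['common'] = array('cache', 'cache_*', 'history', 'search_*', 'sessions', 'watchdog');

// 'drush dump' combines the defaults above into one short command.
$options['shell-aliases']['dump'] = 'sql-dump --structure-tables-key=common';
```

Drush searches several locations for this file, including the .drush folder in your home directory, so a single copy can serve all your local sites.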

Image: "Kit para emergências do dia-a-dia" by Dois Espressos is licensed under CC BY-NC 2.0

Aug 19 2015

In an exclusive, late-night interview, the fabulous Fabian Franz (Technical Lead, Tag1 Consulting) dishes the dirt on BigPipe in Drupal and drops a grenade on our favorite social media platform: “You know what?” he says, “Facebook cheats!” Hey, I’m totally not surprised; I saw the movie.

Want more Fabian? Read my interview with him (“Baby Steps”) in Drupal Watchdog 5.01 and, while you’re at it, subscribe: https://drupalwatchdog.com/subscribe/2015.

Aug 17 2015

Wait, $langcode? What the Heck?

If that was the most polite thought that crossed your mind when dealing with the Drupal 7 Field API, please read on.

No matter whether you build complex multilingual sites, or whether just hearing the words “Drupal” and “language” in the same sentence makes you want to hide in the darkest corner of your office, there are a few language-related notions that you really need to know to write Drupal 8 code that works properly. After all, language is an intrinsic property of textual content, and since Drupal is supposed to manage content, having to deal with language does not seem such a peregrine idea, does it?

Speaking of Content

Historically, content in Drupal is a user-friendly way to refer to nodes. However, in Drupal 8, content has a broader meaning: it refers to any entity type which stores information usually meant to be experienced in some form by a certain set of site users.

Content entities, such as nodes, comments, terms, custom blocks, custom menu links, and users, are all examples of this kind of entity type. The other main category is Configuration entities: node types, views, fields, menus, and roles, are meant to store information mainly related to determining the site behavior. Note that this distinction may not always be so clear-cut, as in some cases the choice of picking one category or the other may be determined mainly by implementation details, as in the case of module-provided menu links.

To sum up, when in Drupal 8 we speak of content, most of the time we are referring to content entity types.

Multilingual Content: A Bit of History

In Drupal 7, a new way of translating content was introduced by adding native multilingual support to the Field API, which made it possible to store multilingual values for any field attached to any entity type. But code implementing business logic needs to deal with field language explicitly, which makes for a very poor developer experience (DX); witness this infamous field data structure:

    $entity->field_name[$langcode][$delta][$column]

Unlike the $delta and $column levels, whose values are predictable, deciding on the correct value for the $langcode level is definitely not trivial. In fact, depending on whether the field is translatable, it may hold an actual language code or LANGUAGE_NONE. Dealing with a translatable field in Drupal 7 can be done in two different ways, depending on the business logic being implemented:

  1. Acting on all available translations, which is what the Field API does in (C)RUD operations.
  2. Acting on a single language, the active language, which is what the Field API does when rendering the entity or its edit form.

Dear developer: As you may have guessed (or learned the hard way), figuring all this out is your responsibility. (Ouch!)

One of the reasons dealing with field language is so hard in Drupal 7 is that, despite all the work that went into designing and coding the Field API, core does not provide a full-fledged Entity API to build on. As a consequence, the Drupal 7 “Entity Language API” is an inconsistent mess spread across three places:

  • Core provides an entity_language() function that can be used to determine the active language.
  • The Entity API module provides the entity metadata wrapper which, among the rest, makes it easier to access field values and deal with their language.
  • The entity translation CRUD hooks are provided by the Entity Translation module.

Content Translation Models

In Drupal 7 we have two competing models to translate content:

  • The core Content Translation module allows us to translate nodes by creating one node for each translation and grouping them into translation sets.
  • The Entity Translation module relies on the Field Language API to implement field translation support for any fieldable entity type.

The main reason why the field translation model was introduced is that the node translation model, although easy to implement and support on a superficial level, has several drawbacks when trying to deal with more advanced use cases, in particular when needing to share data among translations.

Another issue, which can be problematic when building a website that needs a high degree of symmetry among its language versions, is that identifiers differ for each language.

Last but not least, we need to be able to translate any entity type, but extending the translation-set approach to work universally is definitely not a trivial effort. On the other hand, aside from the DX issues, making field translation work properly does not turn out to be easy either: any non-field data attached to entities is unsupported and requires a workaround like the Title module, which allows us to translate entity labels.

In fact, having two models in core is a bad situation: site builders have to pick one, developers have to support both, and everyone has to understand both. This imposes additional cognitive, operational, and maintenance burdens that in many cases make building a multilingual website in Drupal 7 a painful experience. In Drupal 8, we faced the need to resolve this “conflict” and provide a single content translation model. The solution we found is a unified model: if every piece of data is a field, we can replicate the node translation model by making every field translatable. The only difference is that we have a single entity per translation set.

The Content Translation UI

The Drupal 8 Content Translation module comes with a very powerful configuration page, that allows us to configure translatability from bundle to field property level for all the supported entity types. It also allows us to configure which should be the entity default language, and whether it should be alterable or not.

The content translation UI is very similar to the Drupal 7 one: the main differences are the source language picker and the labels indicating that a field is not translatable.

The “Content language and translation” configuration page.

The content translation user interface.

The table layout for translatable and revisionable entity types

  • The {entity} base table holds entity keys and metadata only:
    | entity_id | uuid | revision_id | bundle_name |
  • The {entity_revision} table holds the basic revision entity keys and language:
    | entity_id | revision_id | langcode |
  • The {entity_field_data} table stores entity field data per language:
    | entity_id | revision_id | langcode | default_langcode | label |
  • The {entity_field_revision} table holds the revisions of field data:
    | entity_id | revision_id | langcode | default_langcode | label |

Translating Every Field

The main obstacles toward making every piece of data a field (and making it translatable) in Drupal 7, were the huge DX issues mentioned above and, above all, the lack of a storage layer supporting multilingual values for any field.

In Drupal 8, we can rely on a solid core Entity API that exposes entities as classed objects, and thus allows us to encapsulate all the complexities involved in dealing with field translatability. We also have an Entity Storage layer that provides a unified way to load and store field data in a storage-agnostic fashion. This means entities can be stored in an SQL database (the default implementation), as well as rely on MongoDB or an XML storage; the decision is totally up to the current storage handler. This also means we can bake native multilingual support into each storage handler without needing to touch any other part of the Entity API.

When entity translations are added to or removed from the storage, the following hooks are fired:

  • hook_entity_translation_insert(EntityInterface $translation)
  • hook_entity_translation_delete(EntityInterface $translation)
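
As an illustration, here is a minimal sketch of how a module might respond to the first of these hooks. The module name (mymodule) and the logging behavior are hypothetical, invented for this example; only the hook name and its signature come from the API above.

```php
<?php
use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_translation_insert().
 *
 * Hypothetical example: log each translation added to any content entity.
 */
function mymodule_entity_translation_insert(EntityInterface $translation) {
  \Drupal::logger('mymodule')->notice('Added @langcode translation for @type @id.', array(
    '@langcode' => $translation->language()->getId(),
    '@type' => $translation->getEntityTypeId(),
    '@id' => $translation->id(),
  ));
}
```

A matching hook_entity_translation_delete() implementation would follow the same pattern.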

Core SQL Storage

The default SQL implementation distinguishes between base fields – which are attached to any entity – and bundle fields – which are attached only to certain bundles. The former are stored in the entity's shared tables; the latter, as well as base fields with multiple cardinality, are stored in dedicated field tables. Field tables natively support multilingual values, exactly as in Drupal 7; for shared tables, four different layouts are supported, depending on the entity type definition:

  • Simple entity types use just a base table, where all base field values are stored.
  • Revisionable entity types use a base table and revision table, where all the base field revisions are stored.
  • Translatable entity types use a base table and a field data table. The former stores just very basic data, like id, bundle or UUID, while the latter stores base field data, one record for each available translation.
  • Revisionable and translatable entity types use four tables to store basic data, revision metadata, base field translations, and base field translated revisions.

The Entity Storage API allows us to switch between table layouts by merely altering entity type definitions. This allows us to pick the most performant table layout that suits the project requirements. By the way, did I mention that Views relies on the Entity Storage API to provide native multilingual support?

Entity Querying and Multilingual

The Entity Query API does not make any assumptions about language conditions:

$result = \Drupal::entityQuery('node')
  ->condition('promote', 1)
  ->condition('status', 1)
  ->execute(); // Nodes with at least one published and promoted translation

$result = \Drupal::entityQuery('node')
  ->condition('promote', 1)
  ->condition('status', 1)
  ->condition('langcode', 'en')
  ->execute(); // Nodes with at least one promoted English translation

$result = \Drupal::entityQuery('node')
  ->condition('promote', 1)
  ->condition('status', 1)
  ->condition('default_langcode', 1)
  ->execute(); // Nodes with promoted original values

Shut Up and Show Me Some Code

Being able to store multilingual values for any piece of content is great, but is that enough? Smart readers will have guessed the answer at this point, I suppose.

Although many of the intricacies of dealing with field translatability have been hidden below the ContentEntityBase rug, developers still need to keep in mind that any entity type may find itself operating in a multilingual environment sooner or later, so the related business logic should be coded accordingly.

Okay, let's see some examples!

Accessing Field Data

The new Entity Translation API relies on a very simple concept: every (content) entity object represents an entity translation.

    // A translation object is a regular entity object.
    $entity->langcode->value = 'en';
    $value = $entity->foo->value;
    $translation = $entity->getTranslation('it');
    $it_value = $translation->foo->value;

    // A translation object can be instantiated from any translation object.
    $langcode = $translation->language()->getId(); // $langcode is 'it';
    $untranslated_entity = $translation->getUntranslated();
    $langcode = $untranslated_entity->language()->getId(); // $langcode is 'en'
    $identical = $entity === $untranslated_entity; // $identical is TRUE
    $entity_langcode = $translation->getUntranslated()->language()->getId(); // $entity_langcode is 'en'

As you can see, field language is no longer exposed in the public API. (Yay!) Additionally, thanks to some behind-the-scenes magic, untranslatable field data is shared among entity translation objects, while translatable field data is accessible only from the related entity translation object.

    // Untranslatable field data is shared among all the translation objects,
    // while translatable field data is per-translation.
    $entity->langcode->value = 'en';
    $translation = $entity->getTranslation('it');
    $en_value = $entity->field_foo->value; // English value
    $it_value = $translation->field_foo->value; // Italian value
    $entity->field_untranslatable->value = 'foo';
    $translation->field_untranslatable->value = 'bar';
    $value = $entity->field_untranslatable->value; // $value is 'bar'

This means that we no longer need to worry about field translatability, which is quite a relief, to put it politely.

The Active Language

In Drupal 8, every translation is an entity object with its own language; we only need to pass it around, as we’re already used to doing, to make the active language available in any part of our code base.

    $langcode = \Drupal::languageManager()->getCurrentLanguage(LanguageInterface::TYPE_CONTENT)->getId();
    $translation = $entity->getTranslation($langcode);

    function entity_do_stuff(EntityInterface $entity) {
      $value = $entity->field_foo->value;
      $langcode = $entity->language()->getId();
      // do stuff
    }

As you can see, in Drupal 8 entity_do_stuff() can be completely language-agnostic, unless its business logic explicitly deals with language, which can be retrieved from the entity (translation) object.

Entity Language Negotiation

In Drupal 7, the only way to determine which translation should be selected among the available ones is by calling field_get_items(), which tries to determine which translations are available by inspecting field values through field_language(). If a field translation is missing for a certain language, field_language() returns a fallback value, which may lead to different languages being used for different fields: a confusing and unintended behavior. When writing that code, we assumed that all field translations would be in the same language.

Additionally, this behavior makes sense only in a rendering context; applying field language fallback in a save context would cause non-existing translations to be stored. In Drupal 8, we fixed this mess by introducing the concept of entity language negotiation and applying it to the whole entity object: we inspect the available entity translations and pick the one that best suits the available contextual data. As a result, empty values are simply treated as such, instead of triggering field-level language fallback.

    // D8 features a reusable entity language negotiation API.
    function viewEntity(EntityInterface $entity, $view_mode = 'full', $langcode = NULL) {
      // If $langcode is NULL, the current content language is used.
      $translation = \Drupal::entityManager()->getTranslationFromContext($entity, $langcode);
      $build = array();
      // do more stuff
      return $build;
    }

    // A context can be provided.
    function node_tokens($type, $tokens, $data = array(), $options = array()) {
      if (!isset($options['langcode'])) {
        // This instructs the system to use the entity original language.
        $langcode = Language::LANGCODE_DEFAULT;
      }
      else {
        $langcode = $options['langcode'];
      }

      // The default operation is 'entity_view'.
      $context = array('operation' => 'node_tokens');
      $translation = \Drupal::entityManager()->getTranslationFromContext($data['node'], $langcode, $context);
      $items = $translation->get('body');
      // do stuff
    }

Modules can hook into the entity language negotiation process and alter it based on the contextual data passed along via hook_language_fallback_candidates_alter(). There is also an operation-specific version of the hook: hook_language_fallback_candidates_OPERATION_alter().
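A hedged sketch of such an alteration — the module name and the preference for Italian are invented for illustration:

    /**
     * Implements hook_language_fallback_candidates_OPERATION_alter() for
     * the 'entity_view' operation.
     */
    function mymodule_language_fallback_candidates_entity_view_alter(array &$candidates, array $context) {
      // $candidates is a list of langcodes keyed by langcode, in order of
      // decreasing priority; $context carries the operation and, for entity
      // operations, the entity object itself under the 'data' key.
      if (isset($context['data']) && $context['data'] instanceof \Drupal\Core\Entity\EntityInterface) {
        // Move Italian to the top of the candidate list.
        $candidates = array('it' => 'it') + $candidates;
      }
    }

Reordering (or pruning) $candidates here changes which translation getTranslationFromContext() ultimately returns.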

This API is not limited to entities, but contextual data allows us to tell when entities are involved. In fact, the entity object is always part of the contextual data. The default operation (entity_view) works correctly for view builders and form handlers. Entity language negotiation is automatically applied to entities being referenced in the current route's path (e.g., node/1), so in these cases there is no need to explicitly instantiate a translation object: the correct one is provided by default. The same is true for all the hook_form_alter() implementations and the various callback functions involved in entity form building and entity rendering.

Entity Translation Handling

If your code explicitly needs to deal with translations, and it needs to act on all translations instead of just dealing with the active one, there are a few useful methods to help with that.

    // Acting on all translations.
    $languages = $entity->getTranslationLanguages();
    foreach ($languages as $langcode => $language) {
      $translation = $entity->getTranslation($langcode);
      // do stuff
    }

    // Creating a new translation after checking that it does not exist.
    if (!$entity->hasTranslation('fr')) {
      $translation = $entity->addTranslation('fr', array('field_foo' => 'bag'));
    }

    // This is equivalent to the following code, except that the following
    // throws an exception if an invalid language code is specified.
    $translation = $entity->getTranslation('fr');
    $translation->field_foo->value = 'bag';

    // Accessing a field on a removed translation object causes an exception
    // to be thrown.
    $translation = $entity->getTranslation('it');
    $value = $translation->field_foo->value; // throws an exception

What is Missing?

Okay, this sounds great (hopefully), but is it really as cool as it sounds?

Not yet. There is some more work to do to make all of this perform flawlessly:

  • We still need to finalize the SQL storage API to make it possible to actually switch between table layouts. The foundational code is in place, but the entity-type-specific storage handlers need to be updated to deal with it.
  • The Content Translation UI still needs some TLC with UX improvements, and UI polishing would be more than welcome.
  • And last but not least: bug fixing. You know, from time to time, in the process of rewriting core, we inadvertently introduce a bug or two. We were not able to fix them all so far. Maybe you will?

Ad Maiora

To sum up: even if developers still need to take some care when designing and coding functionality that may end up operating in a multilingual environment, we truly believe we have built a system far superior to the one in Drupal 7, a system that minimizes the effort required to make things simply work properly.

So what are you waiting for? Make Drupal 8 shine even brighter!

Image: ©aaabbc-Fotolia.com

Aug 12 2015

Narayan Newton (Partner, Tag1 Consulting), takes a break from spring-cleaning his infrastructures to recall a once-upon-a-time Drupal takeover of a hotel kitchen, and the “semi-gourmet” meal that resulted.
He also divulges the nature of his continuous integration setup. (Jeeves. Or maybe Jason. Jenkins? I don’t know, something with a “J.”)
Finally, he admits that “I vote for President based on hair, so, yes, I’d definitely vote for Dries.”
BTW, this would be an excellent time for you to subscribe to Drupal Watchdog: https://drupalwatchdog.com/subscribe/2015

Aug 11 2015

Everyone working on software has a baseline competency with communication, yet it’s not unusual to see the time required to communicate effectively viewed as a distraction from the real task at hand. People seeking to deliver valuable software in a more timely and reliable fashion tend to turn to concrete changes – a new ticket system or software tool, more developers, performance bonuses – rather than delve into developing communication skills, despite the many concrete ways improved communication can increase a project’s momentum.

Set and Communicate Goals for Response Time

You’ve probably experienced some form of the information black hole. Maybe you put in a request to have an account created but have no idea if it will be minutes or months until the request is fulfilled. It’s important, but not urgent, and you don’t want to be “that person” who demands immediate attention. If only you knew how long it should take, you could set aside the stress of not knowing. Then, if a response didn’t arrive when it should have, you’d know it was time to double-check.

Both individuals and teams can:

  • Set honest response times;
  • Communicate them clearly and visibly;
  • Monitor how they’re living up to those goals;
  • Adjust processes or goals accordingly.

People are free to focus on other things when they know how long they have to wait.

Setting such expectations also frees you from more “Is it ready yet?” e-mails. Sending an auto-reply or a cut-and-paste acknowledgement like this should do the trick:

“If your request hasn’t been answered in three working days, something has gone amiss. Poke us with a stick by e-mailing [email protected].”

It can be as formal or as playful as suits your needs.

Goal-setting applies equally to long processes. Don’t underestimate how helpful it can be to let people know if you have no idea when something will be done, but you’re sure it won’t happen before some specific time. For example, when I applied for a volunteer position that required a police background check, they were clear up-front that there was no chance of placement in less than six months. They knew that was too long for many people to wait. By telling me, I didn’t need to hold space or put off shorter-term opportunities wondering if they might call me at any time. Yes, the long process meant they lost potential volunteers, but they didn’t create the kind of ill-will or persistent “Are we there yet?” inquiries that not knowing creates.

If you find that your response time is too unpredictable, or takes so long you’re embarrassed to say it publicly, being honest and clear is more respectful. Perhaps more important, it increases the chance that the underlying problems will get addressed.

Question How You Think About Time

It’s not unusual on development projects to think about how long something will take in terms of the hours the task itself will require. Questions may come in the form of: “How hard would it be …?” “How long would it take …?” “How much longer is this going to take? You’re already four hours over!” “How many of these hours are billable?”

People typically answer such questions by saying how long it will take to do their part of it. A developer, for example, might consider the time it will take to figure out a solution and implement it. She might not consider the amount of time the rest of the team will have to spend communicating changes, testing, configuring, documenting, deploying, or training. Indeed, it’s difficult to know everything the team does in support, and harder still to estimate how long it will take to do their part. (And there’s almost never enough time allowed to account for the time spent making and adjusting the estimates!)

I’m no fan of {n}-hour estimates. Are people talking about {n} perfectly productive consecutive hours? Or just perfect hours? I get much more utility from asking, “What’s the least amount of time this could possibly take?” and “What’s the soonest you can imagine this being deployed?” That’s useful information, and it’s surprising how many times re-asking an estimation question this way will end up with a longer estimate than “How long would this take?” Murky as it may be, it’s still helpful to keep an eye on how hourly estimates match up to reality. If specific kinds of work aren’t being included, it’s likely to affect the sense of progress.

“How long will this take?” can also turn into “How long should this take?” which can affect momentum in a couple of ways.

First, when there are overlapping skill sets among people with varying degrees of expertise, it’s easy to think things like, “Oh, she can export this to a feature much faster than I can. It’ll be five minutes for her, and I know it’s going to take me forever. I’ll open an issue for that.” Sometimes that’s just the right thing to do. But it’s not unusual to find out that it does not, after all, take someone less time than you. They just dislike the work less or endure it more quietly.

“Someone else is better/faster/smarter” insidiously affects the overall project momentum. Perhaps 30 minutes was allocated for a feature export. I look at the ticket. I have time to do it, even though I’m pretty sure it will take at least two full hours. I could pass it by to let someone more skilled handle it. On the surface, I’d be saving one-and-a-half (or more) hours of effort, but I know that this skilled person doesn’t have time to do it right now. If they take a day or two to get to it, from an overall project momentum standpoint, it’s not an hour-and-a-half savings; it’s a 24-48 hour delay.

For some issues, this matters. For others, less so. There’s a good chance that some degree of latency like this exists in your projects and may be a prime opportunity for increasing the overall momentum. Try focusing on overall impact instead of a narrow sense of how long a task should take.

Cultivate Information Wayfinders

If you’ve tried to navigate a maze of bureaucracy and had the good fortune to find that one person willing and knowledgeable enough to tell you who to talk to, what forms to submit, and who has the answer you need, then you’ve met an information wayfinder.

In their own way, software projects can become as complex as bureaucracies. But whereas bureaucracy tends to have massive, entrenched, and formalized ways of doing things, the pace of technological change and the relative youth of software development is less maze-like and more akin to traveling through uncharted territories. Depending on the organization, you may have to succeed at both traversing the labyrinth and blazing a trail through the wilderness.

In either of these environments, wayfinders channel the energy of participants into the project itself, rather than leaving individuals to struggle through the jungle or navigate the maze on their own.

Developing the Skills of a Wayfinder

Wayfinders have many skills, and even if you don’t see yourself as suited for the role, you can still acquire a few of the skills:

  • Learn how to reach people. Wayfinders know the preferred communication media of the team members and stakeholders for both regular and urgent communication. For example, if you’re aware that someone checks their e-mail once a day but responds to a text in seconds, you can remove hours or days of latency. It's usually best to stick to official project communication channels first and use alternates only when something is stuck or urgent.
  • Learn people’s work style. What time of day are they typically most effective? At what tasks? When do they benefit from a break? Wayfinders help people shine.
  • Learn who’s who. Wayfinders learn the work well enough to know who has the answers to what kinds of questions; they make connections between people who need information and people who have it. You may not know anything about the caching layer of Drupal, but if you know who does, and someone is lost, you can point them in the right direction.
  • Amplify the connections you make. Amplify what you know and what you discover by creating shared resources: documentation, FAQs, and tools that increase the chance that someone can find information on their own or figure out where to turn.
  • Monitor flow. Keep a casual but continuous eye on asynchronous communication channels (mailing lists, issue queues, etc.) for communication that isn’t responded to within the project’s target response time.

Wayfinders tend to be generalists, knowing a little about a lot of areas; they can be difficult to find and keep in the software world. When talk is silver and code is gold, in workplaces that maintain different pay rates for people who code and the people who amplify the effectiveness of the code, it’s not surprising that wayfinders don’t stick around in great numbers.

Support Each Other

If this sounds like a lot of work, it is. Even the most skilled communicators can become unfocused, exhausted, distracted, or otherwise unable to support the emotional and information needs of others. Independent of specific roles, people who find themselves unable to communicate effectively – and must rely on teammates for a while – can still benefit both the team and the project.

I’ve been talking about removing unnecessary communication delays, but it’s just as important to know when delays are beneficial. It’s good to hold back when you’re too angry to be responsible; to take time to heal when you’re sick; to recognize when you need to recharge or take a fresh look.

When Your Communication-fu is Low

  • Clearly adjust response expectations. Like a vacation reply or the warning on a support line, telling people they’ll encounter an atypical delay can relieve pressure.
  • Don’t just go silent. If you’re busy working and it suits the nature of that work, show progress. It isn’t always possible, but frequently pushing code, making drafts visible, or otherwise letting your work speak for you can reduce other people’s uncertainty about where you are and make it less necessary for them to interrupt you.
  • Seek support. Look for people you can trust to filter for you when you need to focus, when you’ve been pushed to your limit, or when you’re overwhelmed with other priorities. Enlist them to lend a hand. Look for opportunities to reciprocate by doing the same for them or exchanging disparate skills.
  • Don’t take emotional labor for granted. No one is entitled to being supported this way. It takes time and energy. Asking for what you need doesn’t guarantee you get it.

Protect the Zone

It takes extraordinary energy to communicate deliberately. It takes enormous focus to write complex code. It takes both to learn new systems and document them. It takes focus and diligence to test and provide feedback. Visual design, project coordination, marketing, sales: the myriad roles on a team all have a “zone.” Protect yours. Respect others’.

Don’t expect a person tasked to deliver in one area to excel in the others. Avoid asking people to switch in and out of different modes of thinking. Even someone skilled in multiple areas and fluent at switching pays a price for task-switching. It’s the energy equivalent of driving in stop-and-go traffic. A lot more fuel gets burned. Sometimes it may be necessary, but it’s seldom desirable.

Ways Individuals can Protect the Zone

  • Respect boundaries. If someone sets “Do not disturb” flags, don’t disturb them.
  • Set your own boundaries. Make space for yourself to be effective. Set your own do-not-disturb flags, create time when you’re not in company chat, take a break when you need it. Do what works for you.
  • Employ empathy. Maybe someone working hard to get a production server back up isn’t able to communicate nicely. People pulled from their zone have often temporarily lost the ability to empathize and can get cranky. When someone is struggling, look for ways to help them rather than unleashing judgment, which moves everyone away from what they’re good at and creates friction.

Organizations and Projects Can Protect the Zone

The specific context of a project will affect both what is possible and what works, but here are some things that can help people find their productive zone and stay there:

  • Allow work-from-home time for office-based teams.
  • Designate collaborative time where people can count on getting questions answered.
  • Designate quiet time where people are automatically protected from interruption.
  • Maintain a release cadence. Predictable release cycles allow participants to budget their time and energy accordingly.

It’s much easier to see and quantify what is missing and confused in the artifacts of software development: unimplemented key features, crashed servers, failed tests, contradictory documentation, difficult user experience, spaghetti code. It’s harder to put a figure on the effect of communication and information flow on time, budget, and morale. Too often, people think it’s just the way things work and don’t realize that communication skills can be honed as surely as technical skills. But everyone is affected by the energy drain of poor communication and buoyed when the information is flowing well.

Every software project has a quality with which it moves. Is it difficult to get anything done or nearly impossible to keep up? Is it filled with hurry-up-and-wait or does it hum along seemingly without effort? When the course needs adjustment, are we talking the lumbering performance of a cruise ship or the dangerous course-correction of a speed boat?

In the same way that the distance to be traveled, the strength of the engine, the cargo, the size and skill of the crew, the needs of the passengers, the weather conditions, the available fuel, the quality of maps, and more will characterize a journey by sea, software development has myriad factors that affect the journey, too. Any of them can be adjusted or reconfigured in order to affect the project’s momentum: more staff, fewer features, different timeline, fewer meetings, more demos.

Communication skills may seem like the water and wind, things we must have and respond to, but over which we have little control. But the way we communicate greatly affects the ease and speed of our software development journey, and we can strategically improve communication skills as surely as we can improve technical skills.


Aug 07 2015


This blog post is intended to be a comprehensive analysis of, and response to,
#2289619: [meta] Add a new framework base theme to Drupal core; it encapsulates how I believe core should move forward with a default theme. For brevity, the term "framework" in this post always refers to "front end framework".

The Problem

Core has always provided its own themes; however, the benefit of sustaining this model is becoming increasingly difficult to justify.

Fast Moving Front End Technologies

In regards to the front end, core has historically been more reactive than proactive. This isn’t all that surprising, nor a bad approach, when you take into account what core originally had to deal with.

Now consider, for a moment, all the different technologies and techniques used on the front end today, and compare them with when Bartik was created roughly six years ago. Things have changed drastically with the advent of Responsive Web Design, HTML5, and CSS preprocessors like Sass and LESS.

Web Components are possibly the next major milestone in front end development, with an impact potentially just as large as, if not larger than, that of HTML5 and Responsive Web Design. While the concept isn't necessarily new (frameworks have had "components" for years), it is definitely being positioned to become a "web standard", with Google leading the way. Web Components are an encapsulation of multiple front end technologies supported directly by the browser. This is also likely the future of what we now consider "frameworks": a consolidation of web components. Perhaps of the web as a whole; only time will tell.

2015 DrupalCon Los Angeles Session: Drupal 9 Components Library: The next theme system

The 1% of the 1%

According to the generally known 1% rule, only ~1% of any given internet community is responsible for creation. For Drupal, this figure is actually even more drastic: only about 0.02% are core developers.

This fact is what makes core developers a "rare commodity". I would, however, like to take this a step further. Of that 0.02%, how many of those core developers are front end developers? I’m sure no one knows the exact ratio, but it is commonly accepted that core front end developers are an even "rarer commodity".

Back end developers are the majority, and many have little to no interest in current front end technologies, techniques, or "Best Practices™". I'm not discounting the fact that there are back end developers who cross over, or vice versa, but there are just a handful of such "unicorns".

Without enough of a front end ratio in core development, it is impractical and unfair for everyone involved to expect any sort of "solid theme" from core. It is because of this fact that not having an external framework is actually hurting us from a community, development and maintainability standpoint.

"Getting off the Island"

Core has already adopted the "Proudly Found Elsewhere" mentality for a lot of its back end architecture. The benefits that this approach has brought to our community have proven predominantly fruitful. The front end should be no different.

Core really hasn't accepted this direction for front end themes, but doing so would allow core development to focus solely on the integration of an external framework. This would reduce a lot of the technical debt required to sustain the design and implementation of CSS and JS in a core theme.

Keynote: Angela Byron — Drupal 8: A Story of Growing Up and Getting Off the Island — php[world] 2014

Automated testing

While the automated testing infrastructure (DrupalCI) is definitely improving, it is still primarily focused on the back end side of things.

Core has wonderful unit testing. These unit tests include and ensure that a theme can implement the necessary hooks available to it via the theme system APIs. It is also great at ensuring that a theme's markup is correctly generated.

However, that is where the benefits of automated testing of core themes end. Any patch that affects CSS or JS must undergo a very manual process which requires contributors to physically apply patches and "test" changes in multiple browsers. This often results in bucket loads of before and after screenshots on an issue. This is hardly ideal.

The reason many front end oriented projects live on GitHub is their ability to integrate amazing automated tests through tools like Travis CI and Sauce Labs. Being on GitHub allows front end projects to rigorously test the specific areas their technologies implement, which leads to the stability of their codebases.

Perhaps one day the core and contrib themes could leverage the same type of abilities on drupal.org, perhaps not.

Regardless, testing is just as paramount to the front end as it is for the back end.

Unless the testing infrastructure is willing to entertain the possibility of theme-based CSS and JS testing, an external framework is really our only alternative for stable front end code. At the very least, implementing an external framework allows us to defer this decision.

Popular Drupal-grown base themes

There will always be use cases for the popular Drupal-grown base themes. It really just depends on the project and capabilities of a front end developer. There's nothing wrong with them and I have used them all. They are without a doubt a very powerful and necessary part of our ecosystem.

There is often a lot of misunderstanding around what these base themes actually are, though. Many of them started out simply as a way to "reset" core. Over time, many have added structural components (grid systems), useful UI toggles, and other tools. However, the foundation for many of them is simply to provide a starting point to create a completely custom sub-theme. Intended or not, they are essentially by front end Drupal developers for front end Drupal developers.

The priority for these base themes is not "out-of-the-box" visual consumption, but rather providing a blank canvas supported by a well groomed toolset. It is because of this "bare nature" that they can actually become more of an obstacle than a benefit for most. They essentially require an individual to possess knowledge of CSS or more, to get even the most basic of themes up and running.

Their target audience is not the other 99.9998%, which makes them not viable for core.

The Solution

Due to the complexity of the theme system, the work with Twig has been a daunting task. This alone has pushed a lot of what we thought we would "get to" into even later stages. I propose that the next stage for a default core theme is to think long term: adoption of an external framework.

Proudly Found Elsewhere

Core cannot continue to support in-house theme development, at least until there is a higher ratio of core front end developers. External frameworks live outside of Drupal and help ensure fewer "Drupalisms" are added to our codebase. Adopting one also allows Drupal to engage with the front end community on a level it never has before.

Vast and Thriving Ecosystems

Because frameworks typically live outside of larger projects, they usually have vast and thriving ecosystems of their own, which often produce a flurry of additional and invaluable resources: documentation, blog posts, in-depth how-to articles, dedicated Q&A sites, support sites, template sites, forums, and custom plugins/enhancements. Drupal would instantly benefit from these existing resources, allowing our community to offset some of the learning curve.

Back end developer friendly

These resources also allow a back end developer to focus more on implementing existing patterns than on having to create new ones. Core developers could then focus solely on the theme system itself and on providing the necessary APIs for theme integration, rather than on the more complicated front end CSS or JS implementations.

Why Bootstrap?

There are many frameworks out there and, quite frankly, attempting to find the one that is "better" than the other is futile; frameworks are simply opinionated "standards". You may agree with one’s opinion or you may not. It does not change the fact that they all work.

The question that remains is: Which framework do we put in core?

I strongly believe that it should be Bootstrap. A lot of individuals, including myself, have already put in a great deal of time and effort in contrib to solve this exact issue: how to use an external framework with Drupal.

Another advantage of using Bootstrap is that it is already backed by a massive external community.

The main strength of Bootstrap is its huge popularity. Technically, it’s not necessarily better than the others in the list, but it offers many more resources (articles and tutorials, third-party plug-ins and extensions, theme builders, and so on) than the other four frameworks combined. In short, Bootstrap is everywhere. And this is the main reason people continue to choose it.

In just two and half years, the Drupal Bootstrap base theme has grown exponentially at a whopping 2596.67% (based on 7.x installs from: 1,070 on January 6, 2013 to: 70,531 on July 12, 2015*) and has become the third top most installed Drupal theme on drupal.org.
*Note: I have chosen to exclude the past two weeks of statistics as I believe they are in error due to #2509574: Project usage stats have probably gone bad (again).

While I cannot attest to the exact reason this rapid adoption has occurred, here is an educated guess: it's what the people want. I purposefully made something that was easy to install and worked right "out-of-the-box", ensuring that the focus of the project was on the other 99.9998%.

No other Drupal project that implements an external framework can claim this or even come close to it.

Ease of use is paramount and often overlooked by developers. This "philosophy" is what has allowed sites like Dreditor to be born and this one, Drupal Watchdog, to be redesigned given some rather severe time constraints.

Conclusion: Drupal 8, 9, 10...

Adopting an external framework is just the logical next step in core's "Proudly Found Elsewhere" mission on the front end. Regardless of which framework is ultimately chosen, I think it is more important to see why Drupal needs an external framework.

We already have too many issues and tempers flaring around even the smallest of details on the front end. By outsourcing a theme's design (CSS and JS), we would allow our community to focus instead on the integration of themes, like the future of components, and on much larger issues.

While this issue isn't about trying to add a framework to core just yet, I think it is very important to have this discussion early on. I do think that ultimately, a framework based theme in core should replace Bartik, but that won't and should not happen until D9.

Since adding an external framework base theme would be purely an API addition, there isn't anything that would prevent us from adding it in an 8.Y.0 release (solely opt-in, of course). In fact, I would strongly recommend that we add one before D9 so we can smooth out any remaining details before tackling something as gigantic as #1843798: [meta] Refactor Render API to be OO.

I have a feeling that D9 will be "the year(s) of the front end". While yes, Twig is awesome, the fact remains that the underlying theme system (and default theme) itself hasn't changed all that much and needs some serious tough love.

I believe integrating an external framework is an excellent way for us not only to reduce our technical debt and maintenance burden, but also to focus how we develop our theme system. We have an opportunity to transform the first visual impression of Drupal.

Let's do it for the 99.9998%.

Aug 07 2015

This is not an article about gamification. Unless your website is devoted to dragons, adding quests and hit points won’t improve your users’ engagement with your content. (If your website is devoted to dragons, well that’s just awesome.)

Content – be it text, images, video, data, or a combination of mediums – is the reason we build websites in the first place. It’s right there in the acronym: CMS. Drupal is a system – or more accurately, a framework – for managing content. We strongly believe that all website features, layout, and design choices must support the goal of serving your target audiences with the critical information – the content – they need to engage meaningfully with your organization.

Your content is the connection between your organizational goals and your audiences’ motivations. There’s usually a reason a piece of content is added to a website; somebody, at some point, thought it would be useful. Unless that content has meaning to your users, however, it has little value to your organization. Without a strategy guiding the creation and governance of that content, your quest, noble though it may be, is almost doomed to fail.

Fortunately, it’s not hard to start creating a strategy. We believe you can learn the foundations of content strategy for Drupal websites by breaking down what the team at BioWare did in creating Dragon Age: Inquisition.

Goals and Audiences

You have to start with goals.

BioWare’s basic goal is easy to suss out: they want to make money. To support such a massive undertaking – creating Inquisition involved programmers, writers, producers, graphic artists, vocal talent, project management, and more – the end result had to be a game that would appeal to enough people not only to pay for expenses, but to turn a profit. (We have a strong suspicion the team also wanted to turn around the negative reception Dragon Age 2 met by releasing something that would blow more than a few minds.)

There are also goals built into the game itself. When you play Inquisition, you interact with at least 100 hours of staged content in pursuit of an overall quest – not to mention the scores of side missions you can distract yourself with in the game’s world. Players become more and more invested in the game as their characters become part of the ultimate outcome of the story.

So, before you start building a website, you should define those two things for yourself:

  • What are your organization’s goals?
  • How can your website, and the content you populate it with, support those goals (by captivating and delighting your audiences)?

But who are those audiences? To create meaningful content, you need to understand who’s going to engage with it. Inquisition takes the concept of “persona” quite literally: players can customize their avatars not simply by selecting a gender, but by adding layers of tattoos, facial hair, outfits, and more.

More important, the designers recognized that different audiences will have different preferences when it comes to gameplay. Inquisition features two entirely different (but complementary) battle systems. If you want to charge right in with your longsword and the Cheese Wedge of Destiny (FYI, that’s an actual shield in the game), you’re free to wade in and hack away. At the click of a button, however, you can switch to the tactical view, stopping time to allow armchair generals to deploy their troops methodically.

In building a website, this difference might manifest itself between users who prefer to enter through your site’s home page and navigate to the content they seek, and those who enter deep within your site because a friend posted a link on Facebook.

By understanding your audiences’ motivations, you’ll be able to better craft content (and the connections between content) to serve their needs and move them toward deeper engagement.

Content Structure

While the technical aspect of content strategy is often overlooked, we as site builders know that a solid content model defining the structures and boundaries of what can be created is key to success. It’s not a problem unique to websites.

Because Inquisition is a role playing game, it features complex systems of character building, crafting, and influence. You increase your team’s abilities not just by slaying enemies and completing quests, but by learning how to make new things – usually of the shiny, sharp, or exploding variety.

For example, to create a “Mighty Offense Tonic” that will provide a damage bonus against a barrier, you need to collect, and then combine, 11 Embrium, 11 Deep Mushroom, and 2 Rashvine Nettle. Oh, and only a Warrior-class character can use it. According to one of the many Inquisition wikis, there are at least 15 potion types, each with multiple variations. Then there are the weapons, the armor, and on and on.

The amount of planning that went into the foundational structure of the game is astounding. It mostly fades into the background when you play, becoming something that you simply do.

The same has to be true for the content on your website. By and large, people don’t come to a website to admire its functionality. In fact, if they notice the structural underpinnings of your content, you’re likely doing it wrong.

Drupal provides all the tools you need to implement a rock-solid foundation. Because any entity can be fielded, careful planning and the creation of models will help you create the structures necessary to support all your content needs.
The key to content modeling is careful planning before you write a single line of code or create a single entity.

If you want to, say, store information about authors separately from blog posts – as you should – your content model should reflect the relationship between the two entities and note that the Author entity requires a display mode that includes only the Author’s name, photo, and shortened bio.

To the end user, the resulting rendered page will be seamless, with the core content enhanced by a brief, visually differentiated blurb about the author – that can be connected to the Author’s full profile and a list of all of her other blog posts; to the content administrator, updating the Author’s image happens in a single place, rather than on every piece s/he has written. This is not the sort of content relationship you want to implement after the writers have copied and pasted bios all over your website.
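In Drupal 8 terms, that relationship is typically an entity reference field plus a trimmed display mode on the Author. As a hedged sketch (the `blog_post` and `author` bundle names and the `field_author` field are invented for illustration, not a prescribed model), the configuration attaching an Author reference to a blog post might look like:

```yaml
# field.field.node.blog_post.field_author.yml -- hypothetical config
# attaching an Author reference to the Blog Post content type.
langcode: en
status: true
id: node.blog_post.field_author
field_name: field_author
entity_type: node
bundle: blog_post
label: Author
required: true
field_type: entity_reference
settings:
  handler: 'default:node'
  handler_settings:
    # Only Author nodes may be referenced.
    target_bundles:
      author: author
```

Rendering the referenced Author through a compact view mode (name, photo, shortened bio) then gives every post the same blurb from a single editable source.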

Content Organization

Once you’ve defined your structures, you need to map out how they relate to each other.

One of the first things you’ll notice about Inquisition is how vast the world is. Want to climb that mountain over there? Go ahead, as long as it’s not too steep; jumping only gets you so far. Interested in the Urthemiel Plateau? It’s all there for you to run around in (and without getting tired).

A small minority of people will likely want to explore every nook and cranny. Others will focus on the tasks at hand.

Triple-A games can’t just rely on great gameplay to make them successful. The story counts for something now, too. Inquisition has over 200 pieces of written history located throughout the world. That’s a lot of content – and it’s not all dumped on you at once.

But if getting from point A to point B is a huge drag and you hate doing it, you’re probably not going to stick around for long. Inquisition recognizes that different audience types may want different things and it rewards them both. In order to succeed, you don’t have to read all of the in-game content if you don’t want to (but you should want to). When you get tired of running around, you can use the fast travel functionality to move between major areas of the world.

Information architecture and content strategy bleed into each other. If users can’t find the content they want on your website, if the paths that benefit your organization are not well defined (and tracked and measured), then the best-written copy in the history of copy won’t help you.

Your actual content needs to remain central to the user’s experience of it. We generally recommend streamlining the information around it. How many sites have you seen that place blocks of links to other content, pop-up modal dialogues, and other distractions in your way, essentially trying to direct you away from the very reason you visited the site in the first place? Some visitors may very well want to explore your entire world, but most of them likely want a minimum of fuss as they travel toward their destinations.

As you plan your content structures, you need to keep in mind the navigational elements, the related content blocks, the search filters, and the calls to action that will transform the user’s delight in finding just the right piece of content – you did delight them, right? – into some action useful to your organization. Just try not to overwhelm them.

Is your goal to convince visitors to sign up for an e-mail list? Highlight that call to action, and minimize the secondary conversion opportunities on that page. Just be sure to tie it into your data: if you know that people who read three or more articles are more likely to sign up, then build your system to feature the related content most likely to get them to continue to read until they reach that moment of inspiration when they decide you’re a trusted source of information.


We’re creating content for people; SKYNET is still at least a few years away.

The creators of Inquisition recognized that their content strategy couldn’t end with the game’s release, so they’ve built in multiple ways to tap the increasingly social nature of the Internet.

Within the game, you can invite your friends to hack and slash at your side in a multiplayer mode, and by default, Inquisition records your gameplay: with just a few clicks, you can share that time when you wandered into the Fallow Mire with your network.

There’s also a universe of fan sites and wikis, in addition to the official community hub that collects BioWare’s own social media outreach, featuring contests, game tips, and more. They’ve embraced the knowledge that they can’t control the message around the content they’ve created (we’re looking at you, Tumblr), but are taking every opportunity to shape it.

Content strategy doesn’t stop at your website. As browsing habits shift, and more and more traffic to our sites comes from search and social media sharing (particularly with the younger generations), we need to recognize that simply creating great content isn’t enough. With so much content being generated – some estimate that 90% of the data humans have ever created was generated over the past two years – the Field of Dreams tactic won’t work.

Your content strategy needs to include active outreach and engagement, through social channels, e-mail, maybe even print. Help people find your great content, remind them that you have great content, and give them the opportunities they need to engage with it.


There are whole swaths of content strategy we haven’t covered here, but like even the most awesome multi-classed character, Drupal can’t do everything. Creating good content and managing the people responsible for it are things we can help with, provide tools for, and perhaps measure, but human systems require their own tactics.

The key is simple: plan. Put purpose behind the reason you’re building websites in the first place. As cool as it might be to create a Rule that populates a related content block based on a user’s referral path, if anyone (who’s not a developer) notices the scaffolding, you’ve probably already lost them.

Define your goals. Understand your audiences. Build the structures that allow content to connect the two. Once you come up with the plan, content strategy is nothing to be afraid of.

But if you know how to make dragon battles not absolutely terrifying, be sure to drop us a line.


With Drupal 8’s release, we’ll soon be involved in a spate of upgrades – and content migrations. Since Inquisition is third in a connected story, BioWare faced a similar challenge in getting new players up to speed.

DA and DA2 had a metric ton of content, but folks new to the series didn’t want to have to play hundreds of hours of those games to be able to understand Dragon Age: Inquisition. So, the game-makers created a web application allowing players to review all the vital decisions of previous games in a highly visual, interactive storyboard. Once players selected their actions, they could save the “world state” and import that into their new game of Inquisition, making it seem as though they’d played the previous games, when in fact, they’d basically gone through a checklist.

Sounds nice, right? As part of your content governance and migration strategy, you need to convince your stakeholders they likely don’t need absolutely everything to be moved from an old site to the new. Prioritize your content: bring over only the highest performing and most important pieces and convince your team they should rewrite – or simply kill – the rest.

Image: ©http://www.istockphoto.com/profile/komissar007

Aug 05 2015

MortenDK (geek Röyale), our favorite bleeping curmudgeon, explains, in graphic detail, the extraction of Drupal 8’s front end from its back end.
And read his Angry Themer articles in Drupal Watchdog (Yo. Subscribe: https://drupalwatchdog.com/subscribe/2015)

Aug 04 2015

You just bought a beautiful new home. You spent a lot of money, so you want to get the most out of your investment by looking for opportunities to make it income-producing while residing in it. Along comes a friend with a novel idea. A local manufacturing company exceeded its monthly waste allotment and needs a new location to store its surplus HAZMAT material. This company will pay an extraordinary amount of money – $1,000 per barrel per month – to take up what would otherwise be unused space in your basement.

So... would you do it?

The potential of a massive passive income stream can be enticing. However, in this case, the risk you’d take on is enormous. What happens if the barrels are faulty – or they leak? What if they drop in transit – and crack open? What if there’s a natural disaster – a fire or a flood? How widespread might the damage be? Would you face fines for contaminating the environment? What about the resell value of the house? And if the waste leached out, how would it affect your family’s health?

The fact is that the possible damage is so severe that most cities, states, and counties forbid HAZMAT storage in residential property.

Credit Card Processing

Storing, processing, and transmitting payment card data also come with potential risks and rewards.
On the one hand, adding an eCommerce component to a website can open up a sizable income stream, which the merchant can then use to grow the website or offset the initial build cost. However, the effects of a credit card breach can be fatal to a business: customers may lose trust and take their business elsewhere; credit card companies may lose money from fraudulent charges; and the merchant may be required to pay for significant upgrades without the budget to do so – as well as face fines as high as $200 per affected credit card transaction.


Similar to managing hazardous materials, a single breach wipes out gains and even leaves the merchant stuck cleaning up the mess.

Payment Card Industry Data Security Standard (PCI-DSS)

When done correctly and securely, everyone benefits from allowing cardholders to pay for goods and services online: merchants can grow their business; credit card companies can collect their transaction fees; and customers get the convenience of paying online.

So to make life less onerous for merchants, the Payment Card Industry created a Data Security Standard (typically referred to as PCI-DSS or “PCI compliance”). The list of requirements is intended to secure payment transactions end-to-end across all systems. (For more detail, please visit PCISecurityStandards.org or drupalpcicompliance.org.)

Returning now to the HAZMAT analogy, there are certain standards that have to be met (by law) to move and/or store waste: you can only transport it with certain types of vehicles, which can only be operated by trained drivers; the place you store it must have appropriate ventilation and drainage; the containers themselves must meet a certain standard and be compatible for the type of waste being stored; and on and on.

Is Drupal Immune?

One of the benefits of open source software like Drupal is that the source code is out there for everyone to review, verify, and improve. However, this by itself doesn’t mean that Drupal is bug-free and without vulnerabilities.
That reality hit home with the disclosure of SA-CORE-2014-005, where a single line of code exposed a highly critical vulnerability.

How critical?

A specifically crafted page request could provide full admin access in one page load. The implications for the Drupal eCommerce community were immense. Any site vulnerable to this attack could have had a key logger placed on a payment page.

Fortunately, despite widespread reports of Drupal sites getting hacked, we did not see a rise in reports of stolen credit card data. The point is that, while it’s not common to hear of Drupal eCommerce sites being breached, these attacks are possible.

Strategy for Success

Doom and gloom aside, what is a responsible Drupalista to do? Thankfully, it’s possible to significantly – if not completely – reduce your risk while still retaining the ability to accept payments online.

Let's revisit the HAZMAT analogy. Suppose instead of storing the material in the basement, you simply used your driveway as a temporary inspection point on the way to an offsite storage facility. While a leak would still be damaging, it’s less likely to get in your home – and the total quantity that could be leaked is reduced as well.

Let’s take the analogy one step further. Suppose you discover that, for a small monthly fee, you can rent a secure facility where you can perform the inspections. You’ve significantly reduced your overall risk because the only way to expose your home is through trace amounts on your clothing brought back from the office.
And finally, to completely eliminate the risk, you could simply outsource the responsibility and collect money as an arbiter brokering the deal.

What does this have to do with eCommerce? The same levels of risk apply. If you store credit cards within the Drupal database, you take on the most risk and are subject to the largest quantity of PCI compliance security controls (384). If you only let cards pass through (the driveway example), you’ve significantly reduced the magnitude of the exposure, but there is still risk. If you start pushing the payments off-site, you’ve all but eliminated the risk except for some edge case situations. And finally, if you fully outsource your store, you eliminate all the risk – as well as a portion of your income potential, of course.

The following table summarizes this comparison as well as the associated PCI SAQ levels, number of security controls, and typical costs of compliance.

| PCI Type | Payment Gateway Type | HAZMAT Analogy      | Number of Security Controls | Estimated Cost to Achieve Compliance |
|----------|----------------------|---------------------|-----------------------------|--------------------------------------|
| SAQ D    | Storing Cards        | Storing Waste       | 384                         | $100,000+                            |
| SAQ C    | Merchant Managed     | Driveway Inspection | 139                         | $100,000                             |
| SAQ A-EP | Shared Management    | Offsite Office      | 139                         | $30,000                              |
| SAQ A    | Wholly Outsourced    | Arbiter             | 15                          | $1,000                               |

The key takeaway: whenever possible, choose a payment gateway that reduces your overall exposure. You’ll still be able to run a successful eCommerce store (and receive the upside) while limiting the damage of an attack (and thus minimizing the downside).

A Deeper Dive

For those wanting to go further, please read the Drupal PCI Compliance white paper.

Image: “Marines qualify as HAZMAT techs” by mcas_cherry_point is licensed under CC BY-NC-SA 2.0

