Feb 27 2017
In January of 2015, Accel-KKR, a private equity firm, combined Ektron and the Swedish company Episerver into a single company and CMS platform. This has caused many organizations to choose to migrate off the Ektron platform and onto a CMS like Drupal. Two factors are driving this trend: the high cost of an Episerver license upgrade, and the fact that the open source landscape has evolved significantly over the prior decade, to the point where many enterprise organizations (from private and public corporations to government entities) have embraced Drupal and the open source community.
Feb 27 2017

Our solution for advanced configuration management workflows has just become more powerful! The core of what makes Config Split work so nicely with drush and the Drupal UI has been split off into a new module: Config Filter! Config Filter exposes a ConfigFilter plugin type and swaps the sync storage, applying all the active ConfigFilters to it. This means it is no longer necessary to manually swap the service, as we recommended in the past, and it also works when installing a site, because the swapping happens only once the module is installed. As of drush 8.1.10 this works with the default config import and export commands.
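As a rough sketch of what such a plugin can look like (the class, annotation keys and method names below are assumptions based on the plugin pattern, not verified against the module's final API), a custom filter might be as simple as:

```php
<?php

namespace Drupal\my_module\Plugin\ConfigFilter;

use Drupal\config_filter\Plugin\ConfigFilterBase;

/**
 * Hypothetical filter that keeps a noisy config object out of exports.
 *
 * @ConfigFilter(
 *   id = "my_module_hide_noise",
 *   label = @Translation("Hide noisy config"),
 *   weight = 0
 * )
 */
class HideNoiseFilter extends ConfigFilterBase {

  /**
   * Intercepts writes to the sync storage.
   */
  public function filterWrite($name, array $data) {
    if ($name === 'my_module.noisy_settings') {
      // Returning NULL keeps the object out of the export.
      return NULL;
    }
    return $data;
  }

}
```

Because all active filters are applied to the swapped sync storage, a filter like this takes effect for the UI, drush and the install process alike.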

Update Config Split

For current users of the Config Split module, this means removing the custom service swapping as part of the update and then applying the database updates, which will install Config Filter (you already downloaded it with composer update, right?).
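Assuming a standard Composer-based setup, the update could look something like this (commands as of early 2017; adjust for your own project layout):

```shell
# Fetch the new Config Split release; Composer pulls in Config Filter
# as a dependency.
composer update drupal/config_split --with-dependencies

# Apply the database updates, which install Config Filter.
drush updb -y

# Finally, remove the manual swap of the sync storage service from your
# settings.php / services.yml, as it is no longer needed.
```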

New command-line options and new workflows

With the refactoring we also changed the options of the drush and console commands. We added a new option for specifying a split and then exporting to and importing from only that split's directory; this supersedes the previous options for specifying separate export destinations and import sources. Those are not needed for the simple workflow we advocate, and their behaviour can easily be achieved by modifying settings.php as needed.

With the addition of the single split export, a new workflow becomes possible: export only the "production" split, before importing the whole configuration which will have it merged back in.
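In practice, that workflow might look like the following sketch (the `csex` split-export command and its alias are taken from the module's drush integration; exact names may differ between releases):

```shell
# Export only the "production" split to its configured directory.
drush csex production

# Then import the whole configuration; the production split is merged
# back in by the filtered sync storage.
drush cim -y
```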

Improved Graylists

In addition to the new architecture, the graylist feature has been improved: dependencies can now be graylisted as well, and optionally only items that differ from the sync storage are split off. More details and more documentation will come in the future.


The ultimate goal is to improve the export API in Drupal 8 core so that more advanced workflows are possible. Config Filter is our proposal for a solution in contrib, but we feel that this functionality should belong to Core.

On the way to Core, here are some more immediate steps:

  • Clean up after the big refactoring, adding automated tests for some special cases.
  • Improve the documentation.
  • Integrate with Config Readonly to allow locking the site except for the configuration defined in splits.
  • For those that want to do configuration management without seatbelts, integrate Config Ignore with Config Filter so that it can work smoothly next to Config Split.
  • ...and eventually propose Config Filter for inclusion in Core!

More information and acknowledgments

The re-factoring started during the Drupal Mountain Camp in Davos and I would like to thank the organizing committee for the great event. On Friday I gave a presentation about my favorite topic in Drupal: Configuration Management, attached here are the slides based on the Nuvole presentation at DrupalCon Dublin 2016.

For any further information see the module pages for Config Split and Config Filter and their issue queues.

Feb 27 2017

People in auditorium

Last weekend the Florida Drupal community hosted its ninth annual Florida DrupalCamp (FLDC) at Florida Technical College in Orlando. It was a great success. At this point, we (the organizers) have a pretty good idea how to put on this event. Since a good number of us have been involved in all nine iterations (and all still seem to like each other!), some aspects of the camp organizing process are on veritable auto-pilot.

If you're a camp organizer, you know what goes into securing sponsors and a location, spreading the word, finding presenters, arranging for swag and catering, etc… For this year's event, we wanted to step things up a bit by focusing on three things: more learning, more fun, and more networking. Based on feedback and our own experiences, we feel like we achieved all three goals with some changes from previous years' camp recipes.

More Learning

In the past, FLDC has been a 2-day event; Saturday has been the main event - a day full of sessions, while Sunday has been a "community day" with sprints, Coding for a Cause, BoFs, and a generally less-defined schedule. 

This year we stepped it up to a full-on three-day event: full-day training workshops on Friday, sessions on Saturday and Sunday, as well as a professionally mentored code sprint on Sunday.

Adam Bergstein session

With the generosity of five trainers, we were able to provide full-day workshops to almost 80 people on Friday. Many, many, many thanks to our trainers:

Code sprint

The other change to facilitate more learning was to provide experienced sprint mentors for our Sunday sprint. With a nudge or two (maybe three) from xjm, we decided to hold a Major Issue Triage Sprint at this year's camp. We brought in YesCT and nerdstein to mentor the sprinters, to great success. During the Sunday sprint, we made progress on well over 20 Drupal core major issues.

Finally, we decided to bring in Kevin Thull (@kevinjthull) to handle all the session recording, processing, and uploading for the event. Kevin's work recording other Drupal events is top-notch. Having all of our sessions recorded will allow those who weren't able to attend (or those who did attend but couldn't make a particular session) to benefit from our speakers. All of the sessions are currently posted on our website schedule, as well as on our YouTube channel.

More Networking

Traditionally, we have always had a full-day beginner track on the main day of the camp. This had the effect of keeping the vast majority of the beginners in the same room all day - limiting their networking opportunities. By having a full-day introductory workshop on Friday, beginners became fully-involved in camp activities the rest of the weekend. The feedback we received from this change is overwhelmingly positive.

The other big-ish change aimed at more networking was to increase the time between sessions to a full 30 minutes. This had the side effect of fewer sessions on Saturday, but we mitigated that by adding a half-day of sessions on Sunday. Again, the response we received to this change was overwhelmingly positive. It provided a less "rushed" day for all attendees, gave presenters a little more leeway if they ran a few minutes over their allotted window, and provided ample time for the hallway track. In addition, our sponsors loved this change, as it gave attendees more time to stop and chat with sponsors in our exhibition area.

During the closing session, we announced that any leftover funds from the event can (and should!) be used by Florida local meetup organizers to promote and grow their local meetups. In the past, we've informally made these funds available, but we're going all-in this year. If you're a Florida Drupal meetup organizer, we have $$$ for you for memberships, food for your meetups, or just about anything else that will help you grow your local community. Look for more details soon.

More Fun

While Drupal events tend to be very positive experiences for all attendees, we made the conscious decision to ensure that we maximized the enjoyment of our attendees while we kept the Drupal knowledge flowing. It was also very important to us that we make a positive first impression on those new to Drupal and attending one of their first Drupal events. We wanted folks to leave the event with positive feelings for the Florida Drupal community.

Networking at the after party.

To this end, we increased the fun factor in several ways. First off, we further "official-ized" our Friday night dinner gathering. Over the past few years, organizers and a few others have always gathered at Bubbalou's, a local BBQ restaurant just down the street from the camp venue, after prepping the venue for the event. This year, with the full-day trainings on Friday, we invited everyone to meet up at the restaurant for a pay-on-your-own dinner. This turned into a wonderful, casual evening for well over 75 people. The restaurant has a large indoor/outdoor picnic table eating area, perfect for networking.

Suzanne Dergacheva with Skywalker

On Saturday, we set the tone early and put smiles on people's faces from the very start. We worked with the folks from Gatorland to have a professional animal handler and a friendly alligator named Skywalker on-hand to greet guests when they arrived. Many attendees went for a cuddle with Skywalker even before getting coffee. 

Druplicon chocolates

Additionally, we wanted to provide unique swag to our attendees. A former attendee of previous Florida DrupalCamps, Kathryn Neel, is now running a custom chocolate business, Sappho Chocolates. We worked with her company to create Druplicon chocolates for all attendees. As part of the process, we funded the development of the custom molds for the Druplicon chocolates, so if any other Drupal event organizers want to order chocolates for their event, the molds are there for your use!

Finally, for the last Saturday session, we went with a single session of lightning talks. This ended up being one of the highlights of the weekend, as we had some amazing 5-minute talks. With most of the camp attendees in the room, it left everyone with smiles on their faces and provided a great transition to the closing session and after party.

Sponsor Happiness

Most camp organizers always have trepidation about camp finances right up to the actual event. Sponsor needs have evolved as the Drupal community has evolved, and camp organizers have to evolve as well. Today's Drupal event sponsors are more focused on getting a return-on-investment from their sponsorship dollars than ever before, so it is up to event organizers to do everything they can to make that happen. 

We provided more time for attendees to visit sponsor tables, gave Platinum and Gold sponsors the opportunity to receive an opt-in version of our attendee list, and delivered all of the other "standard" sponsor benefits; the feedback from our sponsors has been very positive. Additionally, we gave Platinum and Gold sponsors the opportunity to place text ads in our emails to attendees leading up to the camp.

Achieve Agency logo

Our Platinum sponsor, Achieve Agency, is a newly-formed Florida digital agency based in West Palm Beach. John Studdard, their COO (and formerly of Big Couch Media), has been a long-time FLDC supporter and attendee, and wanted to use FLDC as a vehicle to announce Achieve Agency to the community. As event organizers, we couldn't be happier that our top-level sponsor is a Florida-based organization.

Featured Speakers

Megan Sanicki at FLDC17.

Keeping with our tradition of not having a single keynote speaker (mainly because our venue's auditorium can't accommodate all attendees), we invited three featured speakers to this year's event. We were lucky to get our top three choices - check out their (recorded) sessions below:

Room for Improvement

As always, our post-camp organizer retrospective highlights a few areas we need to address for next year's event.

  • Have a printer on-hand for printing emergencies.  
  • Find a volunteer to arrange for food donations for left-overs.  
  • Figure out a magical solution for badges that will make everyone happy.  
  • Find a new official hotel: We had a fair number of complaints.  
  • Figure out a way to improve the slow service at the after-party.  

Quick hits

  • Camp attendance was a bit more than last year. Overall, we had 251 registered attendees. A bit more than 100 participated on Friday, almost 200 participated on Saturday, and about 100 participated on Sunday.  
  • Camp budget was about $18,000; the majority went to catering, but we also had significant costs for featured speaker and sprint lead travel, and for swag.  
  • We had a total of 17 fiscal sponsors, which provided about two-thirds of our income.  
  • The registration fee for attendees was $35 ($25 early-bird), with an optional $25 individual sponsorship. We had 41 individual sponsors.  
  • Our camp organizing volunteers are way better than yours.  
  • We invested a bit this year in large signage that we can re-use in future years.  
  • Another change this year - volunteers got in for free (sorry this took so long).  
  • We had attendees from 9 countries and 19 states.  


What more can we say? Florida DrupalCamp 2017 was a huge success. Our volunteers are the best, our sponsors are the best, and the Florida community is the best!

Be on the lookout for an announcement about the dates for Florida DrupalCamp 2018 by joining our mailing list and following us on Twitter!

Feb 27 2017

Dates have always been a tricky thing to manage in Drupal, and even in PHP. PHP 5.2 introduced the DateTime class (with DateTimeInterface following in PHP 5.5), which makes handling dates, date ranges, intervals, comparisons, etc. much easier. However, we still have the complications of data storage, formatting and managing different timezones.

In this article we are going to look at how we can run some entity queries in Drupal 8 using the Date field in our conditions. The requirement of returning entities which have a date field with a value between certain hours is definitely not an edge case, and although it seems like an easy task, it can be tricky.

Query for entities using dates in Drupal 8

Imagine a simple date field on the Node entity which stores date and time. By default in Drupal 8, the storage for this date is in the format Y-m-d\TH:i:s and the timezone is UTC. However, the site timezone is rarely UTC and we very well may have users choosing their own timezones. So we need to construct our node queries carefully if we want reliable results.

Running a db_query() type of query to return nodes with the date in a certain range would be a pain at best and impossible at worst. Luckily, though, in Drupal 8 we can - and should always try to - rely on the entity.query service when looking for entities.

So let's see a couple of examples.

First, an easy one: how do we query for all the nodes which have a field_date value in the future?

use Drupal\Core\Datetime\DrupalDateTime;

$now = new DrupalDateTime('now');
$query = \Drupal::entityQuery('node');
$query->condition('field_date', $now->format(DATETIME_DATETIME_STORAGE_FORMAT), '>=');
$results = $query->execute();

A few things to notice. First, we are using the Drupal wrapper of \DateTime and constructing an object to represent our current time. Then we create our entity query and for the date field condition we pass the storage format so that it can be compared to what is being stored. And the regular operators here allow us to find the right entities.

There is one problem with this though. When creating the DrupalDateTime, the site default timezone is used. So if our timezone is not UTC, the query will suffer because we are essentially comparing apples with oranges. And the further away from UTC we are, the more apples start to become compared to cars and airplanes.

To fix this, we need to set the timezone to UTC before running the query.

$now = new DrupalDateTime('now');
$now->setTimezone(new \DateTimeZone(DATETIME_STORAGE_TIMEZONE));

And then use $now in the query. The subtle difference to understand is that we are creating $now relative to where we are (the site timezone), because we are interested in finding nodes in the future from our point of view, not from another timezone. We then convert it so that the values can be compared properly in the query.

A more complex example could be a range of times. Let's say we want all the nodes with the time of today between 16:00 and 18:00 (a 2 hour span).

I prefer to work directly with \DateTime and then wrap it in the Drupal wrapper, just because I can have all the native methods highlighted by my IDE. So we can do something like this:

$timezone = drupal_get_user_timezone();
$start = new \DateTime('now', new \DateTimeZone($timezone));
$start->setTime(16, 0);
$start->setTimezone(new \DateTimeZone(DATETIME_STORAGE_TIMEZONE));
$start = DrupalDateTime::createFromDateTime($start);

$end = new \DateTime('now', new \DateTimeZone($timezone));
$end->setTime(18, 0);
$end->setTimezone(new \DateTimeZone(DATETIME_STORAGE_TIMEZONE));
$end = DrupalDateTime::createFromDateTime($end);

$query = \Drupal::entityQuery('node')
  ->condition('field_date', $start->format(DATETIME_DATETIME_STORAGE_FORMAT), '>=')
  ->condition('field_date', $end->format(DATETIME_DATETIME_STORAGE_FORMAT), '<=');
$results = $query->execute();

So first we get the user timezone: drupal_get_user_timezone() returns the string representation of the timezone the current user has selected or, if they haven't selected one, the site default timezone. Based on that, we create our native date object representing the current point in time, and set the actual time to 16:00. After that we set the storage timezone and create our Drupal wrapper so that we can format the value for the query.

For the end date we do the same thing, but set a different time. Then we write our query conditions as expected and ask for the entities which have a date between those two times.

The order of setting the time and timezone on the date object is important. We want to set the time before we set the timezone because the times we are looking for are relative to the current user, not to the storage timezone.
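A quick standalone illustration of why the order matters (plain PHP, with a fixed date so the output is deterministic; the timezone and times are just examples):

```php
<?php

// A user in New York wants content from 16:00 *their* time.
$d = new \DateTime('2017-02-25', new \DateTimeZone('America/New_York'));

// Set the local time first...
$d->setTime(16, 0);

// ...then convert to the storage timezone for the query.
$d->setTimezone(new \DateTimeZone('UTC'));

echo $d->format('Y-m-d H:i'); // 2017-02-25 21:00 (16:00 EST is 21:00 UTC)

// Doing it in the other order would give 16:00 UTC instead, which is
// 11:00 in New York - five hours off from what the user asked for.
```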

So that is pretty much it. Now you can query for entities and play with date fields without issues (hopefully).

Feb 25 2017
Panels style plugins are made for wrapping panel panes and panel regions in extra markup. 99% of your needs are covered by the "Panels Extra Styles" module; please look at that module if you need extra styles for panels. But if you need some specific style, you can easily implement it yourself.

In this tutorial we will create a style plugin for a region or pane. It will allow site builders to wrap a region or pane in custom markup entered in a settings form.

1. Create a directory example_panels_module and put there a file example_panels_module.info:

name = Example panels module
description = Provides example implementation of panels plugins
core = 7.x

; Since we're using panels for creating
; plugins we have to define this
; dependency.
dependencies[] = panels
2. Tell ctools where it has to search plugins (file example_panels_module.module):

/**
 * Implements hook_ctools_plugin_directory().
 *
 * Integrate our module with Ctools. Tell where
 * Ctools has to search plugins. Please note that
 * we should return path to plugins directory only
 * if $module equals 'panels'.
 */
function example_panels_module_ctools_plugin_directory($module, $plugin) {
  if ($module == 'panels' && !empty($plugin)) {
    return "plugins/$plugin";
  }
}
3. Create a directory plugins/styles and put there a plugin file (e.g. example_panel_style.inc):

/**
 * @file
 * 'Example panel style' style.
 */

$plugin = [
  // Plugin title.
  'title' => t('Example panel style'),
  // Plugin description.
  'description' => t('Raw HTML wrapper.'),
  // Render region callback.
  'render region' => 'example_panels_module_raw_wrapper_render_region',
  // Render pane callback.
  'render pane' => 'example_panels_module_raw_wrapper_render_pane',
  // Region settings form.
  'settings form' => 'example_panels_module_raw_wrapper_region_settings_form',
  // Pane settings form.
  'pane settings form' => 'example_panels_module_raw_wrapper_pane_settings_form',
];

/**
 * Region settings form callback.
 */
function example_panels_module_raw_wrapper_region_settings_form($settings) {
  // Define a settings form with prefix and suffix text areas
  // for region style.
  $form['wrapper_region_prefix'] = [
    '#type' => 'textarea',
    '#title' => t('Region wrapper prefix'),
    '#default_value' => !empty($settings['wrapper_region_prefix']) ? $settings['wrapper_region_prefix'] : '',
  ];

  $form['wrapper_region_suffix'] = [
    '#type' => 'textarea',
    '#title' => t('Region wrapper suffix'),
    '#default_value' => !empty($settings['wrapper_region_suffix']) ? $settings['wrapper_region_suffix'] : '',
  ];

  return $form;
}

/**
 * Region render callback.
 *
 * Please note that it's a theme function
 * and has to start with 'theme_' prefix.
 */
function theme_example_panels_module_raw_wrapper_render_region($vars) {
  $output = '';

  // Variable $vars['panes'] contains an array of all
  // panel panes in current region. Collect them into
  // variable.
  foreach ($vars['panes'] as $pane) {
    $output .= $pane;
  }

  // Variable $vars['settings'] contains settings
  // entered in settings form. Wrap region content
  // into custom markup.
  return $vars['settings']['wrapper_region_prefix'] . $output . $vars['settings']['wrapper_region_suffix'];
}

/**
 * Pane settings form callback.
 */
function example_panels_module_raw_wrapper_pane_settings_form($settings) {
  // Define a settings form with prefix and suffix text areas
  // for pane style.
  $form['wrapper_pane_prefix'] = [
    '#type' => 'textarea',
    '#title' => t('Pane wrapper prefix'),
    '#default_value' => !empty($settings['wrapper_pane_prefix']) ? $settings['wrapper_pane_prefix'] : '',
  ];

  $form['wrapper_pane_suffix'] = [
    '#type' => 'textarea',
    '#title' => t('Pane wrapper suffix'),
    '#default_value' => !empty($settings['wrapper_pane_suffix']) ? $settings['wrapper_pane_suffix'] : '',
  ];

  return $form;
}

/**
 * Pane render callback.
 *
 * Please note that it's a theme function
 * and has to start with 'theme_' prefix.
 */
function theme_example_panels_module_raw_wrapper_render_pane($vars) {
  // Variable $vars['settings'] contains settings
  // entered in settings form. Variable
  // $vars['content']->content is a renderable array
  // of a current pane. Wrap pane content
  // into custom markup.
  return $vars['settings']['wrapper_pane_prefix'] . render($vars['content']->content) . $vars['settings']['wrapper_pane_suffix'];
}
For now we have a module with the following structure:
example_panels_module
  |__ plugins
  |  |__ styles
  |  |  |__ example_panel_style.inc
  |__ example_panels_module.info
  |__ example_panels_module.module
When it's enabled you will be able to set up a style for a region or a pane. We've created a style for both of them, but you can define a style only for regions or only for panes.

Let's test our style. I want to edit the node/%node page: wrap the content region in a div tag with the class "content-region-wrapper", and wrap the node_being_viewed pane in a span tag with the id "pane-node-view-wrapper". Here's how I can do that:

Change style for a region

Select newly created style

Set up prefix and suffix for a region

In the same way I've set up a prefix and suffix for a panel pane, and here's what I've got:

Result markup

Key notes:

Feb 25 2017

We at The Jibe, and all of us in the Vancouver Drupal community, are happy to host the Pacific Northwest Drupal Summit this weekend in Vancouver. I am personally very excited to be presenting at this event for the fourth time -- professing my love for theme libraries and all they let us achieve in Drupal 8.

This presentation was inspired by a blog series that we recently published on the Acquia Developer Center blog. Whether you couldn’t make it up to Vancouver this year or you just had to see another great session, I have assembled some assets and links that will fill you in.

Feb 25 2017


I have been working with Drupal for over six years now, for much of that period exclusively working with Drupal as an employee or in a freelance/contract role.

Prior to the start of Drupal 8, the nearest I managed to getting off the island was periods of doing something completely different (Expression Engine, Django, a custom PHP framework, attending a Silverstripe conference), usually because of an existing or inherited project connected with the Drupally people I was working with.

These sabbaticals were reasonably short but often instructive and enjoyable; in some cases (not all) the feeling was 'this could have been done better in Drupal'. Either way, they were a complete break from Drupal.

I have just finished ten months of working on a Symfony based project and things are very different. 

Bursting the Drupal Bubble

For almost a year, I have not been doing Drupal, but also so busy and involved learning more about Symfony, Angular 2 and other technologies that I have not had time or capacity to attend Drupal events, meetups etc. 

Although I was working with a group of people that had all used Drupal a lot before (amongst other things), Drupal was very rarely mentioned; not one person said 'this could have been done better in Drupal'. In fact, the only time it really came up at all was in reference to other client systems that were integrating with what we were building.

I needed to burst the bubble, forget about Drupal and, whilst working on something different, spend my time thinking about that instead.

The bubble I am referring to is thinking about everything in the context of Drupal, even the proudly invented elsewhere bits in Drupal 8. This kind of thinking can perhaps have the same bad effects as the echo chambers on social networks where everyone you are connected to has essentially the same or similar references and experiences.

There is a possibility that the next contract I do will not be Drupal either.

Drupal is still there

Using Symfony and anything built on Symfony components (Laravel may be next) still increases knowledge and skills related to Drupal (before Drupal 8, anything done elsewhere just meant Drupal-relevant knowledge was fading over time). Having used the Symfony console a lot and written commands for it, I will be pouncing on the Drupal Console when I do my next piece of Drupal work, for example (which is more than I can say for Drush).

I am clearer about what I think Drupal's strengths are, though. I currently have two personal projects I want to start/finish; one is a match for Drupal, one for Symfony.


Mileage may differ for other people. I found I had to put Drupal to one side for a while; it is all very well getting off the island, but that is not quite so effective if you remain in an island-culture ghetto somewhere on the mainland and don't fully integrate for a while. Some people will have no need to ever get off at all; we all have different stories to tell.

However if you do come across some work that would be better done in something else, will you even know? If you do know will you have the courage to burst the bubble?

Feb 25 2017

Introducing the Commerce Abandoned Carts module

Users abandoning their shopping carts before completing the checkout process results in potential lost sales. The Commerce Abandoned Carts module allows you to automate the sending of emails to users who have abandoned their carts, reminding them of the incomplete checkout and/or providing additional information.


Install and enable the module normally.

drush en commerce_abandoned_carts -y


When the module is first installed it is in TEST mode. While in TEST mode, email will be sent to the configured test email address, or to the site email address if this is empty. Also, while in test mode the database records are not updated, so the same test emails will be sent on each cron run.


Commerce Abandoned Carts module settings

To access the settings navigate to STORE→CONFIGURATION→COMMERCE ABANDONED CARTS (admin/commerce/config/abandonded_carts)

  • Send Timeout: This is the amount of time to wait before a cart is considered abandoned (in minutes). For example, the default setting of 1440 minutes will wait one day from the time a cart is first created before sending the message.
  • History Timeout: This setting is used to prevent the module from sending messages to users who have created their cart too long ago. The default setting of 21600 minutes will prevent the messages from being set for carts that are more than 15 days old.
  • Abandoned Statuses: Check all of the cart/order statuses to be considered not completed. When the module runs, it will query for carts/orders that meet the above setting requirements, are in one of these checked statuses, and where the user has entered an email address.
  • Batch Limit: This limits the number of emails that will be sent per cron run. A higher number will use more resources per cron run. If you set this to a lower number for efficiency, then you should set your cron to run multiple times per day to keep processing abandoned carts.
  • From Email Address: This is the email address that the abandoned cart messages will be sent from. Leave blank to use the site email address.
  • From Email Name: This is the name that the abandoned cart messages will be sent as. Leave blank to use the site name.
  • Email Subject: This is the subject line that will be displayed on the abandoned cart messages.
  • Customer Service Phone Number: Optional phone number to be included in the phone number variable to be used in the email template. Leave blank to omit.
  • Enable Test Mode: Check the test mode box to enable test mode. While in test mode, all abandoned cart messages will be sent to the test email address entered, or to the site email address if left blank. No database updates will be made while in test mode, which means the same emails will be sent during each cron run. Once test mode is disabled, emails will be sent to the user/customer email and the database will be updated so that only one email is sent per cart/order.


Email can only be sent if the user has entered their email address during the checkout process. Obviously, we can't send an email to a user if we don't have their email address.

Proper cron setup is required. See the cron notes below.


Proper setup of your site's cron system is required for proper functionality. We recommend not using Drupal's built-in "poor man's cron" but rather setting up a cron task on your host/server.

When setting up your cron task it should run at least as often as your "send timeout" setting or your messages will not send as often as you'd like.
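For example, a crontab entry that runs Drupal cron every hour (the paths, site root and URL below are placeholders; adjust for your server) comfortably covers the default 1440-minute send timeout:

```shell
# Run Drupal cron hourly via Drush.
0 * * * * /usr/local/bin/drush --root=/var/www/mysite --uri=https://example.com cron
```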

More information about setting up cron on Drupal.

Email Template:

The email message sent to users can be easily customized by copying the files from the module's theme directory into your default theme directory. Then clear the site's caches and customize the template as needed.
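As a sketch (a Drupal 7 sites/all layout and a theme named "mytheme" are assumed here; the template filenames come from the module's theme directory):

```shell
# Copy the module's email templates into your default theme, then
# clear caches so Drupal picks up the overrides.
cp sites/all/modules/commerce_abandoned_carts/theme/*.tpl.php sites/all/themes/mytheme/
drush cc all
```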

NOTE: See the template file for further information and available variables. You can also override the commerce_abandoned_carts_theme() function to further customize if needed.

Feb 24 2017


One of the problems with Drupal distributions is that they, by nature, contain an installation profile — and Drupal sites can only have one profile. That means that consumers of a distribution give up the ability to easily customize the out of the box experience.

This was fine when profiles were first conceived. The original goal was to provide “ready-made downloadable packages with their own focus and vision”. The out of the box experience was customized by the profile, and then the app was built on top of that starting point. But customizing the out of the box experience is no longer reserved for those of us that create distributions for others to use as a starting point. It’s become a critical part of testing and continuous integration. Everyone involved in a project, including the CI server, needs a way to reliably and quickly build the application from a single command. Predictably, developers have looked to the installation profile to handle this.

This practice has become so ubiquitous, that I recently saw a senior architect refer to it as “the normal Drupal paradigm of each project having their own install profile”. Clearly, if distributions want to be a part of the modern Drupal landscape, they need to solve the problem of profiles.

Old Approach

In July 2016, Lightning introduced lightning.extend.yml which enabled site builders to:

  1. Install additional modules after Lightning had finished its installation
  2. Exclude certain Lightning components
  3. Redirect users to a custom URL upon completion

This worked quite well. It gave site builders the ability to fully customize the out of the box experience via contrib modules, custom code, and configuration. It even allowed them to present users with a custom “Installation Done” page if they chose — giving the illusion of a custom install profile.

But it didn’t allow developers to take full control over the install process and screens. It didn’t allow them to organize their code the way they would like. And it didn’t follow the “normal Drupal paradigm” of having an installation profile for each project.

New Approach

After much debate, the Lightning team has decided to embrace the concept of “inheriting” profiles. AKA sub-profiles. (/throws confetti)

This is not a new idea, and we owe a huge thanks to those who have contributed to the current patch and kept the issue alive for over five years. Nor is it a done deal: it still needs to get committed, which, at this point, means Drupal 8.4.x.

On a technical level, this means that — similar to sub-themes — you can place the following in your own installation profile’s *.info.yml file and immediately start building a distribution (or simply a profile) on top of Lightning:

base profile:
  name: lightning

To encourage developers to use this method, we will also be including a DrupalConsole command that interactively helps you construct a sub-profile and a script which will convert your old lightning.extend.yml file to the equivalent sub-profile.

This change will require some rearchitecting of Lightning itself, mainly to remove the custom extension-selection logic we had implemented and replace it with standard dependencies.

This is all currently planned for the 2.0.5 release of Lightning, which is due out in mid-March. Stay tuned for updates.

Feb 24 2017
Feb 24

Another day, another Acquia Developer Certification exam review (see the previous one: Certified Back End Specialist - Drupal 8). I recently took the Front End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on it below.

Acquia Certified Front End Specialist - Drupal 8 Exam Badge

Now that I've completed all the D8-specific Certifications, I think the only Acquia Certification I haven't completed is the 'Acquia Cloud Site Factory' Exam—one for which I'm definitely not qualified, as I haven't worked on a project that uses Acquia's 'ACSF' multisite setup (though I do a lot of other multisite and install profile/distribution work, just nothing specific to Site Factory!). Full disclosure: Since I work for Acquia, I am able to take these Exams free of charge, though many of them are worth the price depending on what you want to get out of them. I paid for the first two that I took (prior to Acquia employment) out of pocket!

Some old, some new

This exam feels very much in the style of the Drupal 7 Front End Specialist exam—there are questions on theme hook suggestions, template inheritance, basic HTML5 and CSS usage, basic PHP usage (e.g. how do you combine two arrays, in what order are PHP statements evaluated... really simple things), etc.

The main difference with this exam centers on the little differences in doing all the same things. For example, instead of PHPTemplate, Drupal 8 uses Twig, so there are questions relating to Twig syntax (e.g. how to chain a filter to a variable, how to print a string from a variable that has multiple array elements, how to do basic if/else statements, etc.). The question content is the same, but the syntax is what you would use in Drupal 8. Another example is theme hook suggestions—the general functionality is identical, but there were a couple of questions centered on how you add or use suggestions specifically in Drupal 8.
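To make the kinds of constructs those questions cover concrete, here is a sketch of the corresponding Twig syntax (variable and field names are made up for illustration):

```twig
{# Chain a filter to a variable #}
{{ node.title.value|upper }}

{# Print one element from a variable that holds multiple array elements #}
{{ content.field_image }}

{# Basic if/else, with translatable strings via the t filter #}
{% if logged_in %}
  <p>{{ 'Welcome back'|t }}</p>
{% else %}
  <p>{{ 'Please log in'|t }}</p>
{% endif %}
```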

The main thing that tripped me up a little bit (mostly due to my not having used it much) is the new JavaScript functionality and theme libraries in Drupal 8. You should definitely practice adding JS and CSS files, and also learn about the differences in Drupal 8's JavaScript layer (things like 'use strict';, how to make sure Drupal.behaviors are available to your JS library, and the like).
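For practice context: in Drupal 8, CSS and JS are attached through asset libraries declared in a theme's *.libraries.yml file. A minimal sketch (the theme, library and file names here are hypothetical):

```yaml
# mytheme.libraries.yml
slider:
  version: 1.x
  css:
    theme:
      css/slider.css: {}
  js:
    js/slider.js: {}
  dependencies:
    - core/drupal
    - core/jquery
```

The library is then attached via `{{ attach_library('mytheme/slider') }}` in a Twig template or a `libraries:` entry in mytheme.info.yml, and js/slider.js would typically wrap its code in a `Drupal.behaviors` entry so it also runs on AJAX-inserted content.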

I think if you've built at least one custom theme with a few JavaScript and CSS files, and a few custom templates, you'll do decently on this exam. Bonus points if you've added a JS file that shouldn't be aggregated, added translatable strings in both Twig files and in JS, and worked out the differences between the Stable and Classy themes in Drupal 8 core.

For myself, the only preparation for this exam was:

  • I've helped build two Drupal 8 sites with rather complex themes, with many libraries, dozens of templates, use of Twig extends and include syntax, etc. Note that I was probably only involved in theming work 20-30% of the time.
  • I built one really simple Drupal 8 custom theme for a photo sharing website (closed to the public).
  • I read through the excellent Drupal 8 Theming Guide by Sander Tirez (sqndr).

My Results

I scored an 83.33% (10% better than the Back End test... maybe I should stick to theming :P), with the following section-by-section breakdown:

  • Fundamental Web Development Concepts : 92.85%
  • Theming concepts: 73.33%
  • Templates and Pre-process Functions: 87.50%
  • Layout Configuration: 66.66%
  • Performance: 100.00%
  • Security: 100.00%

I'm not surprised I scored worst in Layout Configuration, as there were some questions about defining custom regions, overriding region-specific markup, and configuring certain things using the Breakpoints and Responsive Images module. I've done all these things, but only rarely, since you generally set up breakpoints only when you initially build the theme (and I only did this once), and I only deal with Responsive Images for a few specific image field display styles, so I don't use it enough to remember certain settings, etc.

It's good to know I keep hitting 90%+ on performance and security-related sections—maybe I should just give up site building and theming and become a security and performance consultant! (Heck, I do a lot more infrastructure-related work than site-building outside of my day job nowadays...)

This exam was not as difficult as the Back End Specialist exam, because Twig syntax and general theming principles are very consistent from Drupal 7 to Drupal 8 (and dare I say better and more comprehensible than in the Drupal 7 era!). I'm also at a slight advantage because almost all my Ansible work touches on Jinja2, which is the templating system that inspired Twig—in most cases, syntax, functions, and functionality are identical... you just use .j2 instead of .twig for the file extension!

Feb 24 2017
Feb 24

Drupal 8.3.0 release candidate phase

The release candidate phase for the 8.3.0 minor release begins the week of February 27. Starting that week, the 8.3.x branch will be subject to release candidate restrictions, with only critical fixes and certain other limited changes allowed.

8.3.x includes new experimental modules for workflows, layout discovery and field layouts; raises stability of the BigPipe module to stable and the Migrate module to beta; and includes several REST, content moderation, authoring experience, performance, and testing improvements among other things. You can read a detailed list of improvements in the announcements of alpha1 and beta1.

Minor versions may include changes to user interfaces, translatable strings, themes, internal APIs like render arrays and controllers, etc. (See the Drupal 8 backwards compatibility and internal API policy for details.) Developers and site owners should test the release candidate to prepare for these changes.

8.4.x will remain open for new development during the 8.3.x release candidate phase.

Drupal 8.3.0 will be released on April 5th, 2017.

No Drupal 8.2.x or 7.x releases planned

March 1 is also a monthly core patch (bug fix) release window for Drupal 8 and 7, but no patch release is planned. This is also the final bug fix release window for 8.2.x (meaning 8.2.x will not receive further development or support aside from its final security release window on March 15). Sites should plan to update to Drupal 8.3.0 on April 5.

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

Feb 24 2017
Feb 24

In this blog I want to explain the work we have done around the refactoring of the acl_contact_cache. In the previous sprints we discovered that a lot of performance was lost through the way the acl_contact_cache was used (or rather, not used at all). See also the previous blog post:

At the Socialist Party they have 350,000 contacts and around 300 users who can access CiviCRM. Most of the users are only allowed to see the members in their local chapter.

In the previous blog we explained the proof of concept. We now have implemented this proof of concept and the average performance increase was 60%.

We created a table which holds which user has access to which contacts, and we refill this table once every few hours. See also issue CRM-19934 for the technical implementation of this proof of concept.

Performance increase in the search query

In the following examples we are logged in as a local member who can only see members in the Amersfoort chapter. We then search for persons with the name 'Jan', and measure how long the search query takes.

The query for presenting the list with letters in the search result looks like:

SELECT count(DISTINCT contact_a.id) as rowCount
FROM civicrm_contact contact_a
LEFT JOIN civicrm_value_geostelsel geostelsel ON contact_a.id = geostelsel.entity_id
LEFT JOIN civicrm_membership membership_access ON contact_a.id = membership_access.contact_id
WHERE (contact_a.sort_name LIKE '%jan%')
AND (contact_a.id = 803832
  OR (
    ( geostelsel.`afdeling` = 806816 OR geostelsel.`regio` = 806816 OR geostelsel.`provincie` = 806816 )
    AND (
      membership_access.membership_type_id IN (1, 2, 3)
      AND (
        membership_access.status_id IN (1, 2, 3)
        OR (membership_access.status_id = '7' AND membership_access.end_date >= NOW() - INTERVAL 3 MONTH)
      )
    )
  )
  OR contact_a.id = 806816
)
AND (contact_a.is_deleted = 0)
ORDER BY UPPER(LEFT(contact_a.sort_name, 1)) asc;

As you can see, that is quite a complicated query, and it includes details about which members the user is allowed to see. Executing this query alone takes around 0.435 seconds, because MySQL has to check each record in civicrm_contact (which in this case is around 350,000 records and growing).

After refactoring the acl cache functionality in CiviCRM Core the query looks different:

SELECT DISTINCT UPPER(LEFT(contact_a.sort_name, 1)) as sort_name  
FROM civicrm_contact contact_a 
INNER JOIN `civicrm_acl_contacts` `civicrm_acl_contacts` ON `civicrm_acl_contacts`.`contact_id` = `contact_a`.`id`  
WHERE  (((( contact_a.sort_name LIKE '%jan%' ))))  
AND  `civicrm_acl_contacts`.`operation_type` = '2' 
AND `civicrm_acl_contacts`.`user_id` = '803832' 
AND `civicrm_acl_contacts`.`domain_id` = '1' 
AND (contact_a.is_deleted = 0)    
ORDER BY UPPER(LEFT(contact_a.sort_name, 1)) asc

The query now takes around 0.022 seconds to run (20 times faster).


How does this new functionality work?

1. Every time an ACL restriction is needed in a query, CiviCRM core simply does an inner join on the civicrm_acl_contacts table.

2. The inner join is generated by the 'acl_contact_cache' service; that service also checks whether the civicrm_acl_contacts table needs to be updated.

3. Whether an update of the civicrm_acl_contacts table is needed depends on the setting under Administer --> System Settings --> Misc --> ACL Contact Cache Validity (in minutes).

So what does this look like in code?

Below is an example of how you could use the acl_contact_cache service to inject ACL logic into your query:

// First get the service from the Civi Container
$aclContactCache = \Civi::service('acl_contact_cache'); // The $aclContactCache is a class based on \Civi\ACL\ContactCacheInterface
// Now get the aclWhere and aclFrom part for our query
$aclWhere = $aclContactCache->getAclWhereClause(CRM_Core_Permission::VIEW, 'contact_a');
$aclFrom = $aclContactCache->getAclJoin(CRM_Core_Permission::VIEW, 'contact_a');

// Now build our query
$sql = "SELECT contact_a.* FROM civicrm_contact contact_a ".$aclFrom." WHERE 1 AND ".$aclWhere;
// That is it now execute our query and handle the output...

The reason we use a service in the Civi Container class is that it is now also quite easy to override this part of core in your own extension.

The \Civi\ACL\ContactCache class has all the logic for building the ACL queries, meaning that this class contains the logic to interact with the ACL settings in CiviCRM, with permissioned relationships, etc. All those settings are taken into account when filling the civicrm_acl_contacts table, which is done per user and per operation once every three hours.

Feb 24 2017
Feb 24

If you want to create a web resource, or to implement certain improvements on an existing one, then you must find specialists who will make your ideas come true. It’s quite a challenging task. We offered our 10 tips for hiring a top web developer by describing the qualities they should possess. In addition, however, there are different types of web developers, with different skills, duties and types of work. The objectives and the scale of your project require appropriate experts. It might be a little complicated to define exactly whom you need, especially if you aren’t a dev yourself. We helped you see the difference between front-end and back-end development, and now we are going to help you distinguish between the main types of Drupal developers.

Site builder

Programming is what most developers deal with: they receive a task with requirements and write the appropriate code. In Drupal, an open source CMS with a large community, writing custom code is not obligatory. A great, innovative and feature-rich website can be built with only existing core modules and contribs. To solve a particular problem, Drupal site builders should have a deep understanding of modules, plugins and extensions, their pros and cons; know how they work together; and be aware of all their updates and upgrades. They must know how to use and configure all of Drupal’s potential to provide a wide spectrum of functionality for a usable and accessible web resource.

Theme developer

Drupal themes are responsible for the appearance of the site. They don’t influence the functionality much; however, they do help attract visitors and convince them to stay, if the themes are pleasant to view and easy to navigate. Themers are front-end developers who specialize in graphic design. The content and the structure of a Drupal site don’t depend on the theme, and remain unchanged when switching themes.

There are lots of free responsive Drupal themes. But if you want something special, a theme developer can either design a custom theme for you from scratch or create a subtheme, building on off-the-shelf elements and customizing existing ones. These specialists should be proficient in HTML and CSS, and sometimes in PHP and JavaScript, to turn a design into a working theme.

Module developer

If you need a custom theme, you should hire a theme developer. But if you need a custom module, then you should hire a module developer. If some functionality for your website can’t be created with the help of the existing Drupal modules, then these back end professionals are able to develop a personalized module specially for your site that fits your demands and wishes.

Sure, these three main types can be divided or supplemented with various other types. But we want to keep it simple, especially because Drupal developers usually possess many skills at once and can belong to several types at a time. True Drupal professionals are interested in continually learning something new, in extending their knowledge and enriching their skills in order to be able to handle more tasks. Our company allows you to hire any type of Drupal developer you need, or a team of them from our experienced experts. Contact us to discuss the specifics of your project.

Feb 24 2017
Feb 24

Drupal 8 integrating with Traditional ECMs to enhance Enterprise Content Management Capabilities

“Shifting business requirements for digital content and new technologies are changing the ECM market. By 2018, 50% of enterprises will manage their content using a hybrid content architecture.” - Gartner, Magic Quadrant for Enterprise Content Management 2016

Drupal 8, with its strong web content management system, has a role to play in integrating with the existing challengers/traditional ECMs to enhance their enterprise content management capabilities.

Some of the key Web Content Management Features of Drupal that can be leveraged to provide this integrated solution include:

  1. Rich Content Management Tools
  2. Responsive Layout and Design
  3. Social Media Tools & Search Engine Optimization
  4. Integration Capabilities - RESTful APIs Support
  5. Multi Domain Capabilities
  6. Multi Lingual Capabilities
  7. E Commerce Capabilities (capabilities to handle different types of domain content, like e-commerce, newspapers, etc.)

Rich Content Management Tools

With a large amount of today’s content being dynamic HTML with a need for multimedia support, Drupal’s content management tools can be taken advantage of.

A sitemap calls for different types of pages, broadly divided into landing pages, individual pages and functional pages. These can be easily achieved by using out-of-the-box content modelling tools in Drupal like content types, Views and callback functions. The flexibility for the editor to create dynamic landing pages with variants in UI (image and text combinations, layout variants) is possible by extending the content type interface.

Some of the key content management features include:

Rich Editorial Interface

  1. Interfaces with which editors can easily create and edit content. Drupal’s default content creation interface would be configured / customized to ensure that editors, depending on their roles and the categories of content they manage, can easily create content.
  2. When you need to make quick changes, choose in-context editing, better previews and drag-and-drop image uploads.
  3. The editors would be provided with a WYSIWYG editor with which content can be formatted interactively, courtesy of the Aloha Editor. Edit your rich text with your theme's direct styling through the inline editor. It even works with images plus captions, links directly to content in the site, and has basic support for tokenized strings.

Multimedia Asset Creation - Images and Videos

  1. Videos can be either uploaded to the site or embedded through a third-party site.
  2. An image library can be maintained within the CMS that can be accessed / used across articles. The EXIF data of the images can be read and stored in the system.

Content Publishing Workflow

Content publishing workflow allows a multi-stage publishing process involving authors, reviewers / publishers. Depending on the type of content and the publishing process, suitable workflows can be created.

Advanced Taxonomy Management

An advanced taxonomy system would allow categorization of content at a granular level. The hierarchical taxonomy provides a flexible means to create a content structure that well represents the various categories and sections within the site.

Content Promotion and Sequencing

Drupal’s node-queue system could be customized to provide editors with a flexible interface to manage content promotion and sequencing. For example, an editor would be able to manually promote content to the home page as “headlines” and sequence it based on importance.

Management of Social Media Posts

Editors would be able to moderate the content posted to social media platforms, wherever automatic publishing is available.

Layout Management

Editors would be able to manage the page layouts in terms of bringing in new blocks, repositioning blocks, selecting content for the blocks, etc. These aspects would need to be discussed during the design phase.

Responsive Layout Builder, courtesy of the Layout and Gridbuilder modules. You can configure layouts for separate breakpoints (e.g. Mobile, Tablet, Desktop) and even define your own grids for them to snap to.

The Panels module allows a site administrator to create customized layouts for multiple uses. At its core it is a drag and drop content manager that lets you visually design a layout and place content within that layout.

E-Newsletters Creation and Management

Editors could compose newsletters by picking content from various sections. Alternatively, the system can be configured to automatically compose newsletters by picking the latest news headlines, most-read articles, the latest images/videos from the gallery, etc. The reader can choose the appropriate categories to include in the newsletter. The send-out of newsletters can be integrated with a standard third-party bulk mailing system.

Responsive Design

With the mobile revolution, readers prefer consuming information on the go on handheld devices. Hence the website should be able to adapt itself to the device on which it is viewed and present unified branding.

Responsive design would be adopted with the mobile reader in mind, bringing focus to the most important and relevant content. The design moves away from throwing up lots of needless information to presenting what a reader actually needs. Usability and performance become important aspects to take care of when optimizing designs for a mobile reader. Therefore, it is not enough for a website to work on all devices; the website needs to respond optimally to the device’s screen size, bandwidth and resolution to be responsive.

Drupal supports building responsive themes and is compatible with key concepts in responsive design, including definition of breakpoints, integration with Modernizr and additional JavaScript libraries, and support for responsive images, videos and slideshows.
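In Drupal 8, those breakpoints are declared in a theme's *.breakpoints.yml file. A minimal sketch (the theme name and media queries here are examples only):

```yaml
# mytheme.breakpoints.yml
mytheme.mobile:
  label: Mobile
  mediaQuery: ''
  weight: 0
mytheme.tablet:
  label: Tablet
  mediaQuery: 'all and (min-width: 768px)'
  weight: 1
mytheme.desktop:
  label: Desktop
  mediaQuery: 'all and (min-width: 1200px)'
  weight: 2
```

Modules such as Responsive Image can then map image styles to these breakpoints.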

Additional powerful responsive features include: a mobile-first UI, a mobile-enabled editor interface, a mobile-friendly admin toolbar, and responsive preview.

Drupal supports building customized user interfaces. The User Interface templates would be themed using the HTML / CSS created.

Social Media Tools & Search Engine Optimization

Sharing / Bookmarking

Content can be shared in the popular social media sites. This would facilitate “Viral Marketing” and spread the brand. Social share features are readily available as contributed modules in Drupal.

Social Media Widgets

Drupal has support for popular social media widgets like Facebook, Google Plus, Twitter, Youtube and more.

Search Engine Optimization

The website supports different search engine optimization techniques. These include:

  1. Creation of different sitemaps
  2. Meta tagging capabilities
  3. Support for keywords
  4. Optimized HTML structure and page speed

Integration Capabilities - RESTful APIs Support

Using standard APIs to integrate with existing traditional ECM solutions, like Alfresco, is a possibility.

REST is one of the most popular ways of making Web Services work. REST utilizes HTTP methods, such as GET, POST and DELETE. Support for RESTful APIs and an API first approach makes integrations easier than ever. These integrations can be presented as views.

RESTful web services are included in Drupal 8 core and:

  1. Serialize entities using HAL
  2. Provide an HTTP Basic authentication process
  3. Expose entities and other resources as RESTful APIs
  4. Provide services to (de)serialize to/from JSON/XML
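For illustration, a node fetched with `?_format=hal_json` comes back as a HAL document roughly like this (the URLs and field values are made up, and most fields are omitted):

```json
{
  "_links": {
    "self": { "href": "https://example.com/node/1?_format=hal_json" },
    "type": { "href": "https://example.com/rest/type/node/article" }
  },
  "title": [ { "value": "Hello world" } ],
  "body": [ { "value": "<p>First post.</p>", "format": "basic_html" } ]
}
```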

Multi Domain Capabilities

Drupal supports different techniques to manage Multi portal architecture:

  1. Single code base varying databases - multi domain
  2. Single code base, single database – multi domain

Multi Lingual Capabilities

Drupal supports any language with built-in translation interfaces.

E Commerce Capabilities

Drupal’s Commerce Modules help build sites with Ecommerce capabilities. Some key features include:

  1. Create product types with Custom Attributes
  2. Dynamic Product Displays
  3. Order Management, line item
  4. Payment method API, allowing many different payment gateways
  5. Tax calculation / VAT support
  6. Discount pricing rules
  7. Advanced Product Search Interfaces

Integrating Web CMS like Drupal would provide the following benefits:

  1. Speed to Market - Faster Publishing of content
  2. Customer Experience Improvements by bringing a Uniform, Consistent Experience across the Different Channels
  3. Operational Efficiencies by bringing technologies that would assist in facilitating content publishing with minimal or zero support from technical team
  4. Use of single code base to manage multiple platforms to simplify management of code base
  5. Process Improvement by bringing in workflows and version control for Business approvals/compliance
Feb 24 2017
Feb 24

In Drupal 7 we used the Node Hierarchy module to keep track of a hierarchy of pages. Node Hierarchy ties directly into the menu system. Getting a list of all ancestors or descendants is an O(n) operation, and at least one site we use it on has a lot of nodes in the tree. Performance was terrible. Add to that, it has no notion of revisions or forward revisions, so changing the parent and saving a draft can cause all sorts of issues with your menu.

When the time came to update the site to Drupal 8, we took a different approach.

Nested Sets

The performance issues with Drupal 7 Node Hierarchy are due to the data structures being used to store the tree. We decided to dust off the old computer science textbooks, and look up the chapters on tree storage, and see what options we had.

Currently the data structure used to represent a tree is a Linked List where the table stores only three values:

  • ID
  • Parent ID
  • Weight (optional)

This means that when finding all descendants of a node, we need to do a query for entries with that node's ID stored as the parent ID. Once we get those, we do another query using each result's ID as the parent ID. Wash, rinse, repeat. You get the idea. For very large tables, querying for descendants or ancestors is very inefficient: O(n) in Big O notation.

Nested Sets represent the data in a different way. They use a table which includes:

  • ID
  • Left Position
  • Right Position
  • Depth (optional)

These left and right positions represent the set of all children contained within. The following diagram shows how this works.

(Diagram: nested set model. Nestedsetmodel.jpg by Sherah, derivative work by 0x24a537r9; public domain.)

In the example above, for Suits we store a left position of 3 and a right position of 8. Any child element must have a left position greater than 3 and a right position less than 8, as Slacks and Jackets do.

The benefit of storing information in this way becomes obvious when we need to find all descendants. We just query for rows where left is greater than the node’s left and right is less than the node’s right. All in a single query, even for thousands of nodes. That’s O(1) in Big O notation, and a massive improvement over O(n).
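The containment test can be sketched in a few lines. This toy, in-memory model uses the Suits interval from the example above (the other intervals are invented for illustration); the real entity_hierarchy implementation is a decoupled PHP library, not this Python:

```python
# Each node stores its (left, right) nested-set positions.
tree = {
    "Clothing": (1, 10),
    "Men's":    (2, 9),
    "Suits":    (3, 8),
    "Slacks":   (4, 5),
    "Jackets":  (6, 7),
}

def descendants(name):
    # A node is a descendant exactly when its interval nests inside ours:
    # its left > our left AND its right < our right.
    left, right = tree[name]
    return sorted(n for n, (l, r) in tree.items() if l > left and r < right)

print(descendants("Suits"))  # one pass over the data, no recursion
```

In SQL this becomes a single range query (`WHERE left > :left AND right < :right`), which is what makes the read side so fast.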

Updates, on the other hand, are slow. When we need to insert, delete or move a node, we potentially have to update all nodes in the tree. This is obviously an expensive and slow database update. However, given that in our case this is only done by content editors when making changes to the hierarchy, the tradeoff is well worth it compared to the far greater number of queries being made by end users.

Decoupling the Model from the Framework

Within PreviousNext our preferred approach to Drupal development is to start by modeling the domain logic in plain old PHP classes, then add Drupal wrappers and integration around it. Commonly known as hexagonal architecture or ports and adapters, this ensures our code is focussed on business rules and is easier to test and maintain. We will get into the details of this in a future post!

While thinking about how we could improve on Node Hierarchy for Entity Hierarchy in Drupal 8, we wanted to take the ‘separate the model from the framework’ approach and built a library that just deals with Nested Sets.

This library is completely decoupled from Drupal. There is no reference to any Drupal code in the code base. Instead of trying to work with Drupal’s database abstraction layer, (and making all of Drupal a dependency) we chose to use Doctrine DBAL as the database abstraction layer because of the simple API, the code maturity and the community around it.

We focussed on using PHP interfaces to decouple implementation, and a high level of testing to have confidence we are keeping data integrity.

We then went on to develop the Drupal 8 module for Entity Hierarchy, which requires the nested-set library. In order to provide the DBAL database connection it expects, we wrote a simple factory which takes the Drupal database connection and returns a DBAL one, called DBAL Connection.

Entity Hierarchy module provides a new field type that extends Entity Reference. To use it, you set up a new Entity Reference Hierarchy field on the child bundles and configure it to reference valid parent bundles. For example, you may have a section content type, under which live articles and events. To configure this sort of hierarchy, you create a new entity-reference hierarchy field on the article and event content types called Parents and configure it to allow a single reference to a section.

The field comprises the standard entity-reference autocomplete and select widgets, but also comes with a weight field which editors can use in a similar fashion to menu weights; this allows you to nominate the order of children in the tree.

When you update the child entities by changing the parent or the weight, the entity-reference hierarchy field type takes care to update the nested set.

Once you have entered your data and have your entities in a tree structure, you can then use the views integration to filter and order the tree.

For example, you could create a view with a contextual filter for 'Is a child of' and limit it to children and grandchildren. You could then embed this view on the parent, taking the entity ID from the URL as the contextual filter value. This would allow you to display children, grandchildren etc on the parent page.

The Future

Now that we have a proof of concept, our goal is to get Entity Hierarchy to a stable release, and have the rest of the Drupal community start using it and providing feedback (and fixes!). To this effect, we've released 8.x-2.0-alpha1 - please take it for a spin and use the issue queue to report any issues you encounter.

Looking further ahead, there is no reason this approach could not be used to replace Drupal’s existing Menu and Taxonomy hierarchies too.

At present the only formatters in the module just extend the standard Entity Reference ones in core. Our plan is to add a formatter that lets you configure how high in the hierarchy to traverse. This would allow you to have a formatter that showed fields from the root entity in the tree (multiple roots are possible). So returning to the section example, this would allow you to add a 'section image' field to the section, but have that display on any child articles or events, by way of the parent formatter. Follow along with development of that feature in the issue queue.

Let us know what you think in the comments!


Co-authored by Lee Rowlands.

Feb 23 2017
Feb 23

When thinking about ways to measure your website’s effectiveness, you may also want to think about the metrics you use to gauge the success of the website in accomplishing your business goals. How else do you measure success?

If you’ve determined that your website drives traffic and revenue, especially for e-commerce – congratulations, your metrics are built in! If you use new customer acquisition as a metric for success, things get a little trickier. If you have many marketing channels, it can be hard to determine how much comes from any single source – but driving traffic to your website can make it easier to measure the effectiveness of different campaigns. We also use Piwik in-house, a great traffic analytics package we can plug into your website that’s easy to use, easy to report against, and less confusing than Google Analytics.

Organizations may also measure success by establishing that their website provides information to their clients and is easily managed by internal personnel. If this is the case and already happening – perfect! If it isn’t, then with a little bit of planning, we can get this going in as little as one day with Drupal. You may also find yourself saying, “We have mountains of data on our website and need an easy way to manage it.” If so, we have some powerful tools for organizing, searching, and managing mountains of content, in the tens of thousands to hundreds of thousands of items. If you're talking about millions or more, you might need a Big Data solution….

Finally, many organizations rely on strong Customer Relationship Management (CRM) software and customer engagement tools to measure success. Corporations are constantly engaging leads, prospects, etc. to increase customer retention, track spending per order, increase new customer referrals, and so much more. Freelock does integrations all the time, but if your needs are modest, Drupal can do this entirely, without the need for another system! However, if you already have a system you are using outside of Drupal, it is quite possible to integrate that system with your website – for instance to report e-commerce sales from the website back to your CRM system.

Many may be cautious when selecting a vendor for work on their website or back-end software systems, and for good reason! We like to ask: what characteristics are most important to you when selecting a vendor? If you or your organization is most concerned about cost, this can be both a good and a bad thing. Quite often, a prospective client who is concerned with cost has used a one-person freelance developer before coming to us. They typically reach out to us because the previous developer either did a shoddy job or completely fell off the face of the earth. In either case, this isn’t helpful to a client who has spent thousands of dollars on that poor work. Then, once Freelock takes on the job, we’ll see terrible development practices and hacked modules – all big red flags. We offer great value to our clients, but we’re not low cost. However, we always find a way to work with a client’s budget and work towards those set goals and expectations.

If you’re not so concerned with cost, but your preference is to work with a vendor who has a long history of expertise, Freelock is a great fit. Freelock’s principal, John Locke, has been building websites since 1996, and Freelock has been in business for 15 years. Founded in 2002, we are innovators and leaders in the Drupal development community, we are abreast of cutting edge offerings for the platform, and can offer that breadth of knowledge to our clients in order to meet existing business needs and anticipate future requirements.

Oftentimes, a client needs to be much more specific when looking for a vendor and wants to find one that is most experienced with their preferred content management system (CMS). While Freelock’s currently preferred CMS is Drupal, we’re also steadily taking on more clients with WordPress. Over the years, Freelock has worked with Joomla, WordPress, ZenCart, OSCommerce, and many custom PHP and JavaScript frameworks. Since 2009, we've worked primarily with Drupal, because it can do what all those other systems can do, and we can get it done at lower cost. Always remember, though, that software frameworks all have their own tradeoffs; we have deep experience helping clients choose wisely, and it’s important to take those tradeoffs into consideration when deciding what best fits your specific needs.

If, when deciding on a vendor, your project is so large, with so many moving parts, that you’re really concerned about the size of the staff needed to deliver a viable product by your deadlines, then quite honestly you’re in a great situation. While we're a small team, we always deliver. We've rescued countless projects where other teams have failed, and carried them to completion. If you look at our client portfolio, you’ll notice that we work with some very large government and healthcare organizations, mid-sized non-profits, small private practices... and everything in between. Not 100% of projects have launched on time, but we’re personal, responsive, and always hands-on. Most project delays are due to a lack of client responsiveness – and hey, vendors do love to get paid, so it doesn’t make much sense to be unresponsive! On the other hand, if you’d rather just choose a name out of a hat, we have a recommendation: Freelock, Freelock, Freelock!

Feel free to contact us here, email us directly at [email protected], or call us at (206) 577-0540!

Feb 23 2017
Feb 23

Thirsty for Drupal knowledge? Want to dive deep into a topic and learn from the best in the field? Like to get hands-on with your learning material? We are excited to offer 10 full-day training classes at DrupalCon Baltimore that will turn you into a Drupal superhero. No matter if you are an absolute beginner or Drupal expert, our classes cover all experience levels.

Our world-class Drupal trainers are eager to share their knowledge in what may be our most diverse line-up yet. Check out brand-new classes like Evolving Web's Content Strategy for Drupal, or explore how to build interactive applications using Drupal 8 data in Four Kitchens' API First training.

Not surprisingly, a strong emphasis will be placed on what you need to know about Drupal 8. For example, get up to speed on Drupal 8 Module Development with DrupalEasy. But "in with the new" doesn’t necessarily mean "out with the old." We’re happy to have Zivtech returning with an all-time favorite, the Drupal DevOps training. Check out the full line-up to find the right class for you.

View All Training Courses

All courses are held on Monday, April 24, 9:00 a.m. - 5:00 p.m. Trainings are not included in a regular DrupalCon ticket and require a separate registration. You can save $50 if you purchase your training ticket at the early-bird rate of $450 by March 24. Light breakfast, lunch and coffee breaks are included with every training.

Our training courses are small by design, to provide attendees with plenty of one-on-one time with the instructors. However, each class must meet a minimum number of attendees by April 10 in order for the course to run. Help ensure your training class takes place by registering before April 10 - and remind friends and colleagues to attend.

Register Now


Feb 23 2017
Feb 23

The Drupal Association Engineering Team delivers value to all who are using, building, and developing Drupal. The team is tasked with keeping Drupal.org and all of its 20 subsites and services up and running. Their work would not be possible without the community, and the project would not thrive without close collaboration. This is why we are running a membership campaign all about the engineering team. These are a few of the recent projects where engineering team + community = win!

Want to hear more about the work of the team, rather than read about it? Check out this video from 11:15-22:00 where Tim Lehnen (@hestenet) talks about the team's recent and current work.

Leading the Documentation System migration

We now have a new system for Documentation. These are guides Drupal developers and users need to effectively build and use Drupal. The new system replaces the book outline structure with a guides system, where a collection of pages with their own menu are maintained by the people who volunteer to keep the guides updated, focused, and relevant. Three years of work from the engineering team and community collaborators paid off. Content strategy, design, user research, implementation, usability testing and migration have brought this project to life.

Screenshot: basic structure doc page for the Drupal 8 "Creating Custom Modules" section. Pages include code 'call-outs' for point-version specific information or warnings.

Thanks to the collaborators: 46 have signed up to be guide maintainers, the Documentation Working Group members (batigolix, LeeHunter, ifrik, eojthebrave), to tvn, and the many community members who write the docs!

Enabling Drupal contribution everywhere

Helping contributors is what we do best. Here are some recent highlights from the work we're doing to help the community:

Our current project in development to help contributors is revamping the project applications process. More on this soon on our blog.

When a community need doesn't match our roadmap

We have a process for prioritizing community initiatives so we can still help contributors. Thanks to volunteers who have proposed and helped work on initiatives recently, we've supported the launch of the Drupal 8 User Guide and the ongoing effort to bring Dreditor features into Drupal.org itself.

Thanks to the collaborators: jhodgdon, eojthebrave, and the contributors to the user guide. Thanks also to markcarver for the Dreditor effort.

How to stay informed and support our work

The change list and the roadmap help you to see what the board and staff have prioritized out of the many needs of the community.

You can help sustain the work of the Drupal Association by joining as a member. Thank you!

Feb 23 2017
Feb 23

In a traditional Drupal site, you don’t need to handle authentication, because Drupal handles everything by itself: getting a cookie, setting the session, error handling, and so on. But what about decoupled (headless Drupal) sites? How can we authenticate the user in a decoupled setup?

Before diving into this, we need to understand the authentication types provided by RESTful:

  1. Cookie - Validating the user cookie is not something new for us. We have been doing it for years, and it’s one of the first techniques web developers acquire. But to validate the request, we need to pass a CSRF token. This token helps make sure the form was not forged. An example could be a form that tweets on our behalf on Twitter. The existence of a valid CSRF token in the request makes sure an internet scammer could not generate the form and upload to Twitter a photo of a cat when you’re a dog person.

  2. Access token - RESTful will generate an access token and bind it to the user. Unlike the cookie, which RESTful only accepts together with a CSRF token, here we get a two-for-one deal: the existence of the access token in the request is verified, and the token also identifies the user it represents.
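To make the cookie option concrete, here is a minimal sketch of attaching the token to a request. The X-CSRF-Token header name follows the usual Drupal convention, but the token endpoint path in the comment is an assumption, so check your RESTful configuration:

```javascript
// Build the headers for a cookie-authenticated, CSRF-protected request.
// The session cookie itself travels automatically; the CSRF token must
// be added explicitly, or the request is rejected.
function csrfHeaders(csrfToken) {
  return { 'X-CSRF-Token': csrfToken };
}

// Illustrative usage with Angular's $http (the token endpoint path is
// an assumption):
// $http.get('http://YOURDRUPAL.COM/api/session/token').success(function (token) {
//   $http.post('http://YOURDRUPAL.COM/api/article',
//              { label: 'Hello' },
//              { headers: csrfHeaders(token) });
// });

console.log(csrfHeaders('abc123'));
```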

Important: in order to use access token authentication, you’ll need to enable the module RESTful token authentication (which is a submodule of RESTful).


Below, I show how an access token is generated using AngularJS. If the authentication process passes, the endpoint will return an object with 3 values:

  1. access_token - This is the token which represents the user in any request.

  2. expires_in - the number of seconds for which the access token is valid.

  3. refresh_token - Once the token is no longer valid, you’ll need to ask for a new one using the refresh token.

You can see below a small amount of Angular JS code:

$http.get('http://YOURDRUPAL.COM/api/login-token', {
 headers: {
   // HTTP Basic authentication with the user's Drupal credentials.
   'Authorization': 'Basic ' + Base64.encode(username + ':' + password)
 }
})
.success(function(data) {
 localStorageService.set('access_token', data.access_token);
});

And this is what you’ll get back:

 "access_token": "Y3wQua-qFY-muksrePaLqKdNmlGdBQK4dly-UhlJcYk",
 "type": "Bearer",
 "expires_in": 86400,
 "refresh_token": "xRP-nnKA05GGsN-jr80Z_hfPHqrkpwtAtevDSeRfbYU"



As mentioned above, the access token is only valid for a specific amount of time, usually 24 hours, so you’ll need to check it before each request:

// Note: expires_in is in seconds, so convert it to milliseconds before
// storing a timestamp comparable with getTime().
if (new Date().getTime() > localStorageService.get('expire_in')) {
 var refresh_token = localStorageService.get('refresh_token');
 $http.get('http://YOURDRUPAL.COM/refresh-token/' + refresh_token)
 .success(function(data) {
   localStorageService.set('access_token', data.access_token);
   localStorageService.set('refresh_token', data.refresh_token);
   localStorageService.set('expire_in', new Date().getTime() + data.expires_in * 1000);
 });
}


OK, so we got the access token and we can refresh it when it’s no longer valid. The next thing you need to know is how to inject the access token into the header:

$http.post('http://YOURDRUPAL.COM/api/article', {
 'label': 'zhilevan'
}, {
 headers: {
   'access-token': localStorageService.get('access_token')
 }
})
.success(function(data) {
 console.log('Well, now you can post a node (check your permissions before trying that)');
});

You can have a look at Gizra's yo hedley generator, which scaffolds a headless Drupal backend, to see how they implemented an HTTP interceptor to improve the process shown above.
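The interceptor idea boils down to something like this: a sketch, using the names from the snippets above, that attaches the stored access token to every outgoing request so callers never repeat the header logic by hand.

```javascript
// AngularJS (1.x) HTTP interceptor sketch: read the access token from
// storage and add it to the request headers when present.
function accessTokenInterceptor(localStorageService) {
  return {
    request: function (config) {
      var token = localStorageService.get('access_token');
      if (token) {
        config.headers = config.headers || {};
        config.headers['access-token'] = token;
      }
      return config;
    }
  };
}

// Registration (illustrative):
// app.factory('accessTokenInterceptor', ['localStorageService', accessTokenInterceptor]);
// app.config(['$httpProvider', function ($httpProvider) {
//   $httpProvider.interceptors.push('accessTokenInterceptor');
// }]);
```

The refresh logic from earlier can live in a matching responseError handler, retrying the request once a new token has been obtained.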

Feb 23 2017
Feb 23

Since its relaunch in 2015, the Drupal 7 powered site has been gaining popularity among artists and design students, becoming their go-to platform. To date, design students have uploaded over 700 portfolios providing guidance to enrolling candidates. These portfolios are linked to over 500 art faculties at hundreds of universities.

Before enrolling in a course, a candidate can research their local university and study other students' portfolios, or enroll in their local design course to prepare for the entry tests - all of it on the platform.

On top of that, students provide and collect support on the forum, which boasts over 20,000 users who have written nearly 250,000 posts. This may be the biggest and most beautiful forum built on top of Drupal core.

The most powerful feature, however, may be the ability for guests to create most of the site's content without having to go through any type of registration process. Visitors can go ahead and correct their school's information just by clicking 'edit'. Likewise, anyone can write a blog post - no account or personal information needed. We think this functionality has massively contributed to the quantity and quality of content on the site.

While the numbers of design students, universities and art schools registering with the platform have been growing steadily, the visionaries behind the project, Ingo Rauth and Wolfgang Zeh from projektgestalten, recently decided to take the platform to the next level by opening it up to jobseekers and providers as well. Consequently, gbyte has implemented the event functionality and the new job board.

This is not going to be the last improvement though; artists apparently have lots of creative ideas, and we look forward to implementing them. We feel that this project is a great showcase of Drupal's possibilities. If you would like to learn more about the project or its implementation, make sure to leave a comment below or contact us via the contact form.

Check out other technology-centric posts about the project as well as more screenshots on the project page.

Feb 22 2017
Feb 22

I’ve written previously about git workflow for working on patches, and about how we don’t necessarily need to move to a github-style system on drupal.org; we just maybe need better tools for our existing workflow. It’s true that much of it is repetitive, but repetitive tasks are ripe for automation. In the two years since I released Dorgpatch, a shell script that handles the making of patches for issues, I’ve been thinking about how much more of the patch workflow could be automated.

Now, I have released a new script, Dorgflow, and the answer is just about everything. The only thing that Dorgflow doesn’t automate is uploading the patch to drupal.org (and that’s because drupal.org’s REST API is read-only). Oh, and writing the code to actually fix bugs or create new features. You still have to do that yourself, along with your cup of coffee.

So assuming you’ve made your own hot beverage of choice, how does Dorgflow work?

Simply! To start with, you need to have an up to date git clone of the project you want to work on, be it Drupal core or a contrib project.

To start work on an issue, just do:

$ dorgflow [issue URL]

You can copy and paste the URL from your browser. It doesn’t matter if it has an anchor link on the end, so if you followed a link from your issue tracker and it has ‘#new’ at the end, or clicked down to a comment and it has ‘#comment-1234’ that’s all fine.
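In other words, only the node ID in the path matters. Here is a quick JavaScript sketch of that parsing (Dorgflow's own implementation may well differ):

```javascript
// Extract the issue number from a drupal.org issue URL, ignoring any
// '#new' or '#comment-1234' anchor on the end.
function issueNumberFromUrl(url) {
  var match = url.match(/\/node\/(\d+)/);
  return match ? match[1] : null;
}

console.log(issueNumberFromUrl('https://www.drupal.org/node/2782605#comment-11964946'));
// -> '2782605'
```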

The first thing this command does is make a new git branch for you, using the issue number and the name. It then also downloads and applies all the patch files from the issue node, and makes a commit for each one. Your local git now shows you the history of the work on the issue. (Note though that if a patch no longer applies against the main branch, then it’s skipped, and if a patch has been set to not be displayed on the issue’s file list, then it’s skipped too.)

Let’s see how this works with an actual issue. Today I wanted to review the patch on an issue for the Token module. The issue URL is https://www.drupal.org/node/2782605. So I did:

$ dorgflow https://www.drupal.org/node/2782605

That got me a git history like this:

* 6d07524 (2782605-Move-list-of-available-tokens-from-Help-to-Reports) Patch from Comment: 35; URL:; file: token-move-list-of-available-tokens-2782605-34.patch; fid 5784728. Automatic commit by dorgflow.
* 6f8f6e0 Patch from Comment: 15; URL:; file: 2782605-13.patch; fid 5710235. Automatic commit by dorgflow.
* a3b68cc (8.x-1.x) Issue #2833328 by Berdir: Handle bubbleable metadata for block title token replacements
* [older commits…]

What we can see here is:

  • Git is now on a feature branch, called ‘2782605-Move-list-of-available-tokens-from-Help-to-Reports’. The first part is the issue number, and the rest is from the title of the issue node on drupal.org.
  • Two patches were found on the issue, and a commit was made for each one. Each patch’s commit message gives the comment index where the patch was posted, the URL to the comment, the patch filename, and the patch file entity ID (these last two are less interesting, but are used by Dorgflow when you update a feature branch with newer patches from an issue).

The commit for patch 35 will obviously only show the difference between it and patch 15, an interdiff effectively. To see what the patch actually contains, take a diff from the master branch, 8.x-1.x.

(As an aside, the trick to applying a patch that’s against 8.x-1.x to a feature branch that already has commit for a patch is that there is a way to check out files from any git commit while still keeping git’s HEAD on the current branch. So the patch applies, because the files look like 8.x-1.x, but when you make a commit, you’re on the feature branch. Details are on this Stack Overflow question.)

At this point, the feature branch is ready for work. You can make as many commits as you want. (You can rename the branch if you like, provided the ‘2782605-’ part stays at the beginning.) To make your own patch with your work, just run the Dorgflow script without any argument:

$ dorgflow

The script detects the current branch, and from that, the issue number, and then fetches the issue node from drupal.org to get the number of the next comment to use in the patch filename. All you now have to do is upload the patch, and post a comment explaining your changes.

Alternatively, if you’re a maintainer for the project, and the latest patch is ready to be committed, you can do the following to put git into a state where the patch is applied to the main development branch:

$ dorgflow commit

At that point, you just need to obtain the git commit command from the issue node. (Remember the standard Drupal git message format, and check that the attribution for the work on the issue is correct!)

What if you’ve previously reviewed a patch, and now there’s a new one? Dorgflow can download new patches with this command:

$ dorgflow update

This compares your feature branch to the issue node’s patches, and any patches you don’t yet have get new commits.

If you’ve made commits for your own work as well, then effectively there’s a fork in play, as your development in your commits and the other person’s patch are divergent lines of development. Appropriately, Dorgflow creates a separate branch. Your commits are moved onto this branch, while the feature branch is rewound to the last patch that was already there, and then has the new patches applied to it, so that it now reflects work on the issue. It’s then up to you to do a git merge of these two branches in order to combine the two lines of development back into one.

Dorgflow is still being developed. There are a few ideas for further features in the issue queue on github (not to mention a couple of bugs for some of the various possible cases the update command can encounter). I’m also pondering whether it’s worth the effort to convert the script to use Symfony Console; feel free to chime in with any opinions on the issue for that.

There are tests too, as it’s pretty important that a script that does things to your git repository does what it’s supposed to (though the only command that does anything destructive is ‘dorgflow cleanup’, which of course asks for confirmation). Having now written this, I’m obviously embarking upon cleaning it up and to some extent rewriting it, though I do have the excuse that the early weeks of working on this were the days after the late nights awake with my newborn daughter, and so the early versions of the code were written in a haze of sleep deprivation. If you’d like to submit a pull request, please do check in with me first on an issue to ensure it’s not going to clash with something I’m partway through changing.

Finally, if you find this as useful as I do (this was definitely an itch I’ve been wanting to scratch for a long time, as well as being a prime case of condiment-passing), please tell other Drupal developers about it. Let’s all spend less time downloading, applying, and rolling patches, and more time writing Drupal code!

Feb 22 2017
Feb 22

Continuing along with my series of reviews of Acquia Developer Certification exams (see the previous one: Drupal 8 Site Builder Exam), I recently took the Back End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on it below.

Acquia Certified Drupal Site Builder - Drupal 8 2016
I didn't get a badge with this exam, just a cert... so here's the previous exam's badge!

Acquia finally updated the full suite of Certifications—Back/Front End Specialist, Site Builder, and Developer—for Drupal 8, and the toughest exams to pass continue to be the Specialist exams. This exam, like the Drupal 7 version of the exam, requires a deeper knowledge of Drupal's core APIs, layout techniques, Plugin system, debugging, security, and even some esoteric things like basic webserver configuration!

A lot of new content makes for a difficult exam

Unlike the other exams, this exam sets a bit of a higher bar—if you don't do a significant amount of Drupal development and haven't built at least one or two custom Drupal modules (nothing crazy, but at least some block plugins, maybe a service or two, and some other integrations), then it's likely you won't pass.

There are a number of questions that require at least working knowledge of OOP, Composer, and Drupal's configuration system—things that an old-time Drupal developer might know absolutely nothing about! I didn't study for this exam at all, but would've likely scored higher if I spent more time going through some of the awesome Drupal ladders or other study materials. The only reason I passed is I work on Drupal 8 sites in my day job, and have for at least 6 months, and in my work I'm exposed to probably 30-50% of Drupal's APIs.

Unlike in Drupal 7, there are no CSS-related questions and few UI-related questions whatsoever. This is a completely new and more difficult exam that covers a lot of corners of Drupal 8 that you won't touch if you're mostly a site builder or themer.

My Results

I scored a 73%, with the following section-by-section breakdown:

  • Fundamental Web Concepts: 80.00%
  • Drupal core API: 55.00%
  • Debug code and troubleshooting: 75.00%
  • Theme Integration: 66.66%
  • Performance: 87.50%
  • Security: 87.50%
  • Leveraging Community: 100.00%

I am definitely least familiar with Drupal 8's core APIs, as I tend to stick to solutions that can be built with pre-existing modules, and have as yet avoided diving too deeply into custom code for the projects I work on. Drupal 8 is really streamlined in that sense—I can do a lot more just using Core and a few Contrib modules than I could've done in Drupal 7 with thousands of lines of custom code!

Also, I'm still trying to wrap my head around the much more formal OOP structure of Drupal (especially around caching, plugins, services, and theme-related components), and I bet that I could score 10% or more higher in another 6 months, just due to familiarity.

I also scored fairly low on the 'debug code and troubleshooting' section, because it dealt with some lower-level debugging tools than what I prefer to use day-to-day. I use Xdebug from time to time, and it really is necessary for some things in Drupal 8 (where it wasn't so in Drupal 7), but I stick to Devel's dpm() and Devel Kint's kint() as much as I can, so I can debug in the browser where I'm more comfortable.

In summary, this exam was by far the toughest one I've taken, and the first one where I'd consider studying a bit before attempting to pass it again. I've scheduled the D8 Front End Specialist exam for next week, and I'll hopefully have time to write a review of it on this blog after that—I want to see if it's as difficult (especially regarding Twig debugging and the render system changes) as the D8 Back End Specialist exam was!

Feb 22 2017
Feb 22

by Elliot Christenson on February 22, 2017 - 3:12pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Views module to fix an Access Bypass vulnerability.

The Views module allows site builders to create listings of various data in the Drupal database.

The Views module fails to call db_rewrite_sql() on queries that list Taxonomy Terms, which could cause private data stored on Taxonomy Terms to be leaked to users without permission to view it.

This is mitigated by the fact that a View must exist that lists Taxonomy Terms which contain private data. If all the data on Taxonomy Terms is public or there are no applicable Views, then your site is unaffected.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Views module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on drupal.org).

Feb 22 2017
Feb 22

From 16-19 February, the first Drupal Mountain Camp took place in Davos, Switzerland. A very diverse crowd of 135 attendees from 17 different countries came together to share the latest and greatest in Drupal 8 development, as well as case studies from Swiss Drupal vendors.

When we started organizing Drupal Mountain Camp in the summer of 2016, it was hard to predict how much interest it would attract and how many people would join the camp. By reaching out to the local and international Drupal ecosystem, we were excited to get so many people to attend from all around the world, including Australia, India, and the US.

Drupal Mountain Camp Team

As a team of a dozen organizers, we split up the tasks: setting up the venue, registration, social media, room monitoring and much more. It was great to see that we were able to split the workload across the entire team and keep it well balanced.

Drupal Mountain Camp Workshops

We are very thankful for 30 different speakers who travelled from afar and worked hard to share their expertise with the crowd. As a program organizer I might be biased, but I truly believe that the schedule was packed with great content :)

In addition to the sessions, we also provided free workshop trainings to help spread some more Drupal love.

Drupal Mountain Camp Speaker Dinner

We took all the speakers up to the mountain for Switzerland's most popular dish, cheese fondue, to say thank you for their sessions and inputs.

Drupal Mountain Camp Speaker Sledding

With Drupal Mountain Camp we wanted to set a theme that would not only excite attendees with Swiss quality sessions but also create a welcoming experience for everyone. On top of our Code of Conduct, we organized various social activities that would allow attendees to experience Switzerland, snow and the mountains.  

Drupal Mountain Camp Sprints

Sprints are an essential way to get started with contributing to Drupal. At Drupal Mountain Camp, we organized a first-time sprinter workshop and had sprint rooms open from Thursday until Sunday, with many sprinters collaborating.

Drupal Mountain Camp

For our hosting company, Drupal Mountain Camp was a great opportunity to demonstrate our Docker-based development environment and scalable cluster stack running on a set of Raspberry Pis.

Drupal Mountain Camp Snow

And of course, we ended the conference with skiing and snowboarding in the Swiss mountains :)

Pictures from the camp are available: a selection and the full set. Curious about the next Drupal Mountain Camp? Follow us on Twitter to stay up to date, and see you at the next event.

Feb 22 2017
Feb 22

For over 15 years, Seniorlink has pioneered solutions for caregivers across the nation, helping them provide their loved ones with the highest quality care. Seniorlink needed to upgrade their website to a modern, sleek and easy-to-navigate one that introduces Seniorlink as the parent company of Caregiver Homes and Vela, and promotes Seniorlink as a trusted and credible source of information and support for home care.

Third & Grove built the new site for Seniorlink on Drupal, providing an easy admin interface and a secure editing environment, as well as a scalable platform for future downstream integrations and personalization.

View the Case Study

Feb 21 2017
Feb 21

Drupal 8 has several solutions and methods to manage access rights on each element included in a piece of content, and this in a very granular way. Enabling view or edit access on a field included in a content type can be achieved very simply, with a few lines of code, or with the Field Permissions module. We can use this module to allow certain roles to view or update a particular field.

The case of documents associated with content is slightly different. You may want to grant view rights to a document or file attached to a content item (via a File field) while controlling the right to download that document. In other words, you may want to manage the right to download a file while still exposing its visibility (and thus its existence).

This is where the Protected file module comes in. It lets you define, for each attached file, whether downloading it is publicly accessible or requires a particular role. For a protected file, the module then presents a configurable alternate link (for example, a link to the authentication page) instead of the download link.

Let's discover this module.

Prerequisites for the installation of the module

In order to control access to files, this module only works if the site has a private file system configured. Indeed, files stored on Drupal's public file system are served directly by the web server, so Drupal cannot control access rights to them.

Using the module

The Protected file module provides a new field type called ... Protected file. This new field type extends the File field type provided by Drupal core and is nearly identical in configuration. To enable file access control, we need to add a new field to our content.

Configuring the Protected file

Let's add a Protected file field to our article content type.

Adding the field

And we can configure its storage settings.

Field storage settings

We note that the private file system is automatically selected and locked. We configure the field for an unlimited number of files.

Then we configure the parameters of the instance of this field on the content type Article on which we created it.

Field settings

We configure the various parameters, which are identical to those of a standard File field type (allowed extensions, upload directory, maximum file size, etc.).

Configuring Display Settings for the Protected File Field

We configure the display settings for our new field.

Field display settings

We have several options. We can:

  • Choose whether or not the file opens in a new tab
  • Configure the URL that will replace the file's download URL when the user does not have sufficient access rights
  • Choose whether or not to open the previously defined URL in a modal window
  • Define the message that will feed the title tag of the link set above. This message is provided as a variable to the link template and can therefore be customized with a simple override of the template in your theme

Configuring permissions

All you have to do is set the permissions according to your needs.

Permissions protected file

And the configuration is complete. We can now publish content and associated documents, protected or not.

Enabling Download Protection

Using the module is really simple. In the content creation / editing form we can, for each uploaded file, enable this protection by checking the corresponding checkbox.

File upload form

And the result: an authenticated user can access the files' download links.

Protected files, downloadable

And for anonymous visitors:

Protected files

In the example above, the download link of the PDF file example 1 has been replaced by the URL we defined in the display settings (/user/login), and a click on the protected file opens a modal window on this page.

Login modal window

The Protected file module makes it simple to control access to the documents attached to content. It should be noted that direct download links (sent by e-mail by an authenticated user, for example) are also covered, and require the same access rights.

Feb 21 2017
Feb 21

If you believe the docs and the twitters, there is no way to automate letsencrypt certificate updates on You have to create the certificates manually, upload them manually, and maintain them manually.

But as readers of this blog know, the docs are only the start of the story. I’ve really enjoyed working with it for one of my private clients, and I couldn’t believe that with all the flexibility – all the POWER – letsencrypt was really out of reach. I found a few attempts to script it, and one really great snippet on gitlab. But no one had ever really synthesized this stuff into an easy howto. So here we go.

1) Add some writeable directories where CLI and letsencrypt need them.

Normally when Platform deploys your application, it puts it all in a read-only filesystem. We’re going to mount some special directories read-write so all the letsencrypt/platform magic can work.

Edit your application’s file, and find the mounts: section. At the bottom, add these three lines. Make sure to match the indents with everything else under the mounts: section!

"/web/.well-known": "shared:files/.well-known"
"/keys": "shared:files/keys"
"/.platformsh": "shared:files/.platformsh"

Let’s walk through each of these:

  • /web/.well-known: In order to confirm that you actually control, letsencrypt drops a file somewhere on your website, and then tries to fetch it. This directory is where it’s going to do the drop and fetch. My webroot is web, you should change this to match your own environment. You might use public or www or something.
  • /keys: You have to store your keyfiles SOMEWHERE. This is that place.
  • /.platformsh: Your master environment needs a bit of configuration to be able to login to platform and update the certs on your account. This is where that will go.

2) Expose the .well-known directory to the Internet

I mentioned above that letsencrypt tests your control over a domain by creating a file which it tries to fetch over the Internet. We already created the writeable directory where the scripts can drop the file, but the platform (wisely) defaults to hiding your directories from the Internet. We’re going to add some configuration to the “web” app section to expose this .well-known directory. Find the web: section of your file, and the locations: section under that. At the bottom of that section, add this:

        '/.well-known':
            # Allow access to all files in the public files directory.
            allow: true
            expires: 5m
            passthru: false
            root: 'web/.well-known'
            # Do not execute PHP scripts.
            scripts: false

Make sure you match the indents of the other location entries! In my (default) file, I have 8 spaces before that '/.well-known': line. Also note that the root: parameter uses my webroot directory, so adjust that to fit your environment.

3) Download the binaries you need during the application “build” phase

In order to do this, we’re going to need the CLI tool, and a letsencrypt CLI tool called lego. We’ll download them during the “build” phase of your application. Still in the file, find the hooks: section, and the build: section under that. Add these steps to the bottom of the build:

  cd ~
  curl -sL | tar -C .global/bin -xJ --strip-components=1 lego/lego
  curl -sfSL -o .global/bin/platform.phar

We’re just downloading reasonably recent releases of our two tools. If anyone has a better way to get the latest release of either tool, please let me know. Otherwise we’re stuck keeping this up to date manually.

4) Configure the CLI

In order to configure the CLI on your server, we have to deploy the changes from steps 1-3. Go ahead and do that now. I’ll wait.

Now connect to your platform environment via SSH (platform ssh -e master for most of us). First we’ll add a config file for platform. Edit a file at .platformsh/config.yaml with your editor of choice. You don’t have to use vi, but it will win you some points with me. Here are the contents for that file:

    check: false
    token_file: token

Pretty straightforward: this tells platform not to bother updating the CLI tool automatically (it can’t – read-only filesystem, remember?). It then tells it to log in using an API token, which it can find in the file .platformsh/token. Let’s create that file next.

Log into the web UI (you can launch it with platform web if you’re feeling sassy), and navigate to your account settings > api tokens. That’s at (with your own user ID of course). Add an API token, and copy its value into .platformsh/token on the environment we’re working on. The token should be the only contents of that file.

Now let’s test it by running php /app/.global/bin/platform.phar auth:info. If you see your account information, congratulations! You have a working CLI installed.

5) Request your first certificate by hand

Still SSH’ed into that environment, let’s see if everything works.

lego --email="[email protected]" --domains="" --webroot=/app/public/ --path=/app/keys/ -a run
csplit -f /app/keys/certificates/ /app/keys/certificates/ '/-----BEGIN CERTIFICATE-----/' '{1}' -z -s
php /app/.global/bin/platform.phar domain:update -p $PLATFORM_PROJECT --no-wait --yes --cert /app/keys/certificates/ --chain /app/keys/certificates/ --key /app/keys/certificates/

This is three commands: register the cert with letsencrypt, then split the resulting file into its components, then register those components with If you didn’t get any errors, go ahead and test your site – it’s got a certificate! (yay)
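To sanity-check the csplit step in isolation, here is a minimal sketch using a dummy combined PEM file (the certificate contents and the cert. output prefix are placeholders, not real certificates):

```shell
# Build a dummy combined PEM: a leaf certificate followed by a chain cert.
cat > combined.pem <<'EOF'
-----BEGIN CERTIFICATE-----
leaf-cert-data
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
chain-cert-data
-----END CERTIFICATE-----
EOF

# Split at each BEGIN CERTIFICATE marker: -z drops the empty leading piece,
# -s suppresses the byte-count output, -f sets the output file prefix.
csplit -z -s -f cert. combined.pem '/-----BEGIN CERTIFICATE-----/' '{1}'

# cert.00 should now hold the leaf certificate, cert.01 the chain.
grep -q leaf-cert-data cert.00 && grep -q chain-cert-data cert.01 && echo "split OK"
```

Those two output files are what gets passed to the --cert and --chain options of the domain:update command.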

6) Set up automatic renewals on cron

Back to, look for the crons: section. If you’re running Drupal, you probably have a Drupal cronjob in there already. Add this one at the bottom, matching indents as always.

    spec: '0 0 1 * *'
    cmd: '/bin/sh /app/scripts/'

Now let’s create the script. Add the file scripts/ to your repo, with this content:

#!/usr/bin/env bash

# Checks and updates the letsencrypt HTTPS cert.

set -e

if [ "$PLATFORM_ENVIRONMENT" = "master-7rqtwti" ]; then
    # Renew the certificate
    lego --email="[email protected]" --domains="" --webroot=/app/web/ --path=/app/keys/ -a renew
    # Split the certificate from any intermediate chain
    csplit -f /app/keys/certificates/ /app/keys/certificates/ '/-----BEGIN CERTIFICATE-----/' '{1}' -z -s
    # Update the certificates on the domain
    php /app/.global/bin/platform.phar domain:update -p $PLATFORM_PROJECT --no-wait --yes --cert /app/keys/certificates/ --chain /app/keys/certificates/ --key /app/keys/certificates/
fi

Obviously you should replace all those example.orgs and email addresses with your own domain. Make the file executable with chmod u+x scripts/, commit it, and push it up to your environment.

7) Send a bragging email to Crell

Technically this isn’t supposed to be possible, but YOU DID IT! Make sure to rub it in.

"Larry is waiting to hear from you. (photo credit Jesus Manuel Olivas)"

Good luck!

PS – I’m just gonna link one more time to the guy whose snippet made this all possible: Ariel Barreiro did the hardest part of this. I’m grateful that he made his notes public!

Feb 21 2017
Feb 21

I’ve dreamed of a day when systems start to work like the home automation and listening (NSA Spying…) devices that people are inviting into their home. “Robots” that listen to trigger words and act on commands are very exciting. What’s most interesting to me in trying to build such systems is,… they really aren’t that hard any more. Why?

Well, the semantic web is what’s delivering the things for Siri, Google and Alexa to say on the other end. When you ask about something and it checks wikipedia THAT IS AMAZING…. but not really that difficult. The human voice is being continuously mapped and improved upon in accuracy daily as a result of people using things like Google Voice for years (where you basically give them your voice as data pieces in order to improve their speech engines).

So I said, well, I’d like to play with these things. I’ve written about VoiceCommander in the past but it was mostly proof of concept. Today I’d like to announce the release of VoiceCommander 2.0 with built-in support for “Ok Google” style wikipedia voice querying!

To do this, you’ll need a few things:

Enable the voicecommander_whatis module, tweak the voicecommander settings to your liking, and then you’ll be able to build things like in this demo. The first video is a quick one-minute look at a voice-based navigational system (this is how we do it in ELMSLN). The second is me talking through what’s involved and what’s actually happening, as well as A/B comparing different library configuration settings and how they relate to accuracy downstream.

Feb 21 2017
Feb 21

Besides being the recent desired destination for Instagram #wanderlust-ers, Iceland is now home to an exciting new Drupal event: DrupalCamp Northern Lights. With twenty speakers, lots of coffee, and a planned sightseeing trip to see the Golden Circle and Northern Lights, it is sure to be an exciting inaugural event.

A small crew of Palantiri will be proudly representing, so if you are making the trek overseas, keep an eye out and say hi to Allison Manley, Michelle Jackson, and Megh Plunkett while you’re taking in the sessions and sights.

Check out the schedule and make sure to stop by our sessions.

Kickoff Meetings, by Allison Manley

  • Time: Saturday, 10:45 - 11:35
  • Location: Room ÞINGVELLIR

How do you make the most use of your face-to-face time with your client and lay the groundwork for a successful project?

Allison will outline how to get the most out of the kickoff meetings that initiate any project. She'll talk about pre-meeting preparation and how to keep organized, and also give some tips on agenda creation, how to keep meetings productive (and fun), and what steps need to be taken once the meetings adjourn.

Competitive Analysis: Your UX must-have on a budget, by Michelle Jackson

  • Time: Sunday, 14:15-15:00
  • Location: Room ÞINGVELLIR

A tight budget and time constraints can make dedicating time and resources to understanding audience needs challenging. Competitive analysis is an affordable way to evaluate how competitor sites are succeeding or failing to meet the needs of your audience.

Michelle will cover how competitive analysis can help you avoid competitor pitfalls, gain insight into what your users want, and lead to better decision-making before you invest in and implement new designs and technical features.

7 Facts You Might Not Have Known About Iceland

  • Iceland was one of the last places on earth to be settled by humans.
  • They are getting their first Costco in May.
  • 60% of the Icelandic population lives in Reykjavík.
  • Babies in Iceland are routinely left outside to nap.
  • Surprisingly, Iceland is not the birthplace of ice cream.
  • First names not previously used in Iceland must be approved by the Icelandic Naming Committee.
  • Owning a pet turtle is against the law. Sorry Rafael, Franklin, and this kid:

I like turtles

Fact Sources:,

We want to make your project a success.

Let's Chat.
Feb 21 2017
Feb 21

Our tradition of presenting you short overviews of several modules of the month continues with today’s article. Previously we offered you some great contributed Drupal 8 modules in June 2016 and a collection of modules in May 2016. In this 2017 edition we present modules whose latest available Drupal 8 releases arrived at the beginning of this year.


Token

The Token module provides additional tokens not supported by core (for example, field values), along with a user interface for browsing them. It is required by the Pathauto module: applying both of them lets you form the URL path alias of the current node from a text pattern with the help of tokens.


Pathauto

The Pathauto module allows you to get SEO-friendly URLs for any site page. There is no need to manually specify the path and URL alias, as this module produces them automatically for different content types: taxonomy terms, nodes and users. These aliases are based on a "pattern" system, which uses tokens that commonly copy the content page topic or article’s title, and which can be changed by the admin. So, together with Token, these two key modules are important for further extending the functionality of URL paths and aliases.


Diff

The Diff module helps you notice changes. It adds a tab for users who have sufficient permissions, where they can see all revisions of a page as well as all words and phrases that were added, changed or deleted between revisions.

Views Slideshow

This module creates slideshows of images or any other type of content from Views. It is easy to customize, allowing you to select settings for each View that you have created in your slideshow.

Flex Slider

This module integrates the Flex Slider library with Drupal and some contributed modules, enabling you to create responsive slideshows that automatically adapt to different sizes of device screens or browser windows. Flex Slider provides configurable slide animations, multiple sliders per page and much more.


Superfish

With the help of the Superfish module you can integrate the jQuery plugin called Superfish with your Drupal 8 menus. It allows you to add some "splash" to Drupal menus almost effortlessly.


Simplenews

The Simplenews module helps you easily and quickly inform many people at once by sending e-mails to one or more mailing lists of those who have subscribed to your newsletters. Those lists may consist of both authenticated and anonymous users.


Webform

The Webform module best suits those who need many flexible and customizable webforms on sites built with Drupal. It supports contest, petition or contact webforms to be filled out by site visitors. The latest 8.x-5.x version offers a new approach to form building: it provides object-oriented design patterns, extendable plugins, automated tests and more.


CAPTCHA

You can use this module to prevent spambots from submitting your webforms. It provides different challenge-response tests, allowing you to identify whether a human or a bot is taking an action online.

We hope our list of 2017 modules is useful and that you’ll apply some of them on your Drupal website. If you need experienced developers, we offer our help.

Feb 21 2017
Feb 21


Organized by the Icelandic Drupal community, the inaugural Northern Lights Drupal Camp will take place this weekend, February 24th - 26th, 2017, at the University of Iceland in Reykjavik. We are honored that our Digital Strategist, Jim Birch, was invited to speak.

Jim will present his Holistic SEO and Drupal talk, which covers the modern state of Search Engine Optimization and how we at Xeno Media define best practices for technical SEO using Drupal. It also presents ideas on how to guide and empower clients to create the best content to achieve their digital goals.

This presentation will review:

  • What Holistic SEO is, and some examples of modern search results explained.
  • The most common search engine ranking factors, and how to keep up to date.
  • An overview of Content strategy and how it can guide development.
  • An overview of technical SEO best practices in Drupal.

The presentation is:

  • Session time slot: Sunday 15:15 - 16:00
  • Session room: Room Eyjafjallajökull

View the full schedule.

Feb 21 2017
Feb 21

I've been having tremendous fun writing tutorials about each of the Drupal 8 APIs in turn, and I hope people have been finding them useful. They've certainly been eye-openers for me, as I've always focussed on achieving a clear worked example, and doing that alone unearths all sorts of questions (and usually—but not always—answers) about how Drupal 8's core itself works.

However, as I'm about to start a big project, I'm going to take a break from writing tutorials. They're fun, like I say, but unfortunately they don't pay the bills; they certainly don't keep the cat in the fishy biscuits to which she's accustomed. So the recent post on the database abstraction layer is the last, for at least the foreseeable future. But that means I've covered eighteen of the thirty-six or so featured topics on api.d.o: half-way seems a good point to pause and take stock, regardless!

If you're at all interested in learning Drupal 8's APIs, then do feel free to have a read of some of the posts, as they're not going anywhere, and neither am I, really. Try out some of the worked examples; and like the very kind commenters before you, remember to leave a note about where I might have gone wrong. That way I can fix it for the next reader.

Before I next write a tutorial, then, I might see you at Drupalcamp London instead; just... don't ask me anything too complicated about the APIs!

Feb 20 2017
Feb 20

This is an ode to Dirk Engling’s OpenTracker.

It’s a BitTorrent tracker.

It’s what powered The Pirate Bay in 2007–2009.

I’ve been using it to power the downloads on since the end of November 2010 – more than six years. It has facilitated 9,839,566 downloads between December 1, 2010 and today. That’s almost 10 million downloads!


It’s one of the most stable pieces of software I’ve ever encountered. I compiled it in 2010, and it has never once crashed. I’ve seen uptimes of hundreds of days.

[email protected]:~$ ls -al /data/opentracker
total 456
drwxr-xr-x  3 wim  wim   4096 Feb 11 01:02 .
drwxr-x--x 10 root wim   4096 Mar  8  2012 ..
-rwxr-xr-x  1 wim  wim  84824 Nov 29  2010 opentracker
-rw-r--r--  1 wim  wim   3538 Nov 29  2010 opentracker.conf
drwxr-xr-x  4 wim  wim   4096 Nov 19  2010 src
-rw-r--r--  1 wim  wim 243611 Nov 19  2010 src.tgz
-rwxrwxrwx  1 wim  wim  14022 Dec 24  2012 whitelist


The simplicity is fantastic. Getting started is incredibly simple: git clone git:// .; make; ./opentracker and you’re up and running. Let me quote a bit from its homepage, to show that it goes the extra mile to make users successful:

opentracker can be run by just typing ./opentracker. This will make opentracker bind to and happily serve all torrents presented to it. If ran as root, opentracker will immediately chroot to . and drop all priviliges after binding to whatever tcp or udp ports it is requested.

Emphasis mine. And I can’t emphasize my emphasis enough.

Performance & efficiency

All the while handling dozens of requests per second, opentracker causes less load than background processes of the OS. Let me again quote a bit from its homepage:

opentracker can easily serve multiple thousands of requests on a standard plastic WLAN-router, limited only by your kernels capabilities ;)

That’s also what the homepage said in 2010. It’s one of the reasons why I dared to give it a try. I didn’t test it on a “plastic WLAN-router”, but everything I’ve seen confirms it.


Its defaults are sane, but what if you want to have a whitelist?

  1. Uncomment the #FEATURES+=-DWANT_ACCESSLIST_WHITE line in the Makefile.
  2. Recompile.
  3. Create a file called whitelist, with one torrent hash per line.

Have a need to update this whitelist, for example a new release of your software to distribute? Of course you don’t want to reboot your opentracker instance and lose all current state. It’s got you covered:

  1. Append a line to whitelist.
  2. Send the SIGHUP UNIX signal to make opentracker reload its whitelist.
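As a concrete sketch of those two steps (the info-hash below is a placeholder, and the signal is only sent if an opentracker process is actually running):

```shell
# Append a new torrent's 40-character hex info-hash to the whitelist file.
echo "0123456789abcdef0123456789abcdef01234567" >> whitelist

# Ask a running opentracker (if any) to re-read its whitelist in place,
# without a restart and without losing current peer state.
if pidof opentracker >/dev/null 2>&1; then
  kill -s HUP "$(pidof opentracker)"
fi
```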


I’ve been in the process of moving off of my current (super reliable, but also expensive) hosting. There are plenty of specialized companies offering HTTP hosting and even rsync hosting. Thanks to their standardization and consequent scale, they can offer very low prices.

But I also needed to continue to run my own BitTorrent tracker. There are no companies that offer that. I don’t want to rely on another tracker, because I want there to be zero affiliation with illegal files. This is a BitTorrent tracker that does not allow anything to be shared: it only allows the software releases made by to be downloaded.

So, I found the cheapest VPS I could find, with the least amount of resources. For USD $13.50, I got the lowest specced VPS from a reliable-looking provider: with 128 MB RAM. Then I set it up:

  1. ssh‘d onto it.
  2. rsync‘d over the files from my current server (alternatively: git clone and make)
  3. added @reboot /data/opentracker/opentracker -f /data/opentracker/opentracker.conf to my crontab.
  4. removed the CNAME record for, and instead made it an A record pointing to my new VPS.
  5. watched on both the new and the old server, to verify traffic was moving over to my new cheap opentracker VPS as the DNS changes propagated

Drupal module

Since runs on Drupal, there of course is an OpenTracker Drupal module to integrate the two (I wrote it). It provides an API to:

  • create .torrent files for certain files uploaded to Drupal
  • append to the OpenTracker whitelist file
  • parse the statistics provided by the OpenTracker instance

You can see the live stats at


opentracker is the sort of simple, elegant software design that makes it a pleasure to use. And considering the low commit frequency over the past decade, with many of those commits being nitpick fixes, it seems its simplicity also leads to excellent maintainability. It involves the HTTP and BitTorrent protocols, yet relies on only a single I/O library, and its source code is very readable. Not only that, but it’s also highly scalable.

It’s the sort of software many of us aspire to write.

Finally, its license. A glorious license indeed!

The beerware license is very open, close to public domain, but insists on honoring the original author by just not claiming that the code is yours. Instead assume that someone writing Open Source Software in the domain you’re obviously interested in would be a nice match for having a beer with.

So, just keep the name and contact details intact and if you ever meet the author in person, just have an appropriate brand of sparkling beverage choice together. The conversation will be worth the time for both of you.

Dirk, if you read this: I’d love to buy you sparkling beverages some time :)

Feb 20 2017
Feb 20

I’ve been building websites for the last 10 years. Design fads come and go but image galleries have stood the test of time and every client I’ve had has asked for one.

There are a lot of image gallery libraries out there, but today I want to show you how to use Juicebox.

Juicebox is an HTML5 responsive image gallery and it integrates with Drupal using the Juicebox module.

Juicebox is not open source; instead it offers a free version which is fully usable, but you are limited to 50 images per gallery. The pro version allows for unlimited images and more features.

If you’re looking for an alternative solution look at Slick, which is open source, and it integrates with Drupal via the Slick module. I will cover this module in a future tutorial.

In this tutorial, you’ll learn how to display an image gallery from an image field and how to display a gallery using Views.

Getting Started

First, go download and install the Juicebox module.

Using Drush:

$ drush dl juicebox
$ drush en juicebox

Download Juicebox Library

Go to the Juicebox download page and download the free version.

Extract the downloaded file and copy the jbcore folder within the zip file into /libraries and rename the jbcore directory to juicebox.

Once everything has been copied and renamed, the path to juicebox.js should be /libraries/juicebox/juicebox.js.
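To double-check the layout from the command line, here is a small sketch run from the Drupal docroot; the mkdir and touch lines merely simulate the copy described above:

```shell
# Simulate the install steps (in practice you copy the contents of the
# downloaded jbcore folder here instead of creating an empty file):
mkdir -p libraries/juicebox
touch libraries/juicebox/juicebox.js

# This is the exact path the Juicebox module checks for:
test -f libraries/juicebox/juicebox.js && echo "Juicebox library found"
```

If the test fails on a real site, the module will report that the library is missing.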

Create a Gallery Using Fields

We’ll first look at how to create a gallery using just an image field. To do this, we’ll create an image field called “Image gallery” and this field will be used to store the images.

1. Go to Structure, “Content types” and click on “Manage fields” on the Article row.

2. Click on “Add field” and select Image from “Add a new field”.

3. Enter “Image gallery” into Label and click on “Save and continue”.

4. Change “Allowed number of values” to Unlimited and click on “Save field settings”.

You’ll need to do this if you want to store multiple images.

5. On the Edit page leave it as is and click on “Save settings”.

Configure Juicebox Gallery Formatter

Now that we’ve created the image fields, let’s configure the actual Juicebox gallery through the field formatter.

1. Click “Manage display”, and select “Juicebox Gallery” from the Format drop-down on the “Image gallery” field.

2. Click on the cogwheel to configure the gallery. Now there a lot of options but the only change we’ll make is to set the image alt text as the caption.

3. Click on the “Lite config” field-set and change the height to 500px.

4. Reorder the field so it’s below Body.

5. Click on Save at the bottom of the page.

Now if you go and create a test article and add images into the gallery you should see them below the Body field.

Create a Gallery Using Views

You’ve seen how to create a gallery using just the Juicebox gallery formatter, let’s now look at using Views to create a gallery.

We’ll create a single gallery that’ll use the first image of every gallery on the Article content type.

1. Go to Structure, Views and click on “Add view”.

2. Fill in the “Add new view” form with the values defined in Table 1-0.

Table 1-0. Create a new view

  • View name: Article gallery
  • Machine name: article_gallery
  • Show: Content type of Article sorted by Newest first
  • Create a page: Unchecked
  • Create a block: Unchecked

3. Click on Add in the Fields section.

4. Search for the “Image gallery” field and add it to the view.

5. Change the Format from “Unformatted list” to “Juicebox Gallery” and click on Apply.

6. On the style options screen, select the image field you added to the view earlier in “Image Source” and “Thumbnail Source”.

You can configure the look and feel by expanding the “Lite config” field-set. You can change the width and height, text color and more.

7. Click on Apply.

8. Click on Add next to Master and select Page from the drop-down.

9. Make sure you set a path in the “Page settings” section. Add something like /gallery.

10. Do not forget to save the View by clicking on Save.

11. Make sure you have some test articles and go to /gallery. You should see a gallery made up of the first image from each gallery.


The reason I like Juicebox in Drupal is that it’s easy to set up. With little effort you can get a nice responsive image gallery from a field or a view. The only downside I can see is that it’s not open source.


Q: I get the following error message: “The Juicebox Javascript library does not appear to be installed. Please download and install the most recent version of the Juicebox library.”

This means Drupal can’t detect the Juicebox library in the /libraries directory. Refer to the “Getting started” section.

Feb 20 2017
Feb 20

Pay them in Tacos!

Say our partners, who form the Chromatic brain trust (Chris, Dave and Mark), do something crazy like base our reward system on the number of HeyTaco! emojis given out amongst team members in Slack. (Remember, this is an, ummmm, hypothetical example.) Now, say we wanted to display the taco leaderboard as a block on the Chromatic HQ home page. It's not like the taco leaderboard needs minute-by-minute updates, so it is a good candidate for caching.

Why Cache?

What do we save by caching? Grabbing something that has already been built is quicker than building it from scratch. It's the difference between grabbing a Big Mac from McDonald's vs buying the ingredients from the supermarket, going home and making a Big Mac in your kitchen.

So, instead of each page refresh requiring a call to the HeyTaco! API, we can just tell Drupal to cache the leaderboard block and display the cached results. Instead of taking seconds to generate the page holding the block, it takes milliseconds to display the cached version (e.g. 2.97s vs. 281ms in my local environment).

Communicate with your Render Array

We have to remember that it's important that our render array - the thing that renders the HTML - knows to cache itself.

"It is of the utmost importance that you inform the Render API of the cacheability of a render array." - From D.O.'s page about the cacheability of render arrays

The above quote is what I'll try to explain, showing some of the nitty gritty with the help of a custom module and the HeyTaco! block it builds.

I created a module called heytaco and below is the build() function from my HeyTacoBlock class. As its name suggests, it's the part of the code that builds the HeyTaco! leaderboard block.

/**
 * Provides a HeyTaco! results block.
 *
 * @Block(
 *   id = "heytaco_block",
 *   admin_label = @Translation("HeyTaco! Leaderboard"),
 * )
 */
class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

  // __construct() and create() factory methods here.

  /**
   * {@inheritdoc}
   */
  public function build() {
    $user_id = $this->account->id();
    return [
      '#theme' => 'heytaco_block',
      '#results' => $this->returnLeaderboard($user_id),
      '#partner_asterisk_blurb' => $this->isNotPartner($user_id),
      '#cache' => [
        'keys' => ['heytaco_block'],
        'contexts' => ['user'],
        'tags' => ['user_list'],
        'max-age' => 3600,
      ],
    ];
  }

}

For the purposes of the rest of the blog post, I'll focus on the above code's #cache property, specifically its metadata:

  • keys
  • contexts
  • tags
  • max-age

I'm going to go through them similarly to (and inspired by) what is on the aforementioned D.O. page about the cacheability of render arrays.


From ...what identifies the thing I'm rendering?

In my words: This is the "what", as in "What entity is being rendered?". In my case, I'm just showing the HeyTaco! block and it doesn't have multiple displays from which to choose. (I will handle variations later using the contexts parameter.)

Many core modules don't include keys at all, or define only a single key. For instance:

toolbar module

  • 'keys' => ['toolbar'],

dynamic_page_cache module

  • 'keys' => ['response'],

views module

After looking through many core modules, I (finally) found multiple values for a keys definition in the views module, in DisplayPluginBase.php:

  '#cache' => [
    'keys' => ['view', $view_id, 'display', $display_id],
  ],

So, in the views example above, the keys are telling us the "what" by telling us the view ID and its display ID.

I'd also mention that on D.O. you will find this tidbit: Cache keys must only be set if the render array should be cached.


From Does the representation of the thing I'm rendering vary per ... something?

In my words: This is the "which", as in, "Which version of the block should be shown?" (Sounds a bit like keys, right?)

The Finalized Cache Context API page tells us that when cache contexts were originally introduced they were regarded as "special keys" and keys and contexts actually intermingled. To make for a better developer experience, contexts was separated into its own parameter.

Going back to the need to vary an entity's representation, we see that what is rendered for one user might need to be rendered differently for another user, (ex. "Hello Märt" vs "Hello Adam"). If it helps, the D.O. page notes that, "...cache contexts are completely analogous to HTTP's Vary header." In the case of our HeyTaco! block, the only context we care about is the user.

Keys vs Contexts

Amongst team members, we discussed the difference between keys and contexts quite a bit. There is room for overlap between the two and I netted out at keeping things simple: let keys broadly define the thing being represented and let contexts take care of the variations. So keys are for completely different instances of a thing (ex. different menus, users, etc.) Contexts are for varying an instance, as in, "When should this item look different to different types of users?"
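To make the distinction concrete, here is a toy sketch in plain PHP (an illustration, not Drupal's actual internals): conceptually, the final cache ID combines the keys with the resolved per-request context values, so one set of keys can yield a separate cached copy per user.

```php
<?php

// Illustration only: Drupal builds the real cache ID internally, but the
// idea is that keys identify the thing being cached, while resolved
// context values (e.g. the current user) fan it out into separate entries.
function cache_id(array $keys, array $resolved_contexts): string {
  return implode(':', array_merge($keys, $resolved_contexts));
}

// Same keys, different users, different cache entries:
$partner_cid = cache_id(['heytaco_block'], ['user.3']);
$teammate_cid = cache_id(['heytaco_block'], ['user.7']);

echo $partner_cid . "\n";  // heytaco_block:user.3
echo $teammate_cid . "\n"; // heytaco_block:user.7
```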

Different Contexts, Different Caching

To show our partners (Chris, Dave and Mark, remember?) how great they are I have added 100 tacos to their totals without telling them. When they log in to the site, they see an unassuming leaderboard with impressive Top 3 totals for themselves.

Partners Don't Know Their Taco Stats are Padded!

Partner HeyTaco! leaderboard

However, I don't want the rest of the team feeling left out, so for them I put asterisks next to the inflated taco totals and note that those totals have been modified.

Asterisks Remind us of the Real Score

Nonpartner HeyTaco! leaderboard with asterisks

So, our partners see one thing and the rest of our users see another, but all of these variations are still cached! I use contexts to allow different caching for different people. But remember, contexts aren't just user-based; they can also be based on ip, theme or url, to name a few examples. There is a list of cache contexts defined as services in Drupal core's core.services.yml file. Look for the entries prefaced with cache_context (ex. cache_context.user, cache_context.theme).


From Which things does it depend upon, so that when those things change, so should the representation?

In my words: What are the bits and pieces used to build the markup such that if any of them change, then the cached markup becomes outdated and needs to be regenerated? For instance, if a user changes her username, any cached instances using the old name will need to be regenerated. The tags may look like 'tags' => ['user:3'],. For HeyTaco!, I used 'tags' => ['user_list'],. This means that any user changing his/her user info will invalidate the existing cached block, forcing it to be rendered anew for everyone.
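The mechanics of tag invalidation can be sketched with a toy cache in plain PHP (an illustration of the concept, not Drupal's Cache API): anything cached with a tag disappears when that tag is invalidated.

```php
<?php

// Toy tag-aware cache, for illustration only.
class TaggedCache {
  private array $items = [];

  public function set(string $cid, string $value, array $tags): void {
    $this->items[$cid] = ['value' => $value, 'tags' => $tags];
  }

  public function get(string $cid): ?string {
    return $this->items[$cid]['value'] ?? NULL;
  }

  // Drop every entry that carries one of the invalidated tags.
  public function invalidateTags(array $tags): void {
    foreach ($this->items as $cid => $item) {
      if (array_intersect($item['tags'], $tags)) {
        unset($this->items[$cid]);
      }
    }
  }
}

$cache = new TaggedCache();
$cache->set('heytaco_block:user.3', '<ul>...leaderboard...</ul>', ['user_list']);

// Saving any user account would invalidate 'user_list', so every cached
// copy of the block gets regenerated on the next request.
$cache->invalidateTags(['user_list']);
var_dump($cache->get('heytaco_block:user.3')); // NULL
```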


From When does this rendering become outdated? Is it only valid for a limited period of time?

In my words: If you want to give the rendering a maximum validity period, after which it is forced to refresh itself, then decide how many seconds and use max-age; the default is forever (Cache::PERMANENT).

In Cache-tastic Conclusion

So that's my stab at exploring #cache metadata and I feel like this is something that requires coding practice, with different use cases, to grasp what each metadata piece does.

For instance, I played with tags in my HeyTaco! example for quite some time. Using 'tags' => ['user:' . $user_id] only regenerated the block for the active user who changed his/her own info. So, I came upon an approach of passing all the team's IDs into Cache::buildTags(), like this: Cache::buildTags('user', $team_uids). It felt ugly because I had to grab all the user IDs and put them into $team_uids manually. (What if we had thousands of users?) In my experimentation, that was the only way I could get the block updated if any user had his/her info changed.
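For what it's worth, Cache::buildTags() essentially glues a prefix onto each ID. Here is a plain-PHP sketch of what it computes (an illustration, not core's code):

```php
<?php

// What Cache::buildTags('user', $team_uids) produces, re-implemented in
// plain PHP for illustration purposes.
function build_tags(string $prefix, array $suffixes, string $glue = ':'): array {
  $tags = [];
  foreach ($suffixes as $suffix) {
    $tags[] = $prefix . $glue . $suffix;
  }
  return $tags;
}

$team_uids = [3, 7, 12];
$tags = build_tags('user', $team_uids); // ['user:3', 'user:7', 'user:12']
```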

However, after all that, Gus Childs reviewed my blog post and, since he knew of the existence of the node_list tag, he posited that all I needed to use as my tag is user_list, as in ('tags' => ['user_list'],). So, instead of manually grabbing user IDs, I just had to know to use 'user_list'. Thanks Gus!

Another colleague, Adam, didn't let me get away with skipping dependency injection in my sample code. He also questioned the difference between keys and contexts and made me think more about this stuff than is probably healthy.

Feb 20 2017
Feb 20

In this article we are going to look at how we can render images using image styles in Drupal 8.

In Drupal 7, rendering images with a particular style (say the default "thumbnail") was done by calling theme_image_style() (via the theme() function) and passing the image URI and the image style you want to render (plus some other optional parameters):

$image = theme('image_style', array('style_name' => 'thumbnail', 'path' => 'public://my-image.png'));

You'll see this pattern all over the place in Drupal 7 codebases.

The theme prepares the URL for the image, runs the image through the style processors and returns a themed image (via theme_image()). The function it uses internally for preparing the url of the image is image_style_url() which returns the URL of the location where the image is stored after being prepared. It may not yet exist, but on the first request, it would get generated.

So how do we do it in Drupal 8?

First of all, image styles in Drupal 8 are configuration entities. This means they are created and exported like many other things. Second of all, in Drupal 8 we no longer (should) call theme functions like above directly. What we should do is always return render arrays and expect them to be rendered somewhere down the line. This helps with things like caching etc.

So to render an image with a particular image style, we need to do the following:

$render = [
    '#theme' => 'image_style',
    '#style_name' => 'thumbnail',
    '#uri' => 'public://my-image.png',
    // Optional parameters.
];

This would render the image tag with the image having been processed by the style.

Finally, if we just want the URL of an image with the image style applied, we need to load the image style config entity and ask it for the URL:

$style = \Drupal::entityTypeManager()->getStorage('image_style')->load('thumbnail');
$url = $style->buildUrl('public://my-image.png');

So that is it. You now have the image URL which will generate the image upon the first request.

Remember, though, to inject the entity type manager when you are in a context that allows it.

Feb 20 2017
Feb 20


Savas Labs has been using Docker for our local development and CI environments for some time to streamline our systems. On a recent project, we chose to integrate Phase 2’s Pattern Lab Starter theme to incorporate more front-end components into our standard build. This required building a new Docker image for running applications that the theme depends on. In this post, I’ll share:

  • A Dockerfile used to build an image with Node, npm, PHP, and Composer installed
  • A docker-compose.yml configuration and Docker commands for running theme commands such as npm start from within the container

Along the way, I’ll also provide:

  • A quick overview of why we use Docker for local development
    • This is part of a Docker series we’re publishing, so be on the lookout for more!
  • Tips for building custom images and running common front-end applications inside containers.


We switched to using Docker for local development last year and we love it - so much so that we even proposed a DrupalCon session on our approach and experience that we hope to deliver. Using Docker makes it easy for developers to quickly spin up consistent local development environments that match production. In the past we used Vagrant and virtual machines, even a Drupal-specific flavor, DrupalVM, for these purposes, but we've found Docker to be faster when switching between multiple projects, which we often do on any given workday.

Usually we build our Docker images from scratch to closely match production environments. However, for agile development and rapid prototyping, we often make use of public Docker images. In these cases we’ve relied on Wodby’s Docker4Drupal project, which is “a set of docker containers optimized for Drupal.”

We’re also fans of the atomic design methodology and present our clients with interactive style guides early to facilitate better collaboration throughout. Real interaction with the design is necessary from the get-go; gone are the days of the static Photoshop file at the outset that “magically” translates to a living design at the end. So when we heard of the Pattern Lab Starter Drupal theme which leverages Pattern Lab (a tool for building pattern-driven user interfaces using atomic design), we were excited to bake the front-end components into our Docker world. Oh, the beauty of open source!

Building the Docker image

To experiment with the Pattern Lab Starter theme we began with a vanilla Drupal 8 installation, and then quickly spun up our local Docker development environment using Docker4Drupal. We then copied the Pattern Lab Starter code to a new themes/custom/pattern_lab_starter directory in our Drupal project.

Running the Phase 2 Pattern Lab Starter theme requires Node.js, the node package manager npm, PHP, and the PHP dependency manager Composer. Node and npm are required for managing the theme’s node dependencies (such as Gulp, Bower, etc.), while PHP and Composer are required by the theme to run and serve Pattern Lab.

While we could install these applications on the host machine, outside of the Docker image, that defeats the purpose of using Docker. One of the great advantages of virtualization, be it Docker or a full VM, is that you don’t have to rely on installing global dependencies on your local machine. One of the many benefits of this is that it ensures each team member is developing in the same environment.

Unfortunately, while Docker4Drupal provides public images for many applications (such as Nginx, PHP, MariaDB, Mailhog, Redis, Apache Solr, and Varnish), it does not provide images for running the applications required by the Pattern Lab Starter theme.

One of the nice features of Docker though is that it is relatively easy to create a new image that builds upon other images. This is done via a Dockerfile which specifies the commands for creating the image.

To build an image with the applications required by our theme we created a Dockerfile with the following contents:

FROM node:7.1
MAINTAINER Dan Murphy <[email protected]>

RUN apt-get update && \
    apt-get install -y php5-dev && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    # Directory required by Yeoman to run.
    mkdir -p /root/.config/configstore && \
    # Clean up.
    apt-get clean && \
    rm -rf \
      /root/.composer \
      /tmp/* \
      /usr/include/php \
      /usr/lib/php5/build

# Permissions required by Yeoman to run:
RUN chmod g+rwx /root /root/.config /root/.config/configstore

EXPOSE 3001 3050

The commands in this Dockerfile:

  • Set the official Node 7 image as the base image. This base image includes Node and npm.
  • Install PHP 5 and Composer.
  • Make configuration changes necessary for running Yeoman, a popular Node scaffolding system used to create new component folders in Pattern Lab.
  • Expose ports 3001 and 3050 which are necessary for serving the Pattern Lab style guide.

From this Dockerfile we built the image savaslabs/node-php-composer and made it publicly available on DockerHub. Please check it out and use it to your delight!

One piece of advice I have for building images for local development is that while Alpine Linux-based images may be much smaller in size, their bare-bones nature and lack of common packages bring some trade-offs that make them more difficult to build upon. For that reason, we based our image on the standard Debian Jessie Node image rather than the Alpine variant.

This is also why we didn’t just simply start from the wodby/drupal-php:7.0 image and install Node and npm on it. Unfortunately, the wodby/drupal-php image is built from alpine:edge which lacks many of the dependencies required to install Node and npm.

Now a Docker purist might critique this image and recommend only “one process per container”. This is a drawback of this approach, especially since Wodby already provides a PHP image with Composer installed. Ideally, we’d use that in conjunction with separate images that run Node and npm.

However, the theme’s setup makes that difficult. Essentially PHP scripts and Composer commands are baked into the theme’s npm scripts and gulp tasks, making it difficult to untangle them. For example, the npm start command runs Gulp tasks that depend on PHP to generate and serve the Pattern Lab style guide.

Due to these constraints, and since this image is for local development, isn’t being used to deploy a production app, and encapsulates all of the applications required by the Pattern Lab Starter theme, we felt comfortable with this approach.

Using the image

To use this image, we specified it in our project’s docker-compose.yml file (see full file here) by adding the following lines to the services section:

node-php-composer:
  image: savaslabs/node-php-composer:1.2
  ports:
    - "3050:3050"
    - "3001:3001"
  volumes_from:
    - php

This defines the configuration that is applied to a node-php-composer container when spun up. This configuration:

  • Specifies that the container should be created from the savaslabs/node-php-composer image that we built and referenced previously
  • Maps the container ports to our host ports so that we can access the Pattern Labs style guide locally
  • Mounts the project files (that are mounted to the php container) so that they are accessible to the container.

With this service defined in the docker-compose.yml we can start using the theme!

First we spin up the Docker containers by running docker-compose up -d.

Once the containers are running, we can open a Bash shell in the theme directory of the node-php-composer container by running the command:

docker-compose run --rm --service-ports -w /var/www/html/web/themes/custom/pattern_lab_starter node-php-composer /bin/bash

We use the --service-ports option to ensure the ports used for serving the style guide are mapped to the host.

Once inside the container in the theme directory, we install the theme’s dependencies and serve the style guide by running the following commands:

npm install --unsafe-perm
npm start

Voila! Once npm start is running we can access the Pattern Lab style guide at the URLs that are output, for example http://localhost:3050/pattern-lab/public/.

Note: Docker runs containers as root, so we use the --unsafe-perm flag to run npm install with root privileges. This is okay for local development, but would be a security risk if deploying the container to production. For information on running the container as an unprivileged user, see this documentation.

Gulp and Bower are installed as theme dependencies during npm install, therefore we don’t need either installed globally in the container. However, to run these commands we must shell into the theme directory in the container (just as we did before), and then run Gulp and Bower commands as follows:

  • To install Bower libraries run $(npm bin)/bower install --allow-root {project-name} --save
  • To run arbitrary Gulp commands run $(npm bin)/gulp {command}

Other commands listed in the Pattern Lab Starter theme README can be run in similar ways from within the node-php-composer container.


Using Docker for local development has many benefits, one of which is that developers can run applications required by their project inside containers rather than having to install them globally on their local machines. While we typically think of this in terms of the web stack, it also extends to running applications required for front-end development. The Docker image described in this post allows several commonly used front-end applications to run within a container like the rest of the web stack.

While this blog post demonstrates how to build and use a Docker image specifically for use with the Pattern Lab Starter theme, the methodology can be adapted for other uses. A similar approach could be used with Zivtech’s Bear Skin theme, which is another Pattern Lab based theme, or with other contributed or custom themes that rely on npm, Gulp, Bower, or Composer.

If you have any questions or comments, please post them below!

Feb 18 2017
Feb 18

Drupal's API has a huge number of very useful utility classes and functions, especially in Drupal 8. Although the API docs are great, it's rather impossible to always find every little feature. Today I want to show you the Random utility class, which I had nearly overlooked and found rather by accident.

On a project I'm currently working on, I have defined a custom entity type, for which I needed a quick way to autogenerate dummy and test data. In a first shot, the code generated 50 identical items, all assigned the same static "lorem ipsum" title and description text, and all assigned the same test image file. To improve that behaviour and get distinct data, I was looking in the Drupal API docs for a suitable helper, which I didn't find at first glance. I was already on the way to integrating a simple text generation script I'd found on GitHub when I had to pause work. A few days later, while working on a different project, I stumbled across the \Drupal\Component\Utility\Random class, which covers exactly the kind of functionality I was looking for.

The Random class offers different functions to generate names, strings, words (there are semantic differences; e.g. "words" are strings that look like real words, i.e. blind text), whole sentences and paragraphs (consisting of sentences), PHP objects and even generated images.
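To give a feel for what "word-like" means, here is a toy plain-PHP version of the idea. (Random::word() works in a similar spirit, alternating consonants and vowels; this sketch is an illustration, not core's implementation.)

```php
<?php

// Toy "pronounceable word" generator: alternating consonants and vowels
// makes the output read like a word instead of random noise.
function toy_word(int $length): string {
  $vowels = str_split('aeiou');
  $consonants = str_split('bcdfghjklmnpqrstvwz');
  $word = '';
  for ($i = 0; $i < $length; $i++) {
    $pool = ($i % 2 === 0) ? $consonants : $vowels;
    $word .= $pool[array_rand($pool)];
  }
  return $word;
}

echo toy_word(8); // e.g. "kotibalu"
```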

Here's a snippet out of my generation script, that shows the generation of words, names, paragraphs and especially images, that are stored as file entities and assigned to an image field:

    for ($i = 0; $i < $count; $i++) {
      // Randomly choose the item's owner.
      $owner = $uids[array_rand($uids)];

      // Define the full image path.
      $destination_dir = sprintf('public://uploads/%s/%s.jpg', $owner, $random->name(10, TRUE));
      // Generate the random image (width 700px, height 466px).
      $image_path = $random->image($destination_dir, '700x466', '700x466');
      // Save the generated image as a file entity.
      $image_file = File::create([
        'uri' => $image_path,
        'uid' => $owner,
        'status' => 1,
      ]);
      $image_file->save();

      $item = RentableItem::create([
        'type' => 'default',
        'title' => $random->word(rand(5, 12)),
        'state' => 'draft',
        'category' => $category_ids[array_rand($category_ids)],
        'description' => $random->paragraphs(2),
        'rent' => rand(1, 15),
        'deposit' => rand(5, 200),
        'uid' => $owner,
        'images' => $image_file->id(),
      ]);
      $item->save();
    }

Please note:

  1. Don't forget to import the namespaces of the Random and File classes or fully qualify them (use Drupal\Component\Utility\Random; and use Drupal\file\Entity\File;).
  2. The RentableItem class refers to a custom entity type that you won't find anywhere. You can use nodes, taxonomy terms or any other content entity instead; that's not the important part of this script.
  3. For better understanding: $uids and $category_ids are arrays of user entity IDs and taxonomy term IDs, defined earlier in the script.
  4. If you look into the docs of the image() function, you'll find wrong and incomplete documentation of the parameters. Stick to the code in my example instead. I've already opened an issue and proposed a patch.

That's it. Have fun generating your own dummy content :) And when you're looking at the Random class, go along and have a look at its siblings in the Drupal\Component\Utility namespace; you'll probably find a lot of other utilities you'll need quite often.

Feb 17 2017
Feb 17

Join as a member to keep Drupal.org thriving. Drupal.org is home of the Drupal project and the Drupal community. It has been continuously operating since 2001. The Engineering Team, along with amazing community webmasters, keeps Drupal.org alive and well. As we launch the first membership campaign of 2017, our story is all about this small and productive team.

Join us as we celebrate all that the engineering team has accomplished: from helping grow Drupal adoption to enabling contribution, from improving infrastructure to making development faster. The team does a lot of good for the community, the project, and Drupal.org.

Check out some of their accomplishments and if you aren't yet a Drupal Association member, join us! Help us continue the work needed to make Drupal.org better, every day.

Share these stories with others - now until our membership drive ends on March 8.


Thank you for supporting our work!

