Feb 24 2017


One of the problems with Drupal distributions is that they, by nature, contain an installation profile — and Drupal sites can only have one profile. That means that consumers of a distribution give up the ability to easily customize the out of the box experience.

This was fine when profiles were first conceived. The original goal was to provide “ready-made downloadable packages with their own focus and vision”. The out of the box experience was customized by the profile, and then the app was built on top of that starting point. But customizing the out of the box experience is no longer reserved for those of us that create distributions for others to use as a starting point. It’s become a critical part of testing and continuous integration. Everyone involved in a project, including the CI server, needs a way to reliably and quickly build the application from a single command. Predictably, developers have looked to the installation profile to handle this.

This practice has become so ubiquitous, that I recently saw a senior architect refer to it as “the normal Drupal paradigm of each project having their own install profile”. Clearly, if distributions want to be a part of the modern Drupal landscape, they need to solve the problem of profiles.

Old Approach

In July 2016, Lightning introduced lightning.extend.yml which enabled site builders to:

  1. Install additional modules after Lightning had finished its installation
  2. Exclude certain Lightning components
  3. Redirect users to a custom URL upon completion

This worked quite well. It gave site builders the ability to fully customize the out of the box experience via contrib modules, custom code, and configuration. It even allowed them to present users with a custom “Installation Done” page if they chose — giving the illusion of a custom install profile.

But it didn’t allow developers to take full control over the install process and screens. It didn’t allow them to organize their code the way they would like. And it didn’t follow the “normal Drupal paradigm” of having an installation profile for each project.

New Approach

After much debate, the Lightning team has decided to embrace the concept of “inheriting” profiles. AKA sub-profiles. (/throws confetti)

This is not a new idea and we owe a huge thanks to those that have contributed to the current patch and kept the issue alive for over five years. Nor is it a done deal. It still needs to get committed which, at this point, means Drupal 8.4.x.

On a technical level, this means that — similar to sub-themes — you can place the following in your own installation profile’s *.info.yml file and immediately start building a distribution (or simply a profile) on top of Lightning:

base profile:
  name: lightning
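
For example, a sub-profile’s *.info.yml might look like this (the profile name, description, and dependencies below are invented for illustration; the exact keys depend on the patch that finally lands in core):

```yaml
name: 'My Profile'
type: profile
description: 'A distribution built on top of Lightning.'
core: 8.x
# Inherit everything Lightning provides, analogous to a sub-theme's base theme.
base profile:
  name: lightning
# Standard dependencies replace custom extension-selection logic.
dependencies:
  - paragraphs
  - metatag
```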

To encourage developers to use this method, we will also be including a DrupalConsole command that interactively helps you construct a sub-profile and a script which will convert your old lightning.extend.yml file to the equivalent sub-profile.

This change will require some rearchitecting of Lightning itself, mainly to remove the custom extension selection logic we had implemented and replace it with standard dependencies.

This is all currently planned for the 2.0.5 release of Lightning, which is due out in mid-March. Stay tuned for updates.

Feb 24 2017

Another day, another Acquia Developer Certification exam review (see the previous one: Certified Back End Specialist - Drupal 8). I recently took the Front End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on it below.

Acquia Certified Front End Specialist - Drupal 8 Exam Badge

Now that I've completed all the D8-specific Certifications, I think the only Acquia Certification I haven't completed is the 'Acquia Cloud Site Factory' Exam—one for which I'm definitely not qualified, as I haven't worked on a project that uses Acquia's 'ACSF' multisite setup (though I do a lot of other multisite and install profile/distribution work, just nothing specific to Site Factory!). Full disclosure: Since I work for Acquia, I am able to take these Exams free of charge, though many of them are worth the price depending on what you want to get out of them. I paid for the first two that I took (prior to Acquia employment) out of pocket!

Some old, some new

This exam feels very much in the style of the Drupal 7 Front End Specialist exam—there are questions on theme hook suggestions, template inheritance, basic HTML5 and CSS usage, basic PHP usage (e.g. how do you combine two arrays, in what order are PHP statements evaluated... really simple things), etc.

The main difference with this exam centers on the little differences in doing all the same things. For example, instead of PHPTemplate, Drupal 8 uses Twig, so there are questions relating to Twig syntax (e.g. how to chain a filter to a variable, how to print a string from a variable that has multiple array elements, how to do basic if/else statements, etc.). The question content is the same, but the syntax is using what would be done in Drupal 8. Another example is theme hook suggestions—the general functionality is identical, but there were a couple questions centered on how you add or use suggestions specifically in Drupal 8.
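
To illustrate, the kinds of Twig constructs the exam covers look like this (the variable and field names here are invented examples, not from any particular theme):

```twig
{# Chain a filter to a variable. #}
<h2>{{ node.title.value|upper }}</h2>

{# Print a value from a variable that has multiple array elements. #}
{{ content.field_tags }}

{# A basic if/else statement. #}
{% if logged_in %}
  <p>{{ 'Welcome back!'|t }}</p>
{% else %}
  <p>{{ 'Please log in.'|t }}</p>
{% endif %}
```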

The main thing that tripped me up a little bit (mostly due to my not having used it much) is the new JavaScript functionality and theme libraries in Drupal 8. You should definitely practice adding JS and CSS files, and also learn about the differences in Drupal 8's JavaScript layer (things like 'use strict';, how to make sure your Drupal.behaviors are available to your JS library, and the like).
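
As a minimal, stand-alone sketch of the Drupal.behaviors pattern: in a real theme this file would be attached via a *.libraries.yml entry and Drupal core would provide the `Drupal` object and call the attach functions for you; both are stubbed here purely for illustration, and the behavior name is invented.

```javascript
'use strict';

// Stub of the global object that Drupal core normally provides.
const Drupal = { behaviors: {} };

// A behavior: Drupal calls attach() on initial page load and again after
// every AJAX request, passing the affected part of the page as `context`.
Drupal.behaviors.exampleHighlight = {
  attach: function (context) {
    // Real code would operate on DOM elements inside `context`.
    return 'exampleHighlight attached to ' + context;
  }
};

// Simplified stand-in for Drupal.attachBehaviors().
function attachBehaviors(context) {
  return Object.keys(Drupal.behaviors).map(function (name) {
    return Drupal.behaviors[name].attach(context);
  });
}

console.log(attachBehaviors('document'));
```

The key exam-relevant ideas are the strict-mode pragma and that behaviors are objects with an attach() method, re-run on AJAX-updated content rather than only on page load.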

I think if you've built at least one custom theme with a few JavaScript and CSS files, and a few custom templates, you'll do decently on this exam. Bonus points if you've added a JS file that shouldn't be aggregated, added translatable strings in both Twig files and in JS, and worked out the differences between the Stable and Classy themes in Drupal 8 core.

For myself, the only preparation for this exam was:

  • I've helped build two Drupal 8 sites with rather complex themes, with many libraries, dozens of templates, use of Twig extends and include syntax, etc. Note that I was probably only involved in theming work 20-30% of the time.
  • I built one really simple Drupal 8 custom theme for a photo sharing website (closed to the public).
  • I read through the excellent Drupal 8 Theming Guide by Sander Tirez (sqndr).

My Results

I scored an 83.33% (10% better than the Back End test... maybe I should stick to theming :P), with the following section-by-section breakdown:

  • Fundamental Web Development Concepts : 92.85%
  • Theming concepts: 73.33%
  • Templates and Pre-process Functions: 87.50%
  • Layout Configuration: 66.66%
  • Performance: 100.00%
  • Security: 100.00%

I'm not surprised I scored worst in Layout Configuration, as there were some questions about defining custom regions, overriding region-specific markup, and configuring certain things using the Breakpoint and Responsive Image modules. I've done all these things, but only rarely: you generally set up breakpoints only when you initially build the theme (and I've only done this once), and I only deal with responsive images for a few specific image field display styles, so I don't use them enough to remember certain settings.

It's good to know I keep hitting 90%+ on performance and security-related sections—maybe I should just give up site building and theming and become a security and performance consultant! (Heck, I do a lot more infrastructure-related work than site-building outside of my day job nowadays...)

This exam was not as difficult as the Back End Specialist exam, because Twig syntax and general theming principles are very consistent from Drupal 7 to Drupal 8 (and dare I say better and more comprehensible than in the Drupal 7 era!). I'm also at a slight advantage because almost all my Ansible work touches on Jinja2, which is the templating system that inspired Twig—in most cases, syntax, functions, and functionality are identical... you just use .j2 instead of .twig for the file extension!

Feb 24 2017

Drupal 8.3.0 release candidate phase

The release candidate phase for the 8.3.0 minor release begins the week of February 27. Starting that week, the 8.3.x branch will be subject to release candidate restrictions, with only critical fixes and certain other limited changes allowed.

8.3.x includes new experimental modules for workflows, layout discovery and field layouts; raises stability of the BigPipe module to stable and the Migrate module to beta; and includes several REST, content moderation, authoring experience, performance, and testing improvements among other things. You can read a detailed list of improvements in the announcements of alpha1 and beta1.

Minor versions may include changes to user interfaces, translatable strings, themes, internal APIs like render arrays and controllers, etc. (See the Drupal 8 backwards compatibility and internal API policy for details.) Developers and site owners should test the release candidate to prepare for these changes.

8.4.x will remain open for new development during the 8.3.x release candidate phase.

Drupal 8.3.0 will be released on April 5th, 2017.

No Drupal 8.2.x or 7.x releases planned

March 1 is also a monthly core patch (bug fix) release window for Drupal 8 and 7, but no patch release is planned. This is also the final bug fix release window for 8.2.x (meaning 8.2.x will not receive further development or support aside from its final security release window on March 15). Sites should plan to update to Drupal 8.3.0 on April 5.

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

Feb 24 2017

In this blog post I want to round up the work we have done on refactoring the acl_contact_cache. In previous sprints we discovered that a lot of performance was lost because of the way the acl_contact_cache was used (or rather, not used at all); see also the previous blog post.

At the Socialist Party they have 350,000 contacts and around 300 users who can access CiviCRM. Most of the users are only allowed to see the members in their local chapter.

In the previous blog post we explained the proof of concept. We have now implemented it, and the average performance increase was 60%.

We created a table which holds which user has access to which contacts, and we fill this table once every few hours. See issue CRM-19934 for the technical implementation of this proof of concept.

Performance increase in the search query

In the next examples we are logged in as a local member who can only see members in the chapter Amersfoort. We then search for persons named 'Jan' and measure how long the search query takes.

The query behind the list of letters in the search result looks like this:

SELECT count(DISTINCT contact_a.id) as rowCount
FROM civicrm_contact contact_a
LEFT JOIN civicrm_value_geostelsel geostelsel ON contact_a.id = geostelsel.entity_id
LEFT JOIN civicrm_membership membership_access ON contact_a.id = membership_access.contact_id
WHERE ((((contact_a.sort_name LIKE '%jan%'))))
AND (contact_a.id = 803832
  OR ((((
    ( geostelsel.`afdeling` = 806816 OR geostelsel.`regio` = 806816 OR geostelsel.`provincie` = 806816 )
    AND (
      membership_access.membership_type_id IN (1, 2, 3)
      AND (
        membership_access.status_id IN (1, 2, 3)
        OR (membership_access.status_id = '7' AND (membership_access.end_date >= NOW() - INTERVAL 3 MONTH))
      )
    )
  ))))
  OR contact_a.id = 806816
)
AND (contact_a.is_deleted = 0)
ORDER BY UPPER(LEFT(contact_a.sort_name, 1)) asc;

As you can see, that is quite a complicated query, and it includes details about which members the user is allowed to see. Executing this query alone takes around 0.435 seconds, because MySQL has to check each record in civicrm_contact (which in this case is around 350,000 and growing).

After refactoring the acl cache functionality in CiviCRM Core the query looks different:

SELECT DISTINCT UPPER(LEFT(contact_a.sort_name, 1)) as sort_name  
FROM civicrm_contact contact_a 
INNER JOIN `civicrm_acl_contacts` `civicrm_acl_contacts` ON `civicrm_acl_contacts`.`contact_id` = `contact_a`.`id`  
WHERE  (((( contact_a.sort_name LIKE '%jan%' ))))  
AND  `civicrm_acl_contacts`.`operation_type` = '2' 
AND `civicrm_acl_contacts`.`user_id` = '803832' 
AND `civicrm_acl_contacts`.`domain_id` = '1' 
AND (contact_a.is_deleted = 0)    
ORDER BY UPPER(LEFT(contact_a.sort_name, 1)) asc

The query now takes around 0.022 seconds to run (20 times faster).


How does this new functionality work?

1. Every time an ACL restriction is needed in a query, CiviCRM core only does an inner join on the civicrm_acl_contacts table, and that is all.

2. The inner join is generated by the 'acl_contact_cache' service, which also checks whether the civicrm_acl_contacts table needs to be updated.

3. How often the civicrm_acl_contacts table is updated depends on the setting under Administer --> System Settings --> Misc --> ACL Contact Cache Validity (in minutes).

So what does this look like in code?

Below is an example of how you could use the acl_contact_cache service to inject ACL logic into your query:

// First get the service from the Civi Container
$aclContactCache = \Civi::service('acl_contact_cache'); // The $aclContactCache is a class based on \Civi\ACL\ContactCacheInterface
// Now get the aclWhere and aclFrom part for our query
$aclWhere = $aclContactCache->getAclWhereClause(CRM_Core_Permission::VIEW, 'contact_a');
$aclFrom = $aclContactCache->getAclJoin(CRM_Core_Permission::VIEW, 'contact_a');

// Now build our query
$sql = "SELECT contact_a.* FROM civicrm_contact contact_a ".$aclFrom." WHERE 1 AND ".$aclWhere;
// That is it now execute our query and handle the output...

The reason we use a service in the Civi container is that this makes it quite easy to override this part of core in your own extension.

The \Civi\ACL\ContactCache class has all the logic for building the ACL queries, meaning that this class contains the logic to interact with the ACL settings in CiviCRM, with permissioned relationships, etc. All those settings are taken into account when filling the civicrm_acl_contacts table, which is done per user and per operation once every three hours.

Feb 24 2017

If you want to create a web resource, or to implement certain improvements on an existing one, you must find specialists who will make your ideas come true. That is quite a challenging task. We offered our 10 tips for hiring a top web developer by listing the qualities such a developer should possess. In addition, however, there are different types of web developers, with different skills, duties and types of work, and the objectives and scale of your project require the appropriate experts. It can be a little complicated to define exactly who you need, especially if you aren't a developer yourself. We have already helped you see the difference between front-end and back-end development, and now we are going to help you distinguish between the main types of Drupal developers.

Site builder

Programming is what most developers deal with: they receive a task with requirements and write the appropriate code. In Drupal, an open source CMS with a large community, writing custom code is not obligatory. A great, innovative and feature-rich website can be built with only existing core and contributed modules. To solve a particular problem, Drupal site builders should have a deep understanding of modules, plugins and extensions, know their pros and cons and how they work together, and be aware of all their updates and upgrades. They must know how to use and configure all of Drupal's potential to provide a wide spectrum of functionality for a usable and accessible web resource.

Theme developer

Drupal themes are responsible for the appearance of the site. They don't influence the functionality much, but they do help attract visitors and convince them to stay, if the themes are pleasant to view and easy to navigate. Themers are front-end developers who specialize in graphic design. In Drupal, the content and structure of the site don't depend on the theme, and remain unchanged when switching themes.

There are lots of free responsive Drupal themes. But if you want something special, a theme developer can either design a custom theme for you from scratch or create a subtheme, basing it on off-the-shelf elements and customizing the existing ones. These specialists should be proficient in HTML and CSS, and sometimes in PHP and JavaScript, to turn a design into a working theme.

Module developer

If you need a custom theme, you should hire a theme developer. But if you need a custom module, then you should hire a module developer. If some functionality for your website can’t be created with the help of the existing Drupal modules, then these back end professionals are able to develop a personalized module specially for your site that fits your demands and wishes.

Sure, these three main types can be subdivided or supplemented with various other types. But we want to keep it simple, especially because Drupal developers usually possess many skills at once and can belong to several types at a time. True Drupal professionals are interested in continually learning something new, extending their knowledge and enriching their skills in order to handle more tasks. Our company allows you to hire any type of Drupal developer you need, or a team of them from our experienced experts. Contact us to discuss the specifics of your project.

Feb 24 2017

Drupal 8 integrating with Traditional ECMs to enhance Enterprise Content Management Capabilities

“Shifting business requirements for digital content and new technologies are changing the ECM market. By 2018, 50% of enterprises will manage their content using a hybrid content architecture.” - Gartner, Magic Quadrant for Enterprise Content Management, 2016

Drupal 8, a strong platform with a strong Web Content Management System, has a role to play in integrating with existing challengers/traditional ECMs to enhance their enterprise content management capabilities.

Some of the key Web Content Management Features of Drupal that can be leveraged to provide this integrated solution include:

  1. Rich Content Management Tools
  2. Responsive Layout and Design
  3. Social Media Tools & Search Engine Optimization
  4. Integration Capabilities - RESTful APIs Support
  5. Multi Domain Capabilities
  6. Multi Lingual Capabilities
  7. E-Commerce Capabilities (the ability to handle different types of domain content, such as e-commerce, newspapers, etc.)

Rich Content Management Tools

Given that a large amount of today's content is dynamic HTML with a need for multimedia support, Drupal's content management tools can be taken advantage of.

A sitemap needs different types of pages, broadly divided into landing pages, individual pages and functional pages. This can be easily achieved using out-of-the-box content modelling tools in Drupal such as content types, Views and callback functions. The flexibility for the editor to create dynamic landing pages with variants in UI (image and text combinations, layout variants) is possible by extending the content type interface.

Some of the key content management features include:

Rich Editorial Interface

  1. Interfaces with which editors can easily create and edit content. Drupal’s default content creation interface would be configured / customized to ensure that editors, depending on their roles and the categories of content they manage, can easily create content.
  2. When you need to make quick changes, choose in-context editing, better previews and drag-and-drop image uploads.
  3. The editors would be provided with a WYSIWYG editor with which content can be formatted interactively, courtesy of Aloha Editor. Edit your rich text with your theme's direct styling through the inline editor. It even works with images and captions, links directly to content in the site, and has basic support for tokenized strings.

Multimedia Asset Creation - Images and Videos

  1. Videos can be either uploaded to the site or embedded through a third-party site.
  2. Images library can be maintained within the CMS that can be accessed / used across articles. The EXIF data of the images can be read and stored in the system.

Content Publishing Workflow

Content publishing workflow allows a multi-stage publishing process involving authors, reviewers / publishers. Depending on the type of content and the publishing process, suitable workflows can be created.

Advanced Taxonomy Management

An advanced taxonomy system allows categorization of content at a granular level. The hierarchical taxonomy provides a flexible means to create a content structure that well represents the various categories and sections within the site.

Content Promotion and Sequencing

Drupal’s node-queue system could be customized to provide Editors with a flexible interface to manage content promotion and their sequencing. For example, Editor would be able to manually promote content to the home page as “headlines” and sequence them based on importance.

Management of Social Media Posts

Editor would be able to moderate the content that would be posted in the social media platform, wherever automatic publishing provision is available.

Layout Management

Editors would be able to manage the page layouts in terms of bringing in new blocks, repositioning blocks, selecting content for the blocks, etc. These aspects would need to be discussed during the design phase.

Responsive Layout Builder, courtesy of the Layout and Gridbuilder modules. You can configure layouts for separate breakpoints (e.g. Mobile, Tablet, Desktop) and even define your own grids for them to snap to.

The Panels module allows a site administrator to create customized layouts for multiple uses. At its core it is a drag and drop content manager that lets you visually design a layout and place content within that layout.

E-Newsletters Creation and Management

Editors could compose newsletters by picking content from various sections. Alternatively, the system can be configured to automatically compose newsletters by picking the latest news headlines, most-read articles, or the latest images/videos from the gallery. The reader can choose the appropriate categories to include in the newsletter. The send-out of newsletters can be integrated with a standard third-party bulk mailing system.

Responsive Design

With the mobile revolution, readers prefer consuming information on the go on handheld devices. Hence the website should adapt itself to the device on which it is viewed and present unified branding.

Responsive design would be adopted with the mobile reader in mind, bringing focus to the most important and relevant content. The design moves away from throwing up lots of needless information to presenting what a reader actually needs, and usability and performance become important aspects when optimizing designs for a mobile reader. It is therefore not enough for a website to work on all devices; to be responsive, it needs to adapt optimally to the device's screen size, bandwidth and resolution.

Drupal supports building responsive themes and is compatible with the key concepts of responsive design, including the definition of breakpoints, integration with Modernizr and additional JavaScript libraries, and support for responsive images, videos and slideshows.

Additional powerful responsive features include a mobile-first UI, a mobile-enabled editor interface, a mobile-friendly admin toolbar, and responsive preview.

Drupal supports building customized user interfaces. The User Interface templates would be themed using the HTML / CSS created.

Social Media Tools & Search Engine Optimization

Sharing / Bookmarking

Content can be shared in the popular social media sites. This would facilitate “Viral Marketing” and spread the brand. Social share features are readily available as contributed modules in Drupal.

Social Media Widgets

Drupal has support for popular social media widgets like Facebook, Google Plus, Twitter, Youtube and more.

Search Engine Optimization

The website should support different search engine optimization techniques. These include:

  1. Creation of different sitemaps
  2. Meta tagging capabilities
  3. Support for keywords
  4. Optimized HTML structure and page speed

Integration Capabilities - RESTful APIs Support

Using Standard APIs to integrate with existing Traditional ECM solutions like Alfresco is a possibility.

REST is one of the most popular ways of making Web Services work. REST utilizes HTTP methods, such as GET, POST and DELETE. Support for RESTful APIs and an API first approach makes integrations easier than ever. These integrations can be presented as views.

RESTful web services in Drupal 8 core include:

  1. Serialized entities using HAL
  2. An HTTP Basic authentication process
  3. Exposing entities and other resources as RESTful APIs
  4. Services to (de)serialize to/from JSON/XML

Multi Domain Capabilities

Drupal supports different techniques to manage Multi portal architecture:

  1. Single code base varying databases - multi domain
  2. Single code base, single database – multi domain

Multi Lingual Capabilities

Drupal supports any language with built-in translation interfaces.

E Commerce Capabilities

Drupal’s Commerce Modules help build sites with Ecommerce capabilities. Some key features include:

  1. Create product types with Custom Attributes
  2. Dynamic Product Displays
  3. Order Management, line item
  4. Payment method API, allowing many different payment gateways
  5. Tax calculation / VAT support
  6. Discount pricing rules
  7. Advanced Product Search Interfaces

Integrating Web CMS like Drupal would provide the following benefits:

  1. Speed to Market - Faster Publishing of content
  2. Customer Experience Improvements by bringing a Uniform, Consistent Experience across the Different Channels
  3. Operational Efficiencies by bringing technologies that would assist in facilitating content publishing with minimal or zero support from technical team
  4. Use of single code base to manage multiple platforms to simplify management of code base
  5. Process Improvement by bringing in workflows and version control for Business approvals/compliance
Feb 24 2017

In Drupal 7 we used the Node Hierarchy module to keep track of a hierarchy of pages. Node Hierarchy ties directly into the menu system. Getting a list of all ancestors or descendants is an O(n) operation, and at least one site we use it on has a lot of nodes in the tree, so performance was terrible. Add to that, it has no notion of revisions or forward revisions, so changing the parent and saving a draft can cause all sorts of issues with your menu.

When the time came to update the site to Drupal 8, we took a different approach.

Nested Sets

The performance issues with Drupal 7 Node Hierarchy are due to the data structures being used to store the tree. We decided to dust off the old computer science textbooks, and look up the chapters on tree storage, and see what options we had.

Currently the data structure used to represent a tree is a Linked List where the table stores only three values:

  • ID
  • Parent ID
  • Weight (optional)

This means that to find all descendants of a node, we query for entries with that node's ID stored as the parent ID. Once we get those, we do another query using each result's ID as the parent ID. Wash, rinse, repeat. You get the idea. For very large tables, querying for descendants or ancestors is very inefficient: O(n) in Big O notation.
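
As a stand-alone sketch of this wash-rinse-repeat pattern (the in-memory rows and names are invented for illustration; a real site issues one database query per level):

```javascript
// Adjacency-list style storage: each row holds only an ID and a parent ID.
const rows = [
  { id: 1, parentId: null },
  { id: 2, parentId: 1 },
  { id: 3, parentId: 1 },
  { id: 4, parentId: 2 },
];

// "Query" for direct children of a node, then recurse into each child.
// In the worst case this touches every node: O(n) lookups.
function descendants(id) {
  const children = rows.filter(r => r.parentId === id).map(r => r.id);
  return children.concat(...children.map(descendants));
}

console.log(descendants(1)); // [ 2, 3, 4 ]
```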

Nested Sets represent the data in a different way. They use a table which includes:

  • ID
  • Left Position
  • Right Position
  • Depth (optional)

These left and right positions represent the set of all children contained within. The following diagram shows how this works.

[Diagram: nested set model of a clothing hierarchy. By Sherahm; derivative work by 0x24a537r9 (Nestedsetmodel.jpg), Public Domain.]

For Suits in the example above, we store a left position of 3 and a right position of 8. Any child elements must have a left position greater than 3 and a right position less than 8, as Slacks and Jackets do.

The benefit of storing information in this way becomes obvious when we need to find all descendants: we just query for rows whose left position is greater than the node's left and whose right position is less than the node's right. All in a single query, even for thousands of nodes. That's a single O(1) query instead of O(n) queries, a massive improvement.
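
As a sketch, finding every descendant of Suits (left = 3, right = 8) is one query. The table and column names here are invented for illustration; the real schema lives in the nested-set library:

```sql
-- All descendants of Suits: rows strictly inside its (left, right) interval.
SELECT child.id, child.title
FROM nested_set child
WHERE child.left_pos > 3
  AND child.right_pos < 8;
```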

Updates, on the other hand, are slow. When we need to insert, delete or move a node, we potentially have to update all nodes in the tree, which is an expensive and slow database operation. However, given that in our case this only happens when content editors make changes to the hierarchy, the tradeoff is well worth it compared to the far greater number of queries made by end users.

Decoupling the Model from the Framework

Within PreviousNext our preferred approach to Drupal development is to start by modeling the domain logic in plain old PHP classes, then add Drupal wrappers and integration around it. Commonly known as hexagonal architecture or ports and adapters, this ensures our code is focussed on business rules and is easier to test and maintain. We will get into the details of this in a future post!

While thinking about how we could improve on Node Hierarchy for Entity Hierarchy in Drupal 8, we wanted to take the ‘separate the model from the framework’ approach and built a library that just deals with Nested Sets.

This library is completely decoupled from Drupal. There is no reference to any Drupal code in the code base. Instead of trying to work with Drupal’s database abstraction layer, (and making all of Drupal a dependency) we chose to use Doctrine DBAL as the database abstraction layer because of the simple API, the code maturity and the community around it.

We focussed on using PHP interfaces to decouple implementation, and a high level of testing to have confidence we are keeping data integrity.

We then went on to develop the Drupal 8 module for Entity Hierarchy, which requires the nested-set library. In order to provide the DBAL database connection it expects, we wrote a simple factory which takes the Drupal database connection and returns a DBAL one, called DBAL Connection.

Entity Hierarchy module provides a new field-type that extends from Entity Reference. To use it you setup a new Entity Reference Hierarchy field on the child bundles and configure it to reference valid parent bundles. For example, you may have a section content type. Under this may live articles and events. To configure this sort of hierarchy, you create a new entity-reference hierarchy field on the article and event content type called Parents and configure it to allow a single reference to a section.

The field comprises the standard entity-reference autocomplete and select widgets, but also comes with a weight field which editors can use in a similar fashion to menu weights; this allows you to nominate the order of children in the tree.

When you update the child entities by changing the parent or the weight, the entity-reference hierarchy field type takes care to update the nested set.

Once you have entered your data and have your entities in a tree structure, you can then use the views integration to filter and order the tree.

For example, you could create a view with a contextual filter for 'Is a child of' and limit it to children and grandchildren. You could then embed this view on the parent, taking the entity ID from the URL as the contextual filter value. This would allow you to display children, grandchildren etc on the parent page.

The Future

Now that we have a proof of concept, our goal is to get Entity Hierarchy to a stable release, and have the rest of the Drupal community start using it and providing feedback (and fixes!). To this effect, we've released 8.x-2.0-alpha1 - please take it for a spin and use the issue queue to report any issues you encounter.

Looking further ahead, there is no reason this approach could not be used to replace Drupal’s existing Menu and Taxonomy hierarchies too.

At present the only formatters in the module just extend the standard Entity Reference ones in core. Our plan is to add a formatter that lets you configure how high in the hierarchy to traverse. This would allow you to have a formatter that showed fields from the root entity in the tree (multiple roots are possible). So returning to the section example, this would allow you to add a 'section image' field to the section, but have that display on any child articles or events, by way of the parent formatter. Follow along with development of that feature in the issue queue.

Let us know what you think in the comments!


Co-authored by Lee Rowlands.

Feb 23 2017
Feb 23

When thinking about ways to measure your website’s effectiveness, you may also want to think about the metrics you use to gauge the success of the website in accomplishing your business goals. How else do you measure success?

If you’ve determined that your website drives traffic and revenue, especially for e-commerce – congratulations, your metrics are built in! If you use new customer acquisition as a metric for success, then this gets a little bit tricky. If you have many marketing channels, it can be hard to determine how much comes from any single source – but driving traffic to your web site can make it easier to measure the effectiveness of different campaigns. We also use Piwik in-house, which is great traffic analytic software we can plug into your website that’s easy to use, easy to report against and less confusing than Google Analytics.

Organizations may also measure success by establishing that their website provides information to their clients and is easily managed by internal personnel. If this is the case and already happening – perfect! If it isn’t, then with a little bit of planning, we can get this going in as little as one day with Drupal. Then, you could have a situation where you are saying to yourself, “We have mountains of data on our website and need an easy way to manage it.” If this is the case, we have some powerful tools for organizing, searching, and managing mountains of content – in the tens of thousands to hundreds of thousands of items. If you're talking about millions or more, you might need a Big Data solution…

Finally, many organizations rely on strong Customer Relationship Management (CRM) software and customer engagement tools to measure success. Corporations are constantly engaging leads, prospects, etc. to increase customer retention, track spending per order, increase new customer referrals, and so much more. Freelock does integrations all the time, but if your needs are modest, Drupal can do this entirely, without the need for another system! However, if you already have a system you are using outside of Drupal, it is quite possible to integrate that system with your website – for instance to report e-commerce sales from the website back to your CRM system.

Many may be cautious when selecting a vendor for work on their website or back-end software systems, and for good reason! We like to ask: what characteristics are most important to you when selecting a vendor? If you or your organization is most concerned about cost, this can be both a good and a bad thing. Quite often, a prospective client who initially reaches out to us with cost concerns has previously used a one-person freelance developer. The reason they typically reach out to us is that the previous developer either did a shoddy job or completely fell off the face of the earth. In either case, this isn’t helpful to a client who has spent thousands of dollars on that poor work. Then, once Freelock takes on the job, we’ll see terrible development practices and hacked modules – all big red flags. We offer great value to our clients, but we’re not low cost. However, we always find a way to work with a client’s budget and work towards those set goals and expectations.

If you’re not so concerned with cost, but your preference is to work with a vendor who has a long history of expertise, Freelock is a great fit. Freelock’s principal, John Locke, has been building websites since 1996, and Freelock has been in business for 15 years. Founded in 2002, we are innovators and leaders in the Drupal development community, we are abreast of cutting edge offerings for the platform, and can offer that breadth of knowledge to our clients in order to meet existing business needs and anticipate future requirements.

Oftentimes, a client needs to be much more specific when looking for a vendor and wants to find one that is most experienced with their preferred content management system (CMS). While Freelock’s currently preferred CMS is Drupal, we’re also steadily taking on more clients with WordPress. Over the years, Freelock has also worked with Joomla, ZenCart, OSCommerce, and many custom PHP and Javascript frameworks. Since 2009, we've worked primarily with Drupal, because it can do what all those other systems can do, and we can get it done at lower cost. Every software framework has its own tradeoffs, but we have deep experience helping clients choose wisely, and it’s important to take those tradeoffs into consideration when deciding what best fits your specific needs.

If, when deciding on a vendor, your project is so large with so many moving parts that you’re really concerned about the size of staff needed to deliver a viable product by the deadlines, then quite honestly you’re in a great situation. While we're a small team, we always deliver. We've rescued countless projects where other teams have failed, and carried them to completion. If you look at our client portfolio, you’ll notice that we work with everything from very large government and healthcare organizations, to mid-sized non-profits, to small private practices... and everything in between. Not 100% of projects have launched on time, but we’re personal, responsive, and always hands-on. Most project delays are due to a lack of client responsiveness – because hey, vendors do love to get paid, so it doesn’t make much sense to be unresponsive! On the other hand, if you’d rather just choose a name out of a hat, we have a recommendation: Freelock, Freelock, Freelock!

Feel free to contact us here, email us directly at [email protected], or call us at (206) 577-0540!

Feb 23 2017
Feb 23

Thirsty for Drupal knowledge? Want to dive deep into a topic and learn from the best in the field? Like to get hands-on with your learning material? We are excited to offer 10 full-day training classes at DrupalCon Baltimore that will turn you into a Drupal superhero. No matter if you are an absolute beginner or Drupal expert, our classes cover all experience levels.

Our world-class Drupal trainers are eager to share their knowledge in what may be our most diverse line-up yet. Check out brand-new classes like Evolving Web's Content Strategy for Drupal or explore how to build interactive applications using Drupal 8 data in Four Kitchen's API First training.

Not surprisingly, a strong emphasis will be placed on what you need to know about Drupal 8. For example, get up to speed on Drupal 8 Module Development with DrupalEasy. But "in with the new" doesn’t necessarily mean "out with the old." We’re happy to have Zivtech returning with an all-time favorite, the Drupal DevOps training. Check out the full line-up to find the right class for you.

View All Training Courses

All courses are held on Monday, April 24, 9:00 a.m. - 5:00 p.m. Trainings are not included in a regular DrupalCon ticket and require a separate registration. You can save $50 if you purchase your training ticket at the early-bird rate of $450 by March 24. Light breakfast, lunch and coffee breaks are included with every training.

Our training courses are small by design, to provide attendees with plenty of one-on-one time with the instructors. However, each class must meet a minimum number of attendees by April 10 in order for the course to run. Help ensure your training class takes place by registering before April 10 - and remind friends and colleagues to attend.

Register Now

Feb 23 2017
Feb 23

DrupalCon is brought to you by the Drupal Association with support from an amazing team of volunteers. Powered by COD, the open source conference and event management solution. Creative design and implementation by Cheeky Monkey Media.

DrupalCon Baltimore is copyright 2016. Drupal is a registered trademark of Dries Buytaert.

Feb 23 2017
Feb 23

The Drupal Association Engineering Team delivers value to all who are using, building, and developing Drupal. The team is tasked with keeping the main website and all of its 20 subsites and services up and running. Their work would not be possible without the community, and the project would not thrive without close collaboration. This is why we are running a membership campaign all about the engineering team. These are a few of the recent projects where engineering team + community = win!

Want to hear more about the work of the team, rather than read about it? Check out this video from 11:15-22:00 where Tim Lehnen (@hestenet) talks about the team's recent and current work.

Leading the Documentation System migration

We now have a new system for Documentation. These are guides Drupal developers and users need to effectively build and use Drupal. The new system replaces the book outline structure with a guides system, where a collection of pages with their own menu are maintained by the people who volunteer to keep the guides updated, focused, and relevant. Three years of work from the engineering team and community collaborators paid off. Content strategy, design, user research, implementation, usability testing and migration have brought this project to life.

Pages include code 'call-outs' for point-version specific information or warnings.

Thanks to the collaborators: 46 have signed up to be guide maintainers, the Documentation Working Group members (batigolix, LeeHunter, ifrik, eojthebrave), to tvn, and the many community members who write the docs!

Enabling Drupal contribution everywhere

Helping contributors is what we do best. Here are some recent highlights from the work we're doing to help the community:

Our project to help contributors currently in development is revamping the project applications process. More on this soon on our blog.

When a community need doesn't match our roadmap

We have a process for prioritizing community initiatives so we can still help contributors. Thanks to volunteers who have proposed and helped work on initiatives recently, we've supported the launch of the Drupal 8 User Guide and the ongoing effort to bring Dreditor features into the site itself.

Thanks to the collaborators: jhodgdon, eojthebrave, and the contributors to the user guide. Thanks also to markcarver for the Dreditor effort.

How to stay informed and support our work.

The change list and the roadmap help you to see what the board and staff have prioritized out of the many needs of the community.

You can help sustain the work of the Drupal Association by joining as a member. Thank you!

Feb 23 2017
Feb 23

In a traditional Drupal site, you don’t need to handle authentication yourself, because Drupal handles everything: it gets a cookie, sets the session, handles errors, and so on. But what about decoupled (headless) Drupal sites? How can we authenticate the user in a decoupled setup?

Before diving into this, we need to understand the authentication types provided by RESTful:

  1. Cookie - Validating the user cookie is not something new for us. We have been doing it for years, and it’s one of the first techniques web developers acquire. But to validate the request, we also need to pass a CSRF token. This token helps make sure the request was not forged. An example could be a form that tweets on our behalf on Twitter: the presence of a valid CSRF token in the request ensures a scammer could not forge the form and tweet a photo of a cat when you’re a dog person.

  2. Access token - RESTful will generate an access token and bind it to the user. Unlike the cookie, which needs a CSRF token to be validated by RESTful, we get a two-for-one deal: the access token in the request is verified, and it also references the user it represents.

Important: in order to use access token authentication, you’ll need to enable the RESTful token authentication module (a submodule of RESTful).


Below, I show how an access token is generated using Angular JS. If the authentication process succeeds, the endpoint will return an object with 3 values:

  1. access_token - This is the token which represents the user in any request.

  2. expires_in - the number of seconds for which the access token is valid.

  3. refresh_token - Once the token is no longer valid, you’ll need to ask for a new one using the refresh token.

You can see below a small amount of Angular JS code:

$http.get('http://YOURDRUPAL.COM/api/login-token', {
  headers: {
    'Authorization': 'Basic ' + Base64.encode(username + ':' + password)
  }
})
.success(function(data) {
  localStorageService.set('access_token', data.access_token);
});

And this is what you’ll get back:

{
  "access_token": "Y3wQua-qFY-muksrePaLqKdNmlGdBQK4dly-UhlJcYk",
  "type": "Bearer",
  "expires_in": 86400,
  "refresh_token": "xRP-nnKA05GGsN-jr80Z_hfPHqrkpwtAtevDSeRfbYU"
}



As mentioned above, the access token is only valid for a specific amount of time, usually 24 hours, so you’ll need to check it before the request:

if (new Date().getTime() > localStorageService.get('expire_in')) {
  var refresh_token = localStorageService.get('refresh_token');
  $http.get('http://YOURDRUPAL.COM/refresh-token/' + refresh_token)
  .success(function(data) {
    localStorageService.set('access_token', data.access_token);
    localStorageService.set('refresh_token', data.refresh_token);
    // expires_in is in seconds; convert to milliseconds before comparing with getTime().
    localStorageService.set('expire_in', new Date().getTime() + data.expires_in * 1000);
  });
}


OK, so we got the access token and can refresh it when it’s no longer valid. The next thing you need to know is how to inject the access token into the request headers:

$'http://YOURDRUPAL.COM/api/article',
  { 'label': 'zhilevan' },
  { headers: { 'access-token': localStorageService.get('access_token') } }
)
.success(function(data) {
  console.log('Now you can POST a node (check your permissions before trying that)');
});

You can have a look at Gizra's yo hedley generator, which scaffolds a headless Drupal backend, to see how they implemented an HTTP interceptor to improve the process shown above.

Additional Resources:


Feb 23 2017
Feb 23

Since its relaunch in 2015, the Drupal 7 powered site has been gaining popularity among artists and design students, becoming their go-to platform. To date, design students have uploaded over 700 portfolios providing guidance to enrolling candidates. These portfolios are linked to over 500 art faculties of hundreds of universities.

Before enrolling in a course, a candidate can research their local university and study other students' portfolios or enroll in their local design course to prepare for the entry tests - all of it on

On top of that, students provide and collect support on the forum, which boasts over 20,000 users who have written nearly 250,000 posts. This may be the biggest and most beautiful forum built on top of Drupal core.

The most powerful feature, however, may be the ability for guests to create most of the site's content without having to go through any type of registration process. Visitors can go ahead and correct their school's information just by clicking 'edit'. Likewise, anyone can write a blog post - no account or personal information needed. We think this technology has massively contributed to the quantity and quality of content on the site.

While the number of design students, universities and art schools registering with the platform has been growing steadily, the visionaries behind the project, Ingo Rauth and Wolfgang Zeh from projektgestalten, recently decided to take the platform to the next level by bringing it to jobseekers and providers as well. Consequently, gbyte has implemented the event functionality and the new job board.

This is not going to be the last improvement though, apparently artists have lots of creative ideas and we look forward to implementing them. We feel that this project is a great showcase of Drupal's possibilities and if you would like to learn more about the project or its implementation, make sure to leave a comment below or contact us via the contact form.

Check out other technology-centric posts about the project as well as more screenshots on the project page.

Feb 22 2017
Feb 22

I’ve written previously about git workflow for working on patches, and about how we don’t necessarily need to move to a github-style system on; we just maybe need better tools for our existing workflow. It’s true that much of it is repetitive, but then repetitive tasks are ripe for automation. In the two years since I released Dorgpatch, a shell script that handles the making of patches for issues, I’ve been thinking about how much more of the patch workflow could be automated.

Now, I have released a new script, Dorgflow, and the answer is just about everything. The only thing that Dorgflow doesn’t automate is uploading the patch to (and that’s because’s REST API is read-only). Oh, and writing the code to actually fix bugs or create new features. You still have to do that yourself, along with your cup of coffee.

So assuming you’ve made your own hot beverage of choice, how does Dorgflow work?

Simply! To start with, you need to have an up to date git clone of the project you want to work on, be it Drupal core or a contrib project.

To start work on an issue, just do:

$ dorgflow

You can copy and paste the URL from your browser. It doesn’t matter if it has an anchor link on the end, so if you followed a link from your issue tracker and it has ‘#new’ at the end, or clicked down to a comment and it has ‘#comment-1234’ that’s all fine.

The first thing this command does is make a new git branch for you, using the issue number and title. It then downloads and applies all the patch files from the issue node, making a commit for each one. Your local git history now shows you the history of the work on the issue. (Note though that if a patch no longer applies against the main branch, it’s skipped, and if a patch has been set to not be displayed on the issue’s file list, it’s skipped too.)

Let’s see how this works with an actual issue. Today I wanted to review the patch on an issue for Token module. The issue URL is So I did:

$ dorgflow

That got me a git history like this:

  * 6d07524 (2782605-Move-list-of-available-tokens-from-Help-to-Reports) Patch from Comment: 35; URL:; file: token-move-list-of-available-tokens-2782605-34.patch; fid 5784728. Automatic commit by dorgflow.
 * 6f8f6e0 Patch from Comment: 15; URL:; file: 2782605-13.patch; fid 5710235. Automatic commit by dorgflow.
* a3b68cc (8.x-1.x) Issue #2833328 by Berdir: Handle bubbleable metadata for block title token replacements
* [older commits…]

What we can see here is:

  • Git is now on a feature branch, called ‘2782605-Move-list-of-available-tokens-from-Help-to-Reports’. The first part is the issue number, and the rest is from the title of the issue node on
  • Two patches were found on the issue, and a commit was made for each one. Each patch’s commit message gives the comment index where the patch was posted, the URL to the comment, the patch filename, and the patch file entity ID (these last two are less interesting, but are used by Dorgflow when you update a feature branch with newer patches from an issue).

The commit for patch 35 will obviously only show the difference between it and patch 15 (effectively an interdiff). To see what the patch actually contains, take a diff from the master branch, 8.x-1.x.

(As an aside, the trick to applying a patch that’s against 8.x-1.x to a feature branch that already has commit for a patch is that there is a way to check out files from any git commit while still keeping git’s HEAD on the current branch. So the patch applies, because the files look like 8.x-1.x, but when you make a commit, you’re on the feature branch. Details are on this Stack Overflow question.)

At this point, the feature branch is ready for work. You can make as many commits as you want. (You can rename the branch if you like, provided the ‘2782605-’ part stays at the beginning.) To make your own patch with your work, just run the Dorgflow script without any argument:

$ dorgflow

The script detects the current branch, and from that, the issue number, and then fetches the issue node from to get the number of the next comment to use in the patch filename. All you now have to do is upload the patch, and post a comment explaining your changes.
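The naming step described above boils down to something like this (a hypothetical sketch with made-up names; Dorgflow's actual logic lives in the script itself):

```javascript
// Hypothetical sketch of deriving a patch filename: a patch uploaded now
// will land on the next comment of the issue, so that index goes in the name.
function patchFilename(issueNumber, lastCommentIndex) {
  return issueNumber + '-' + (lastCommentIndex + 1) + '.patch';
}

var name = patchFilename(2782605, 35);
// name is '2782605-36.patch'
```

This matches the convention of naming patches after the comment they are posted on, which is why the script needs to fetch the issue node before rolling the patch.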

Alternatively, if you’re a maintainer for the project, and the latest patch is ready to be committed, you can do the following to put git into a state where the patch is applied to the main development branch:

$ dorgflow commit

At that point, you just need to obtain the git commit command from the issue node. (Remember the standard Drupal git message format, and check that the attribution for the work on the issue is correct!)

What if you’ve previously reviewed a patch, and now there’s a new one? Dorgflow can download new patches with this command:

$ dorgflow update

This compares your feature branch to the issue node’s patches, and any patches you don’t yet have get new commits.

If you’ve made commits for your own work as well, then effectively there’s a fork in play, as your development in your commits and the other person’s patch are divergent lines of development. Appropriately, Dorgflow creates a separate branch. Your commits are moved onto this branch, while the feature branch is rewound to the last patch that was already there, and then has the new patches applied to it, so that it now reflects work on the issue. It’s then up to you to do a git merge of these two branches in order to combine the two lines of development back into one.

Dorgflow is still being developed. There are a few ideas for further features in the issue queue on github (not to mention a couple of bugs for some of the various possible cases the update command can encounter). I’m also pondering whether it’s worth the effort to convert the script to use Symfony Console; feel free to chime in with any opinions on the issue for that.

There are tests too, as it’s pretty important that a script that does things to your git repository does what it’s supposed to (though the only command that does anything destructive is ‘dorgflow cleanup’, which of course asks for confirmation). Having now written this, I’m obviously embarking upon cleaning it up and to some extent rewriting it, though I do have the excuse that the early weeks of working on this were the days after the late nights awake with my newborn daughter, and so the early versions of the code were written in a haze of sleep deprivation. If you’d like to submit a pull request, please do check in with me first on an issue to ensure it’s not going to clash with something I’m partway through changing.

Finally, if you find this as useful as I do (this was definitely an itch I’ve been wanting to scratch for a long time, as well as being a prime case of condiment-passing), please tell other Drupal developers about it. Let’s all spend less time downloading, applying, and rolling patches, and more time writing Drupal code!

Feb 22 2017
Feb 22

Continuing along with my series of reviews of Acquia Developer Certification exams (see the previous one: Drupal 8 Site Builder Exam), I recently took the Back End Specialist – Drupal 8 Exam, so I'll post some brief thoughts on it below.

Acquia Certified Drupal Site Builder - Drupal 8 2016
I didn't get a badge with this exam, just a cert... so here's the previous exam's badge!

Acquia finally updated the full suite of Certifications—Back/Front End Specialist, Site Builder, and Developer—for Drupal 8, and the toughest exams to pass continue to be the Specialist exams. This exam, like the Drupal 7 version of the exam, requires a deeper knowledge of Drupal's core APIs, layout techniques, Plugin system, debugging, security, and even some esoteric things like basic webserver configuration!

A lot of new content makes for a difficult exam

Unlike the other exams, this exam sets a bit of a higher bar—if you don't do a significant amount of Drupal development and haven't built at least one or two custom Drupal modules (nothing crazy, but at least some block plugins, maybe a service or two, and some other integrations), then it's likely you won't pass.

There are a number of questions that require at least working knowledge of OOP, Composer, and Drupal's configuration system—things that an old-time Drupal developer might know absolutely nothing about! I didn't study for this exam at all, but would likely have scored higher if I'd spent more time going through some of the awesome Drupal ladders or other study materials. The only reason I passed is that I've worked on Drupal 8 sites in my day job for at least six months, and in that work I'm exposed to probably 30-50% of Drupal's APIs.

Unlike in Drupal 7, there are no CSS-related questions and few UI-related questions whatsoever. This is a completely new and more difficult exam that covers a lot of corners of Drupal 8 that you won't touch if you're mostly a site builder or themer.

My Results

I scored a 73%, with the following section-by-section breakdown:

  • Fundamental Web Concepts: 80.00%
  • Drupal core API: 55.00%
  • Debug code and troubleshooting: 75.00%
  • Theme Integration: 66.66%
  • Performance: 87.50%
  • Security: 87.50%
  • Leveraging Community: 100.00%

I am definitely least familiar with Drupal 8's core APIs, as I tend to stick to solutions that can be built with pre-existing modules, and have as yet avoided diving too deeply into custom code for the projects I work on. Drupal 8 is really streamlined in that sense—I can do a lot more just using Core and a few Contrib modules than I could've done in Drupal 7 with thousands of lines of custom code!

Also, I'm still trying to wrap my head around the much more formal OOP structure of Drupal (especially around caching, plugins, services, and theme-related components), and I bet that I could score 10% or more higher in another 6 months, just due to familiarity.

I also scored fairly low on the 'debug code and troubleshooting' section, because it dealt with some lower-level debugging tools than what I prefer to use day-to-day. I use Xdebug from time to time, and it really is necessary for some things in Drupal 8 (where it wasn't so in Drupal 7), but I stick to Devel's dpm() and Devel Kint's kint() as much as I can, so I can debug in the browser where I'm more comfortable.

In summary, this exam was by far the toughest one I've taken, and the first one where I'd consider studying a bit before attempting to pass it again. I've scheduled the D8 Front End Specialist exam for next week, and I'll hopefully have time to write a 'Thoughts on it' review on this blog after that—I want to see if it's as difficult (especially regarding Twig debugging and the render system changes) as the D8 Back End Specialist exam was!

Feb 22 2017
Feb 22

by Elliot Christenson on February 22, 2017 - 3:12pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Views module to fix an Access Bypass vulnerability.

The Views module allows site builders to create listings of various data in the Drupal database.

The Views module fails to call db_rewrite_sql() on queries that list Taxonomy Terms, which could cause private data stored on Taxonomy Terms to be leaked to users without permission to view it.

This is mitigated by the fact that a View must exist that lists Taxonomy Terms which contain private data. If all the data on Taxonomy Terms is public or there are no applicable Views, then your site is unaffected.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Views module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on

Feb 22 2017
Feb 22

From 16-19 February, the first Drupal Mountain Camp took place in Davos, Switzerland. A very diverse crowd of 135 attendees from 17 different countries came together to share the latest and greatest in Drupal 8 development, as well as case studies from Swiss Drupal vendors.

When we started organizing Drupal Mountain Camp in the summer of 2016, it was hard to predict how much interest it would attract and how many people would join the camp. By reaching out to the local and international Drupal ecosystem, we were excited to get so many people to attend from all around the world, including Australia, India, and the US.

Drupal Mountain Camp Team

As a team of a dozen organizers, we split up the tasks: setting up the venue, registration, social media, room monitoring and much more. It was great to see that we were able to spread the workload across the entire team and keep it well balanced.

Drupal Mountain Camp Workshops

We are very thankful for 30 different speakers who travelled from afar and worked hard to share their expertise with the crowd. As a program organizer I might be biased, but I truly believe that the schedule was packed with great content :)

In addition to the sessions, we also provided free workshop trainings to help spread some more Drupal love.

Drupal Mountain Camp Speaker Dinner

We took all the speakers up to the mountain for Switzerland's most popular dish, cheese fondue, to say thank you for their sessions and inputs.

Drupal Mountain Camp Speaker Sledding

With Drupal Mountain Camp we wanted to set a theme that would not only excite attendees with Swiss quality sessions but also create a welcoming experience for everyone. On top of our Code of Conduct, we organized various social activities that would allow attendees to experience Switzerland, snow and the mountains.  

Drupal Mountain Camp Sprints

Sprints are an essential way to get started with contributing to Drupal. At Drupal Mountain Camp, we organized a First-time sprinter workshop and had Sprint rooms from Thursday until Sunday with many sprinters collaborating.

Drupal Mountain Camp

For our hosting company, Drupal Mountain Camp was a great opportunity to demonstrate our Docker-based development environment and scalable cluster stack using a set of Raspberry Pis.

Drupal Mountain Camp Snow

And of course, we ended the conference with skiing and snowboarding at the Swiss mountains :)

Pictures from the camp: selection and all. Curious about the next Drupal Mountain Camp? Follow us on Twitter to stay up to date, and see you at the next event.

Feb 22 2017
Feb 22

For over 15 years, Seniorlink has pioneered solutions for caregivers across the nation, helping them provide their loved ones with the highest quality care. Seniorlink needed to upgrade their website to a modern, sleek and easy-to-navigate one that introduces Seniorlink as the parent company of Caregiver Homes and Vela and promotes Seniorlink as a trusted and credible source of information and support for homecare.

Third & Grove built the new site for Seniorlink on Drupal, providing an easy admin interface and a secure editing environment, which would also provide a scalable platform for future downstream integrations and personalizations.

View the Case Study

Feb 21 2017
Feb 21

Drupal 8 offers several solutions and methods to manage, in a very granular way, access rights on each element included in a piece of content. Granting view or edit access to a field included in a content type can be achieved very simply, with a few lines of code, or with the Field Permissions module. We can use this module to allow certain roles to view or update a particular field.

The case of documents associated with content is slightly different. You may want to let visitors see that a document or file is attached to a piece of content (via a File field) while controlling the right to download it. In other words, you may want to manage the right to download a file while still exposing its existence.

This is where the Protected file module comes in. This module lets you define, for each attached file, whether downloading it is publicly accessible or requires a particular role. For a protected file, the module then presents a configurable alternate link (for example, the link to the authentication page) instead of the download link.

Let's discover this module.

Prerequisites for the installation of the module

In order to control access to files, this module only works if the site has a private file system configured. Files stored in Drupal's public file system are served directly by the web server, so Drupal cannot control access rights to them.
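If your site does not yet have a private file system, it is declared in settings.php. A minimal sketch, assuming a ../private directory outside the web root (the path is illustrative; pick any directory the web server user can write to but that is not directly served):

```php
// settings.php: point Drupal at a private files directory.
// '../private' is an illustrative path outside the web root.
$settings['file_private_path'] = '../private';
```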

Using the module

The Protected file module provides a new field type called ... Protected file. This new field type extends the File field type provided by Drupal core and is nearly identical in terms of configuration. To enable file access control, we add this new field to our content type.

Configuring the Protected file

Let's add a Protected file field to our article content type.

Adding the field

And we can configure its storage settings.

Field storage settings

We note that the private file system is automatically selected and locked. We configure the field for an unlimited number of files.

Then we configure the settings of this field instance on the Article content type where we created it.

Field settings

We configure the various parameters, which are identical to those of a standard File field type (allowed extensions, upload directory, maximum file size, etc.).

Configuring Display Settings for the Protected File Field

We configure the display settings for our new field.

Field display settings

We have several options. We can:

  • Choose whether to open the file in a new tab
  • Configure the URL that will replace the file's download URL if the user does not have sufficient access rights
  • Choose whether to open the previously defined URL in a modal window
  • Define the message that will feed the title attribute of the link set above. This message is provided as a variable to the link rendering template, and can therefore be displayed directly with a simple override of the template in your theme

Configuring permissions

All you have to do now is set the permissions according to your needs.

Permissions protected file

And the configuration is complete. We can now publish content and associated documents, protected or not.

Enabling Download Protection

Using the module is really simple. In the content creation / editing form, we can enable the protection for each uploaded file by checking the corresponding checkbox.

File upload form

And the result: an authenticated user can access the file download links.

Protected files with download links

And for anonymous visitors:

Protected files

In the example above, the download link of the PDF file example 1 has been replaced by the URL that we defined in the display settings (/user/login), and a click on the protected file opens a modal window on this page.

Login modal window

The Protected file module lets you easily control access to the documents attached to content. Note that direct download links (sent by e-mail by an authenticated user, for example) are also covered, and require the same access rights.

Feb 21 2017
Feb 21

If you believe the docs and the twitters, there is no way to automate Let's Encrypt certificate updates on Platform.sh. You have to create the certificates manually, upload them manually, and maintain them manually.

But as readers of this blog know, the docs are only the start of the story. I’ve really enjoyed working with Platform.sh for one of my private clients, and I couldn’t believe that with all the flexibility – all the POWER – letsencrypt was really out of reach. I found a few attempts to script it, and one really great snippet on GitLab. But no one had ever really synthesized this stuff into an easy howto. So here we go.

1) Add some writeable directories where CLI and letsencrypt need them.

Normally when Platform deploys your application, it puts it all in a read-only filesystem. We’re going to mount some special directories read-write so all the letsencrypt/platform magic can work.

Edit your application’s .platform.app.yaml file, and find the mounts: section. At the bottom, add these three lines. Make sure to match the indents with everything else under the mounts: section!

"/web/.well-known": "shared:files/.well-known"
"/keys": "shared:files/keys"
"/.platformsh": "shared:files/.platformsh"

Let’s walk through each of these:

  • /web/.well-known: In order to confirm that you actually control your domain, letsencrypt drops a file somewhere on your website, and then tries to fetch it. This directory is where it’s going to do the drop and fetch. My webroot is web; you should change this to match your own environment. You might use public or www or something.
  • /keys: You have to store your keyfiles SOMEWHERE. This is that place.
  • /.platformsh: Your master environment needs a bit of configuration to be able to login to platform and update the certs on your account. This is where that will go.

2) Expose the .well-known directory to the Internet

I mentioned above that letsencrypt tests your control over a domain by creating a file which it tries to fetch over the Internet. We already created the writeable directory where the scripts can drop the file, but Platform.sh (wisely) defaults to hiding your directories from the Internet. We’re going to add some configuration to the “web” app section to expose this .well-known directory. Find the web: section of your .platform.app.yaml file, and the locations: section under that. At the bottom of that section, add this:

        '/.well-known':
            # Allow access to all files in the public files directory.
            allow: true
            expires: 5m
            passthru: false
            root: 'web/.well-known'
            # Do not execute PHP scripts.
            scripts: false

Make sure you match the indents of the other location entries! In my (default) .platform.app.yaml file, I have 8 spaces before that '/.well-known': line. Also note that the root: parameter there also uses my webroot directory, so adjust that to fit your environment.

3) Download the binaries you need during the application “build” phase

In order to do this, we’re going to need the Platform.sh CLI tool and a Let’s Encrypt CLI client called lego. We’ll download them during the “build” phase of your application. Still in the .platform.app.yaml file, find the hooks: section, and the build: section under that. Add these steps to the bottom of the build:

  cd ~
  curl -sL | tar -C .global/bin -xJ --strip-components=1 lego/lego
  curl -sfSL -o .global/bin/platform.phar

We’re just downloading reasonably recent releases of our two tools. If anyone has a better way to get the latest release of either tool, please let me know. Otherwise we’re stuck keeping this up to date manually.

4) Configure the CLI

In order to configure the CLI on your server, we have to deploy the changes from steps 1-3. Go ahead and do that now. I’ll wait.

Now connect to your platform environment via SSH (platform ssh -e master for most of us). First we’ll add a config file for platform. Edit the file .platformsh/config.yaml with your editor of choice. You don’t have to use vi, but it will win you some points with me. Here are the contents for that file:

    check: false
    token_file: token

Pretty straightforward: this tells platform not to bother updating the CLI tool automatically (it can’t – read-only filesystem, remember?). It then tells it to log in using an API token, which it can find in the file .platformsh/token. Let’s create that file next.

Log into the web UI (you can launch it with platform web if you’re feeling sassy), and navigate to your account settings > api tokens. That’s at (with your own user ID of course). Add an API token, and copy its value into .platformsh/token on the environment we’re working on. The token should be the only contents of that file.
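On the command line, creating the token file boils down to something like this (the token value below is a fake placeholder; paste your real API token instead):

```shell
# Create the token file by hand; 'abc123-example-token' is a fake placeholder.
mkdir -p .platformsh
printf '%s' 'abc123-example-token' > .platformsh/token
# The token is a credential: restrict its permissions.
chmod 600 .platformsh/token
```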

Now let’s test it by running php /app/.global/bin/platform.phar auth:info. If you see your account information, congratulations! You have a working CLI installed.

5) Request your first certificate by hand

Still SSH’ed into that environment, let’s see if everything works.

lego --email="[email protected]" --domains="" --webroot=/app/public/ --path=/app/keys/ -a run
csplit -f /app/keys/certificates/ /app/keys/certificates/ '/-----BEGIN CERTIFICATE-----/' '{1}' -z -s
php /app/.global/bin/platform.phar domain:update -p $PLATFORM_PROJECT --no-wait --yes --cert /app/keys/certificates/ --chain /app/keys/certificates/ --key /app/keys/certificates/

This is three commands: register the cert with letsencrypt, then split the resulting file into its components, then register those components with Platform.sh. If you didn’t get any errors, go ahead and test your site – it’s got a certificate! (yay)
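The csplit step is the least obvious of the three. Here is a self-contained sketch of what it does, using dummy PEM data rather than a real certificate chain: the combined file is split at each BEGIN marker into the leaf certificate and the intermediate chain.

```shell
# Build a fake combined PEM file: a leaf certificate followed by an
# intermediate certificate (dummy data, not real certificates).
rm -f /tmp/cert_*
cat > /tmp/combined.crt <<'EOF'
-----BEGIN CERTIFICATE-----
leaf-certificate-data
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate-chain-data
-----END CERTIFICATE-----
EOF

# Same flags as in the command above: split at each BEGIN marker, drop
# empty pieces (-z), run silently (-s), prefix the output files with "cert_".
csplit -z -s -f /tmp/cert_ /tmp/combined.crt '/-----BEGIN CERTIFICATE-----/' '{1}'

# Two files result: one holding the leaf cert, one holding the chain.
ls /tmp/cert_*
```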

6) Set up automatic renewals on cron

Back in your .platform.app.yaml file, look for the crons: section. If you’re running Drupal, you probably have a Drupal cronjob in there already. Add this one at the bottom, matching indents as always.

    spec: '0 0 1 * *'
    cmd: '/bin/sh /app/scripts/'

Now let’s create the script. Add the file scripts/ to your repo, with this content:

#!/usr/bin/env bash

# Checks and updates the letsencrypt HTTPS cert.

set -e

if [ "$PLATFORM_ENVIRONMENT" = "master-7rqtwti" ]; then
    # Renew the certificate
    lego --email="[email protected]" --domains="" --webroot=/app/web/ --path=/app/keys/ -a renew
    # Split the certificate from any intermediate chain
    csplit -f /app/keys/certificates/ /app/keys/certificates/ '/-----BEGIN CERTIFICATE-----/' '{1}' -z -s
    # Update the certificates on the domain
    php /app/.global/bin/platform.phar domain:update -p $PLATFORM_PROJECT --no-wait --yes --cert /app/keys/certificates/ --chain /app/keys/certificates/ --key /app/keys/certificates/
fi

Obviously you should replace all those example.orgs and email addresses with your own domain. Make the file executable with chmod u+x scripts/, commit it, and push it up to your environment.

7) Send a bragging email to Crell

Technically this isn’t supposed to be possible, but YOU DID IT! Make sure to rub it in.

"Larry is waiting to hear from you. (photo credit Jesus Manuel Olivas)"

Good luck!

PS – I’m just gonna link one more time to the guy whose snippet made this all possible: Ariel Barreiro did the hardest part of this. I’m grateful that he made his notes public!

Feb 21 2017
Feb 21

I’ve dreamed of a day when systems start to work like the home automation and listening (NSA spying…) devices that people are inviting into their homes. “Robots” that listen for trigger words and act on commands are very exciting. What’s most interesting to me in trying to build such systems is that they really aren’t that hard anymore. Why?

Well, the semantic web is what’s delivering the things for Siri, Google and Alexa to say on the other end. When you ask about something and it checks Wikipedia, THAT IS AMAZING… but not really that difficult. The human voice is being continuously mapped, and accuracy improves daily as a result of people using things like Google Voice for years (where you basically give them your voice as data in order to improve their speech engines).

So I said, well, I’d like to play with these things. I’ve written about VoiceCommander in the past, but it was mostly proof of concept. Today I’d like to announce the release of VoiceCommander 2.0 with built-in support for “OK Google”-style Wikipedia voice querying!

To do this, you’ll need a few things:

Enable the voicecommander_whatis module, tweak the VoiceCommander settings to your liking, and then you’ll be able to build things like in this demo. The first video is a quick one-minute look at a voice-based navigational system (this is how we do it in ELMSLN). The second is me talking through what’s involved and what’s actually happening, as well as A/B comparing different library configuration settings and how they relate to accuracy downstream.

Feb 21 2017
Feb 21

Besides being the recent desired destination for Instagram #wanderlust-ers, Iceland is now home to an exciting new Drupal event: DrupalCamp Northern Lights. With twenty speakers, lots of coffee, and a planned sightseeing trip to see the Golden Circle and Northern Lights, it is sure to be an exciting inaugural event.

A small crew of Palantiri will be proudly representing, so if you are making the trek overseas, keep an eye out and say hi to Allison Manley, Michelle Jackson, and Megh Plunkett while you’re taking in the sessions and sights.

Check out the schedule and make sure to stop by our sessions.

Kickoff Meetings, by Allison Manley

  • Time: Saturday, 10:45 - 11:35
  • Location: Room ÞINGVELLIR

How do you make the most use of your face-to-face time with your client and lay the groundwork for a successful project?

Allison will outline how to get the most out of the kickoff meetings that initiate any project. She'll talk about pre-meeting preparation and how to keep organized, and also give some tips on agenda creation, how to keep meetings productive (and fun), and what steps need to be taken once the meetings adjourn.

Competitive Analysis: Your UX must-have on a budget, by Michelle Jackson

  • Time: Sunday, 14:15-15:00
  • Location: Room ÞINGVELLIR

A tight budget and time constraints can make dedicating time and resources to understanding audience needs challenging. Competitive analysis is an affordable way to evaluate how competitor sites are succeeding or failing to meet the needs of your audience.

Michelle will cover how competitive analysis can help you avoid competitor pitfalls, gain insight into what your users want, and lead to better decision-making before you invest in and implement new designs and technical features.

7 Facts You Might Not Have Known About Iceland

  • Iceland was one of the last places on earth to be settled by humans.
  • They are getting their first Costco in May.
  • 60% of the Icelandic population lives in Reykjavík.
  • Babies in Iceland are routinely left outside to nap.
  • Surprisingly, Iceland is not the birthplace of ice cream.
  • First names not previously used in Iceland must be approved by the Icelandic Naming Committee.
  • Owning a pet turtle is against the law. Sorry Rafael, Franklin, and this kid:

I like turtles

Fact Sources:,

We want to make your project a success.

Let's Chat.
Feb 21 2017
Feb 21

Our tradition of presenting you short overviews of several modules of the month continues with today’s article. Previously we offered you some great contributed Drupal 8 modules in June 2016 and a collection of modules in May 2016. Today we look at modules whose latest Drupal 8 releases arrived at the beginning of 2017.


Token

With this module, you can use additional tokens that are not supplied by Drupal core (for fields and other elements), along with a user interface for browsing them. This module is required by the Pathauto module. Applying both of them lets you build the path alias of the current node’s URL from text taken from a pattern, with the help of tokens.


Pathauto

The Pathauto module allows you to get SEO-friendly URLs for any site page. There is no need to manually specify the path and URL alias, as this module produces them automatically for different content types: taxonomy terms, nodes and users. These aliases are based on a "pattern" system, which uses tokens that commonly copy the content page topic or article’s title, and which can be changed by the admin. So, together with Token, these two key modules are important for further extending the functionality of URL paths and aliases.
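For illustration, here is roughly what a saved Pathauto pattern looks like as a Drupal 8 configuration entity. The file name, id, label and pattern below are hypothetical examples, not shipped defaults:

```yaml
# pathauto.pattern.article_alias.yml (hypothetical example)
id: article_alias
label: 'Article alias'
type: 'canonical_entities:node'
pattern: '/blog/[node:title]'
weight: 0
```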


Diff

The Diff module helps you notice changes. It adds a tab for users who have sufficient permissions, where they can see all revisions of a page as well as all words and phrases that were added, changed or deleted between revisions.

Views Slideshow

This module creates slideshows of images or any other type of content from Views. It is easy to customize and allows you to select settings for each View that feeds your slideshow.

Flex Slider

This module integrates the Flex Slider library with Drupal and some contributed modules, enabling you to create responsive slideshows which automatically adapt to different sizes of device screens or browser windows. Flex Slider provides you with configurable slide animations, multiple sliders per page and much more.


Superfish

With the help of the Superfish module, you are able to integrate the jQuery plugin called Superfish with your Drupal 8 menus. It allows you to add some "splash" to Drupal menus almost effortlessly.


Simplenews

The Simplenews module helps you easily and quickly inform many people at once by sending e-mails to one or more mailing lists of those who have subscribed to your newsletters. Those lists may contain both authenticated and anonymous users.


Webform

The Webform module best suits those who need many flexible and customizable webforms on their Drupal sites. It supports contest, petition or contact webforms to be filled out by site visitors. The latest 8.x-5.x version offers a new approach to building forms, providing object-oriented design patterns, extendable plugins, automated tests and more.


CAPTCHA

You can use this module to keep spambots from submitting your webforms. It provides different challenge-response tests, allowing you to identify whether a human or a bot is taking a given action online.

We hope our list of 2017 modules is useful for you and you’ll apply some of them on your Drupal website. If you need experienced developers, we offer our help.

Feb 21 2017
Feb 21

on February 21st, 2017

Organized by the Icelandic Drupal community, the inaugural Northern Lights Drupal Camp will take place this weekend, February 24th - 26th, 2017 at the University of Iceland in Reykjavik. We are honored that our Digital Strategist, Jim Birch, was invited to speak.

Jim will present his Holistic SEO and Drupal talk, which covers the modern state of Search Engine Optimization and how we at Xeno Media define best practices for technical SEO using Drupal. It also presents ideas on how to guide and empower clients to create the best content to achieve their digital goals.

This presentation will review:

  • What Holistic SEO is, and some examples of modern search results explained.
  • The most common search engine ranking factors, and how to keep up to date.
  • An overview of Content strategy and how it can guide development.
  • An overview of technical SEO best practices in Drupal.

The presentation is:

  • Session time slot: Sunday 15:15 - 16:00
  • Session room: Room Eyjafjallajökull

View the full schedule.

Feb 21 2017
Feb 21

I've been having tremendous fun writing tutorials about each of the Drupal 8 APIs in turn, and I hope people have been finding them useful. They've certainly been eye-openers for me, as I've always focussed on achieving a clear worked example, and doing that alone unearths all sorts of questions (and usually—but not always—answers) about how Drupal 8's core itself works.

However, as I'm about to start a big project, I'm going to take a break from writing tutorials. They're fun, like I say, but unfortunately they don't pay the bills; they certainly don't keep the cat in the fishy biscuits to which she's accustomed. So the recent post on the database abstraction layer is the last, for at least the foreseeable future. But that means I've covered eighteen of the thirty-six or so featured topics on api.d.o: half-way seems a good point to pause and take stock, regardless!

If you're at all interested in learning Drupal 8's APIs, then do feel free to have a read of some of the posts, as they're not going anywhere, and neither am I, really. Try out some of the worked examples; and like the very kind commenters before you, remember to leave a note about where I might have gone wrong. That way I can fix it for the next reader.

Before I next write a tutorial, then, I might see you at Drupalcamp London instead; just... don't ask me anything too complicated about the APIs!

Feb 20 2017
Feb 20

This is an ode to Dirk Engling’s OpenTracker.

It’s a BitTorrent tracker.

It’s what powered The Pirate Bay in 2007–2009.

I’ve been using it to power the downloads on since the end of November 2010: more than 6 years. It has facilitated 9,839,566 downloads from December 1, 2010 until today. That’s almost 10 million downloads!


It’s one of the most stable pieces of software I ever encountered. I compiled it in 2010, it never once crashed. I’ve seen uptimes of hundreds of days.

[email protected]:~$ ls -al /data/opentracker
total 456
drwxr-xr-x  3 wim  wim   4096 Feb 11 01:02 .
drwxr-x--x 10 root wim   4096 Mar  8  2012 ..
-rwxr-xr-x  1 wim  wim  84824 Nov 29  2010 opentracker
-rw-r--r--  1 wim  wim   3538 Nov 29  2010 opentracker.conf
drwxr-xr-x  4 wim  wim   4096 Nov 19  2010 src
-rw-r--r--  1 wim  wim 243611 Nov 19  2010 src.tgz
-rwxrwxrwx  1 wim  wim  14022 Dec 24  2012 whitelist


The simplicity is fantastic. Getting started is incredibly simple: git clone git:// .; make; ./opentracker and you’re up and running. Let me quote a bit from its homepage, to show that it goes the extra mile to make users successful:

opentracker can be run by just typing ./opentracker. This will make opentracker bind to and happily serve all torrents presented to it. If ran as root, opentracker will immediately chroot to . and drop all priviliges after binding to whatever tcp or udp ports it is requested.

Emphasis mine. And I can’t emphasize my emphasis enough.

Performance & efficiency

All the while handling dozens of requests per second, opentracker causes less load than background processes of the OS. Let me again quote a bit from its homepage:

opentracker can easily serve multiple thousands of requests on a standard plastic WLAN-router, limited only by your kernels capabilities ;)

That’s also what the homepage said in 2010. It’s one of the reasons why I dared to give it a try. I didn’t test it on a “plastic WLAN-router”, but everything I’ve seen confirms it.


Its defaults are sane, but what if you want to have a whitelist?

  1. Uncomment the #FEATURES+=-DWANT_ACCESSLIST_WHITE line in the Makefile.
  2. Recompile.
  3. Create a file called whitelist, with one torrent hash per line.

Have a need to update this whitelist, for example a new release of your software to distribute? Of course you don’t want to reboot your opentracker instance and lose all current state. It’s got you covered:

  1. Append a line to whitelist.
  2. Send the SIGHUP UNIX signal to make opentracker reload its whitelist.
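To make the reload behaviour concrete, here is a toy shell simulation of the same pattern. This is not opentracker itself, and the paths and "hash" values are made up: a background worker re-reads its whitelist file only when it receives SIGHUP.

```shell
# Toy simulation of opentracker's SIGHUP reload pattern (not opentracker).
WL=/tmp/demo_whitelist
LOADED=/tmp/demo_loaded
echo "hash-one" > "$WL"
(
  trap 'cp "$WL" "$LOADED"' HUP   # reload the whitelist on SIGHUP
  cp "$WL" "$LOADED"              # initial load at startup
  while :; do sleep 0.1; done     # pretend to serve tracker requests
) &
WORKER=$!
sleep 0.3
echo "hash-two" >> "$WL"          # 1) append a new torrent hash
kill -HUP "$WORKER"               # 2) ask the worker to reload
sleep 0.3
kill "$WORKER" 2>/dev/null
wc -l < "$LOADED"                 # now reflects both hashes
```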


I’ve been in the process of moving off of my current (super reliable, but also expensive) hosting. There are plenty of specialized companies offering HTTP hosting and even rsync hosting. Thanks to their standardization and consequent scale, they can offer very low prices.

But I also needed to continue to run my own BitTorrent tracker. There are no companies that offer that. I don’t want to rely on another tracker, because I want there to be zero affiliation with illegal files. This is a BitTorrent tracker that does not allow anything to be shared: it only allows the software releases made by to be downloaded.

So, I found the cheapest VPS I could find, with the least amount of resources. For USD $13.50, I got the lowest specced VPS from a reliable-looking provider: with 128 MB RAM. Then I set it up:

  1. ssh‘d onto it.
  2. rsync‘d over the files from my current server (alternatively: git clone and make)
  3. added @reboot /data/opentracker/opentracker -f /data/opentracker/opentracker.conf to my crontab.
  4. removed the CNAME record for, and instead made it an A record pointing to my new VPS.
  5. watched on both the new and the old server, to verify traffic was moving over to my new cheap opentracker VPS as the DNS changes propagated

Drupal module

Since runs on Drupal, there of course is an OpenTracker Drupal module to integrate the two (I wrote it). It provides an API to:

  • create .torrent files for certain files uploaded to Drupal
  • append to the OpenTracker whitelist file
  • parse the statistics provided by the OpenTracker instance

You can see the live stats at


opentracker is the sort of simple, elegant software design that makes it a pleasure to use. And considering the low commit frequency over the past decade, with many of those commits being nitpick fixes, it seems its simplicity also leads to excellent maintainability. It involves the HTTP and BitTorrent protocols, yet relies on only a single I/O library, and its source code is very readable. Not only that, but it’s also highly scalable.

It’s the sort of software many of us aspire to write.

Finally, its license. A glorious license indeed!

The beerware license is very open, close to public domain, but insists on honoring the original author by just not claiming that the code is yours. Instead assume that someone writing Open Source Software in the domain you’re obviously interested in would be a nice match for having a beer with.

So, just keep the name and contact details intact and if you ever meet the author in person, just have an appropriate brand of sparkling beverage choice together. The conversation will be worth the time for both of you.

Dirk, if you read this: I’d love to buy you sparkling beverages some time :)

Feb 20 2017
Feb 20

I’ve been building websites for the last 10 years. Design fads come and go but image galleries have stood the test of time and every client I’ve had has asked for one.

There are a lot of image gallery libraries out there, but today I want to show you how to use Juicebox.

Juicebox is an HTML5 responsive image gallery and it integrates with Drupal using the Juicebox module.

Juicebox is not open source; instead it offers a free version which is fully usable, but limited to 50 images per gallery. The pro version allows for unlimited images and more features.

If you’re looking for an alternative solution look at Slick, which is open source, and it integrates with Drupal via the Slick module. I will cover this module in a future tutorial.

In this tutorial, you’ll learn how to display an image gallery from an image field and how to display a gallery using Views.

Getting Started

First, go download and install the Juicebox module.

Using Drush:

$ drush dl juicebox
$ drush en juicebox

Download Juicebox Library

Go to the Juicebox download page and download the free version.

Extract the downloaded file, copy the jbcore folder from the zip into /libraries, and rename the jbcore directory to juicebox.

Once everything has been copied and renamed, the path to juicebox.js should be /libraries/juicebox/juicebox.js.
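In shell terms, the copy-and-rename step above looks roughly like this; the example simulates it under /tmp with a dummy jbcore folder, so substitute your real download and Drupal root:

```shell
# Simulated layout: /tmp/jb_demo stands in for the Drupal root, and the
# dummy jbcore folder stands in for the extracted Juicebox download.
rm -rf /tmp/jb_demo
mkdir -p /tmp/jb_demo/jbcore
touch /tmp/jb_demo/jbcore/juicebox.js   # stand-in for the real library file

# Copy jbcore into /libraries under the name "juicebox".
mkdir -p /tmp/jb_demo/libraries
cp -r /tmp/jb_demo/jbcore /tmp/jb_demo/libraries/juicebox

# The module expects the library at libraries/juicebox/juicebox.js.
ls /tmp/jb_demo/libraries/juicebox/juicebox.js
```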

Create a Gallery Using Fields

We’ll first look at how to create a gallery using just an image field. To do this, we’ll create an image field called “Image gallery” and this field will be used to store the images.

1. Go to Structure, “Content types” and click on “Manage fields” on the Article row.

2. Click on “Add field” and select Image from “Add a new field”.

3. Enter “Image gallery” into Label and click on “Save and continue”.

4. Change “Allowed number of values” to Unlimited and click on “Save field settings”.

You’ll need to do this if you want to store multiple images.

5. On the Edit page leave it as is and click on “Save settings”.

Configure Juicebox Gallery Formatter

Now that we’ve created the image fields, let’s configure the actual Juicebox gallery through the field formatter.

1. Click “Manage display”, and select “Juicebox Gallery” from the Format drop-down on the “Image gallery” field.

2. Click on the cogwheel to configure the gallery. There are a lot of options, but the only change we’ll make is to use the image alt text as the caption.

3. Click on the “Lite config” field-set and change the height to 500px.

4. Reorder the field so it’s below Body.

5. Click on Save at the bottom of the page.

Now if you go and create a test article and add images into the gallery you should see them below the Body field.

Create a Gallery Using Views

You’ve seen how to create a gallery using just the Juicebox gallery formatter, let’s now look at using Views to create a gallery.

We’ll create a single gallery that’ll use the first image of every gallery on the Article content type.

1. Go to Structure, Views and click on “Add view”.

2. Fill in the “Add new view” form with the values defined in Table 1-0.

Table 1-0. Create a new view

  • View name: Article gallery
  • Machine name: article_gallery
  • Show: Content of type Article sorted by Newest first
  • Create a page: Unchecked
  • Create a block: Unchecked

3. Click on Add in the Fields section.

4. Search for the “Image gallery” field and add it to the view.

5. Change the Format from “Unformatted list” to “Juicebox Gallery” and click on Apply.

6. On the “Page: Style options” form, select the image field you added to the view earlier in “Image Source” and “Thumbnail Source”.

You can configure the look and feel by expanding the “Lite config” field-set. You can change the width and height, text color and more.

7. Click on Apply.

8. Click on Add next to Master and select Page from the drop-down.

9. Make sure you set a path in the “Page settings” section. Add something like /gallery.

10. Do not forget to save the View by clicking on Save.

11. Make sure you have some test articles and go to /gallery. You should see a gallery made up of the first image from each gallery.


The reason I like Juicebox is that it’s easy to set up. With little effort you can get a nice responsive image gallery from a field or a view. The only downside I can see is that it’s not open source.


Q: I get the following error message: “The Juicebox Javascript library does not appear to be installed. Please download and install the most recent version of the Juicebox library.”

This means Drupal can’t detect the Juicebox library in the /libraries directory. Refer to the “Getting started” section.

Feb 20 2017
Feb 20

Pay them in Tacos!

Say our partners who form the Chromatic brain trust (Chris, Dave and Mark) do something crazy like base our reward system on the number of HeyTaco! emojis given out amongst team members in Slack. (Remember, this is an, ummmm, hypothetical example.) Now, say we wanted to display the taco leaderboard as a block on the Chromatic HQ home page. It’s not like the taco leaderboard needs minute-by-minute updates, so it is a good candidate for caching.

Why Cache?

What do we save by caching? Grabbing something that has already been built is quicker than building it from scratch. It's the difference between grabbing a Big Mac from McDonald's vs buying the ingredients from the supermarket, going home and making a Big Mac in your kitchen.

So, instead of each page refresh requiring a call to the HeyTaco! API, we can just tell Drupal to cache the leaderboard block and display the cached results. Instead of taking seconds to generate the page holding the block, it takes milliseconds to display the cached version. (ex. 2.97s vs 281ms in my local environment.)

Communicate with your Render Array

It's important that our render array - the thing that renders the HTML - knows to cache itself.

"It is of the utmost importance that you inform the Render API of the cacheability of a render array." - From D.O.'s page about the cacheability of render arrays

The above quote is what I'll try to explain, showing some of the nitty gritty with the help of a custom module and the HeyTaco! block it builds.

I created a module called heytaco and below is the build() function from my HeyTacoBlock class. As its name suggests, it's the part of the code that builds the HeyTaco! leaderboard block.

/**
 * Provides a Hey Taco Results block.
 *
 * @Block(
 *   id = "heytaco_block",
 *   admin_label = @Translation("HeyTaco! Leaderboard"),
 * )
 */
class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

  // __construct() and create() functions here.

  /**
   * {@inheritdoc}
   */
  public function build() {
    $user_id = $this->account->id();
    return array(
      '#theme' => 'heytaco_block',
      '#results' => $this->returnLeaderboard($user_id),
      '#partner_asterisk_blurb' => $this->isNotPartner($user_id),
      '#cache' => [
        'keys' => ['heytaco_block'],
        'contexts' => ['user'],
        'tags' => ['user_list'],
        'max-age' => 3600,
      ],
    );
  }

}
For the purposes of the rest of the blog post, I'll focus on the above code's #cache property, specifically its metadata:

  • keys
  • contexts
  • tags
  • max-age

I'm going to go through them similarly to (and inspired by) what is on the aforementioned D.O. page about the cacheability of render arrays.


From ...what identifies the thing I'm rendering?

In my words: This is the "what", as in "What entity is being rendered?". In my case, I'm just showing the HeyTaco! block and it doesn't have multiple displays from which to choose. (I will handle variations later using the contexts parameter.)

Many core modules don't include keys at all, or they use just a single key. For instance:

toolbar module

  • 'keys' => ['toolbar'],

dynamic_page_cache module

  • 'keys' => ['response'],

views module

After looking through many core modules, I (finally) found multiple values for a keys definition in the views module, in DisplayPluginBase.php:

  '#cache' => [
    'keys' => ['view', $view_id, 'display', $display_id],

So, in the views example above, the keys are telling us the "what" by telling us the view ID and its display ID.

I'd also mention that on D.O. you will find this tidbit: Cache keys must only be set if the render array should be cached.


From Does the representation of the thing I'm rendering vary per ... something?

In my words: This is the "which", as in, "Which version of the block should be shown?" (Sounds a bit like keys, right?)

The Finalized Cache Context API page tells us that when cache contexts were originally introduced they were regarded as "special keys" and keys and contexts actually intermingled. To make for a better developer experience, contexts was separated into its own parameter.

Going back to the need to vary an entity's representation, we see that what is rendered for one user might need to be rendered differently for another user, (ex. "Hello Märt" vs "Hello Adam"). If it helps, the D.O. page notes that, "...cache contexts are completely analogous to HTTP's Vary header." In the case of our HeyTaco! block, the only context we care about is the user.

Keys vs Contexts

Amongst team members, we discussed the difference between keys and contexts quite a bit. There is room for overlap between the two, and I netted out at keeping things simple: let keys broadly define the thing being represented and let contexts take care of the variations. So keys are for completely different instances of a thing (ex. different menus, users, etc.). Contexts are for varying an instance, as in, "When should this item look different to different types of users?"

Different Contexts, Different Caching

To show our partners (Chris, Dave and Mark, remember?) how great they are I have added 100 tacos to their totals without telling them. When they log in to the site, they see an unassuming leaderboard with impressive Top 3 totals for themselves.

Partners Don't Know Their Taco Stats are Padded!

Partner HeyTaco! leaderboard

However, I don't want the rest of the team feeling left out, so for them I put asterisks next to the inflated taco totals and note that those totals have been modified.

Asterisks Remind us of the Real Score

Nonpartner HeyTaco! leaderboard with asterisks

So, our partners see one thing and the rest of our users see another, but all of these variations are still cached! I use contexts to allow different caching for different people. But remember, contexts aren't just user-based; they can also be based on ip, theme or url, to name a few examples. There is a list of available cache contexts you can browse; look for the service entries prefaced with cache_context (ex. cache_context.user, cache_context.theme).
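As a hedged illustration (using core's default context names, not code from the HeyTaco! module), varying a render array per user versus per role looks like:

```php
<?php
// Vary the cached block per user: one cache entry for each user.
$build['#cache']['contexts'] = ['user'];

// Or vary more coarsely, per combination of roles, so all users who
// share the same roles also share one cache entry.
$build['#cache']['contexts'] = ['user.roles'];
```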


From Which things does it depend upon, so that when those things change, so should the representation?

In my words: What are the bits and pieces used to build the markup such that if any of them change, the cached markup becomes outdated and needs to be regenerated? For instance, if a user changes her username, any cached instance using the old name will need to be regenerated. The tags may look like 'tags' => ['user:3'],. For HeyTaco!, I used 'tags' => ['user_list'],. This means that any user changing his/her user info will invalidate the existing cached block, forcing it to be rendered anew for everyone.
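To connect tags to invalidation, here is a minimal sketch (not from the original post): saving a user entity invalidates the user_list list tag for you, but the same invalidation can also be triggered by hand.

```php
<?php
use Drupal\Core\Cache\Cache;

// Invalidate everything tagged 'user_list' -- e.g. after user info
// changes -- so the leaderboard block is rebuilt on the next request.
Cache::invalidateTags(['user_list']);
```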


From When does this rendering become outdated? Is it only valid for a limited period of time?

In my words: If you want to give the rendering a maximum validity period, after which it is forced to refresh itself, then decide how many seconds and use max-age; the default is forever (Cache::PERMANENT).
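A short sketch of both choices, assuming a $build render array:

```php
<?php
use Drupal\Core\Cache\Cache;

// Consider the rendering outdated after one hour (3600 seconds).
$build['#cache']['max-age'] = 3600;

// The default: valid forever, until a tag or context invalidates it.
$build['#cache']['max-age'] = Cache::PERMANENT;
```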

In Cache-tastic Conclusion

So that's my stab at exploring #cache metadata. I feel this is something that requires coding practice, with different use cases, to grasp what each metadata piece does.

For instance, I played with tags in my HeyTaco! example for quite some time. Using 'tags' => ['user:' . $user_id] only regenerated the block for the active user who changed his/her own info. So I came upon the approach of passing all the team's IDs into Cache::buildTags(), like this: Cache::buildTags('user', $team_uids). It felt ugly because I had to grab all the user IDs and put them into $team_uids manually. (What if we had thousands of users?) In my experimentation, that was the only way I could get the block updated if any user had his/her info changed.

However, after all that, Gus Childs reviewed my blog post and, since he knew of the existence of the node_list tag, he posited that all I needed to use as my tag was user_list, as in 'tags' => ['user_list'],. So, instead of manually grabbing user IDs, I just had to know to use 'user_list'. Thanks Gus!

Another colleague, Adam, didn't let me get away with skipping dependency injection in my sample code. He also questioned the difference between keys and contexts and made me think more about this stuff than is probably healthy.

Feb 20 2017
Feb 20

In this article we are going to look at how we can render images using image styles in Drupal 8.

In Drupal 7, rendering an image with a particular style (say the default "thumbnail") was done by calling the theme_image_style() theme function and passing it the image URI and the image style you want to render (+ some other optional parameters):

$image = theme('image_style', array('style_name' => 'thumbnail', 'path' => 'public://my-image.png'));

You'll see this pattern all over the place in Drupal 7 codebases.

The theme function prepares the URL for the image, runs the image through the style processors and returns a themed image (via theme_image()). The function it uses internally for preparing the URL of the image is image_style_url(), which returns the URL of the location where the image is stored after being processed. The derivative may not yet exist, but it gets generated on the first request.

So how do we do it in Drupal 8?

First of all, image styles in Drupal 8 are configuration entities. This means they are created and exported like many other things. Second of all, in Drupal 8 we no longer (should) call theme functions like above directly. What we should do is always return render arrays and expect them to be rendered somewhere down the line. This helps with things like caching etc.

So to render an image with a particular image style, we need to do the following:

$render = [
    '#theme' => 'image_style',
    '#style_name' => 'thumbnail',
    '#uri' => 'public://my-image.png',
    // Optional parameters.
];
This would render the image tag with the image having been processed by the style.

Finally, if we just want the URL of an image with the image style applied, we need to load the image style config entity and ask it for the URL:

$style = \Drupal::entityTypeManager()->getStorage('image_style')->load('thumbnail');
$url = $style->buildUrl('public://my-image.png');

So that is it. You now have the image URL which will generate the image upon the first request.

Remember, though, to inject the entity type manager instead of using the static \Drupal call whenever you are in a context where dependency injection is available.
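A minimal sketch of what that injection could look like in a custom service (the class and method names here are my own hypothetical examples, not from core):

```php
<?php

use Drupal\Core\Entity\EntityTypeManagerInterface;

/**
 * Hypothetical service that builds image style URLs.
 */
class ImageUrlBuilder {

  /**
   * The entity type manager.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  public function __construct(EntityTypeManagerInterface $entity_type_manager) {
    $this->entityTypeManager = $entity_type_manager;
  }

  /**
   * Builds the URL of a style derivative for the given image URI.
   */
  public function buildStyleUrl($uri, $style_name = 'thumbnail') {
    $style = $this->entityTypeManager->getStorage('image_style')->load($style_name);
    return $style->buildUrl($uri);
  }

}
```

The service definition would pass @entity_type.manager as an argument, so the class never has to reach into the global container.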

Feb 20 2017
Feb 20


Savas Labs has been using Docker for our local development and CI environments for some time to streamline our systems. On a recent project, we chose to integrate Phase 2’s Pattern Lab Starter theme to incorporate more front-end components into our standard build. This required building a new Docker image for running applications that the theme depends on. In this post, I’ll share:

  • A Dockerfile used to build an image with Node, npm, PHP, and Composer installed
  • A docker-compose.yml configuration and Docker commands for running theme commands such as npm start from within the container

Along the way, I’ll also provide:

  • A quick overview of why we use Docker for local development
    • This is part of a Docker series we’re publishing, so be on the lookout for more!
  • Tips for building custom images and running common front-end applications inside containers.


We switched to using Docker for local development last year and we love it - so much so that we even proposed a DrupalCon session on our approach and experience that we hope to deliver. Using Docker makes it easy for developers to quickly spin up consistent local development environments that match production. In the past we used Vagrant and virtual machines, even a Drupal-specific flavor, DrupalVM, for these purposes, but we’ve found Docker to be faster when switching between multiple projects, which we often do on any given workday.

Usually we build our Docker images from scratch to closely match production environments. However, for agile development and rapid prototyping, we often make use of public Docker images. In these cases we’ve relied on Wodby’s Docker4Drupal project, which is “a set of docker containers optimized for Drupal.”

We’re also fans of the atomic design methodology and present our clients interactive style guides early to facilitate better collaboration throughout. Real interaction with the design is necessary from the get-go; gone are the days of the static Photoshop file at the outset that “magically” translates to a living design at the end. So when we heard of the Pattern Lab Starter Drupal theme which leverages Pattern Lab (a tool for building pattern-driven user interfaces using atomic design), we were excited to bake the front-end components in to our Docker world. Oh, the beauty of open source!

Building the Docker image

To experiment with the Pattern Lab Starter theme we began with a vanilla Drupal 8 installation, and then quickly spun up our local Docker development environment using Docker4Drupal. We then copied the Pattern Lab Starter code to a new themes/custom/pattern_lab_starter directory in our Drupal project.

Running the Phase 2 Pattern Lab Starter theme requires Node.js, the node package manager npm, PHP, and the PHP dependency manager Composer. Node and npm are required for managing the theme’s node dependencies (such as Gulp, Bower, etc.), while PHP and Composer are required by the theme to run and serve Pattern Lab.

While we could install these applications on the host machine, outside of the Docker image, that defeats the purpose of using Docker. One of the great advantages of virtualization, be it Docker or a full VM, is that you don’t have to rely on installing global dependencies on your local machine. One of the many benefits of this is that it ensures each team member is developing in the same environment.

Unfortunately, while Docker4Drupal provides public images for many applications (such as Nginx, PHP, MariaDB, Mailhog, Redis, Apache Solr, and Varnish), it does not provide images for running the applications required by the Pattern Lab Starter theme.

One of the nice features of Docker though is that it is relatively easy to create a new image that builds upon other images. This is done via a Dockerfile which specifies the commands for creating the image.

To build an image with the applications required by our theme we created a Dockerfile with the following contents:

FROM node:7.1
MAINTAINER Dan Murphy <[email protected]>

RUN apt-get update && \
    apt-get install -y php5-dev && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    # Directory required by Yeoman to run.
    mkdir -p /root/.config/configstore && \
    # Clean up.
    apt-get clean && \
    rm -rf \
      /root/.composer \
      /tmp/* \
      /usr/include/php \
      /usr/lib/php5/build

# Permissions required by Yeoman to run:
RUN chmod g+rwx /root /root/.config /root/.config/configstore

EXPOSE 3001 3050

The commands in this Dockerfile:

  • Set the official Node 7 image as the base image. This base image includes Node and npm.
  • Install PHP 5 and Composer.
  • Make configuration changes necessary for running Yeoman, a popular Node scaffolding system used to create new component folders in Pattern Lab.
  • Expose ports 3001 and 3050 which are necessary for serving the Pattern Lab style guide.

From this Dockerfile we built the image savaslabs/node-php-composer and made it publicly available on DockerHub. Please check it out and use it to your delight!

One piece of advice I have for building images for local development: while Alpine Linux based images may be much smaller in size, their bare-bones nature and lack of common packages bring trade-offs that make them more difficult to build upon. For that reason, we based our image on the standard Debian Jessie Node image rather than the Alpine variant.

This is also why we didn’t just simply start from the wodby/drupal-php:7.0 image and install Node and npm on it. Unfortunately, the wodby/drupal-php image is built from alpine:edge which lacks many of the dependencies required to install Node and npm.

Now a Docker purist might critique this image and recommend only “one process per container”. This is a drawback of this approach, especially since Wodby already provides a PHP image with Composer installed. Ideally, we’d use that in conjunction with separate images that run Node and npm.

However, the theme’s setup makes that difficult. Essentially PHP scripts and Composer commands are baked into the theme’s npm scripts and gulp tasks, making it difficult to untangle them. For example, the npm start command runs Gulp tasks that depend on PHP to generate and serve the Pattern Lab style guide.

Due to these constraints, and since this image is for local development, isn’t being used to deploy a production app, and encapsulates all of the applications required by the Pattern Lab Starter theme, we felt comfortable with this approach.

Using the image

To use this image, we specified it in our project’s docker-compose.yml file (see full file here) by adding the following lines to the services section:

node-php-composer:
  image: savaslabs/node-php-composer:1.2
  ports:
    - "3050:3050"
    - "3001:3001"
  volumes_from:
    - php

This defines the configuration that is applied to a node-php-composer container when spun up. This configuration:

  • Specifies that the container should be created from the savaslabs/node-php-composer image that we built and referenced previously
  • Maps the container ports to our host ports so that we can access the Pattern Labs style guide locally
  • Mounts the project files (that are mounted to the php container) so that they are accessible to the container.

With this service defined in the docker-compose.yml we can start using the theme!

First we spin up the Docker containers by running docker-compose up -d.

Once the containers are running, we can open a Bash shell in the theme directory of the node-php-composer container by running the command:

docker-compose run --rm --service-ports -w /var/www/html/web/themes/custom/pattern_lab_starter node-php-composer /bin/bash

We use the --service-ports option to ensure the ports used for serving the style guide are mapped to the host.

Once inside the container in the theme directory, we install the theme’s dependencies and serve the style guide by running the following commands:

npm install --unsafe-perm
npm start

Voila! Once npm start is running, we can access the Pattern Lab style guide at the URLs that are output, for example http://localhost:3050/pattern-lab/public/.

Note: Docker runs containers as root, so we use the --unsafe-perm flag to run npm install with root privileges. This is okay for local development, but would be a security risk if deploying the container to production. For information on running the container as an unprivileged user, see this documentation.

Gulp and Bower are installed as theme dependencies during npm install, therefore we don’t need either installed globally in the container. However, to run these commands we must shell into the theme directory in the container (just as we did before), and then run Gulp and Bower commands as follows:

  • To install Bower libraries run $(npm bin)/bower install --allow-root {project-name} --save
  • To run arbitrary Gulp commands run $(npm bin)/gulp {command}

Other commands listed in the Pattern Lab Starter theme README can be run in similar ways from within the node-php-composer container.


Using Docker for local development has many benefits, one of which is that developers can run applications required by their project inside containers rather than having to install them globally on their local machines. While we typically think of this in terms of the web stack, it also extends to running applications required for front-end development. The Docker image described in this post allows several commonly used front-end applications to run within a container like the rest of the web stack.

While this blog post demonstrates how to build and use a Docker image specifically for use with the Pattern Lab Starter theme, the methodology can be adapted for other uses. A similar approach could be used with Zivtech’s Bear Skin theme, which is another Pattern Lab based theme, or with other contributed or custom themes that rely on npm, Gulp, Bower, or Composer.

If you have any questions or comments, please post them below!

Feb 18 2017
Feb 18

Drupal's API has a huge number of very useful utility classes and functions, especially in Drupal 8. Although the API docs are great, it's rather impossible to always find every little feature. Today I want to show you the Random utility class, which I had nearly overlooked and found rather by accident.

On a project I'm currently working on, I have defined a custom entity type, for which I needed a quick way to autogenerate dummy and test data. In a first shot, the code generated 50 identical items, all assigned the same static "lorem ipsum" title and description text, and all assigned the same test image file. To improve that behaviour and get distinct data, I was looking in the Drupal API docs for a suitable helper, which I however didn't find at first glance. I was already on the way to integrating a simple text generation script I'd found on GitHub, when I had to pause work. A few days later, while working on a different project, I stumbled across the \Drupal\Component\Utility\Random class, which covers exactly the kind of functionality I was looking for.

The Random class offers different functions to generate names, strings, words (there are semantic differences, e.g. "words" are strings that look like real words, i.e. blind text), whole sentences and paragraphs (consisting of sentences), PHP objects and even generated images.

Here's a snippet out of my generation script, that shows the generation of words, names, paragraphs and especially images, that are stored as file entities and assigned to an image field:

    $random = new Random();
    for ($i = 0; $i < $count; $i++) {
      // Randomly choose the item's owner.
      $owner = array_rand($uids);

      // Define the full image path.
      $destination_dir = sprintf('public://uploads/%s/%s.jpg', $owner, $random->name(10, TRUE));
      // Generate the random image (width 700px, height 466px).
      $image_path = $random->image($destination_dir, '700x466', '700x466');
      // Save the generated image as a file entity.
      $image_file = File::create([
        'uri' => $image_path,
        'uid' => $owner,
        'status' => 1,
      ]);
      $image_file->save();

      // Create the item with randomly generated field values.
      $item = RentableItem::create([
        'type' => 'default',
        'title' => $random->word(rand(5, 12)),
        'state' => 'draft',
        'category' => array_rand($category_ids),
        'description' => $random->paragraphs(2),
        'rent' => rand(1, 15),
        'deposit' => rand(5, 200),
        'uid' => $owner,
        'images' => $image_file->id(),
      ]);
      $item->save();
    }
Please note:

  1. Don't forget to import the namespaces of the Random and File classes or fully qualify them (use Drupal\Component\Utility\Random; and use Drupal\file\Entity\File;).
  2. The RentableItem class refers to a custom entity type that you won't find anywhere. You can use nodes, taxonomy terms or any other content entity instead; that's not the important part of this script.
  3. For better understanding: $uids and $category_ids are arrays of user entity IDs and taxonomy term IDs, defined earlier in the script.
  4. If you look into the docs of the image() function, you'll find wrong and incomplete documentation of the parameters. Stick to the code in my example instead. I've already opened an issue and proposed a patch.

That's it. Have fun generating your own dummy content :) And when you're looking at the Random class, go along and have a look at its siblings in the Drupal\Component\Utility namespace; you'll probably find a lot of other stuff you'll need quite often.

Feb 17 2017
Feb 17

Join as a member to keep Drupal.org thriving. Drupal.org is the home of the Drupal project and the Drupal community. It has been continuously operating since 2001. The engineering team, along with amazing community webmasters, keeps Drupal.org alive and well. As we launch the first membership campaign of 2017, our story is all about this small and productive team.

Join us as we celebrate all that the engineering team has accomplished: from helping grow Drupal adoption to enabling contribution, from improving infrastructure to making development faster. The team does a lot of good for the community, the project, and Drupal.org.

Check out some of their accomplishments, and if you aren't yet a Drupal Association member, join us! Help us continue the work needed to make Drupal.org better, every day.

Share these stories with others - now until our membership drive ends on March 8.


Thank you for supporting our work!

Feb 17 2017
Feb 17

We started regular Drupal usability meetings twice a week almost a year ago in March 2016. That is a long time and we succeeded in supporting many key initiatives in this time, including reviews on new media handling and library functionality, feedback on workflow user experience, outside-in editing and place block functionality. We helped set scope for the changes required to inline form errors on its way to stability. Those are all supporting existing teams working on their respective features where user interfaces are involved.

However, we also started to look at some Drupal components and whether we can gradually improve them. One of the biggest tasks we took on was redesigning the status page, where Drupal's system information is presented and errors and warnings are printed for site owners to resolve. While that looks like a huge monster issue, Roy Scholten in fact posted a breakdown of how the process itself went. If we were to start a fresh issue (which we should have), the process would be much easier to follow and would be more visible. The result is quite remarkable:

New status page in Drupal 8.3

While the new status page is amazing, for me the biggest outcome is how eager people were for this refresh and the example we set as to what is possible to do in Drupal 8. We can take a page, redesign it completely and get it released in a minor release for everyone to use! Some feedback on the result:

This looks great! #drupalnerd

— tiny red flowers (@tinyredflowers) February 12, 2017

Wow, just wow.

— Thomas Donahue (@dasginganinja) February 9, 2017

It's about damn time!

— Jerry Low (@jerrylowm) February 8, 2017

Wow! Just wow! Sometimes Christmas comes early. Sweet!

— Adam Evertsson (@AdamEvertsson) February 8, 2017


— Jan Laureys (@JanLaureys) February 8, 2017

Wow this looks really nice @DrupalUx.

— Eric Heydrich (@ericheydrich) February 8, 2017

A status page that does justice for the elegance of Drupal 8. Looks very nice!

— David Lanier (@nadavoid) February 8, 2017


— Phéna Proxima (@djphenaproxima) February 8, 2017

My mistakes are now much nicer to look at. Great work @DrupalUx !

— dawehner (@da_wehner) February 7, 2017

This is the best

— drupteg (@drupteg) February 7, 2017

If this is not enough proof that we can make significant improvements and that people are more than open to receiving them, I'm not sure what else could be.

To join these efforts, there are several smaller things in the works currently, including improving the bulk operations UI on views forms (or for the more adventurous redesigning filters and bulk operations, which would affect the content, users and even the media admin pages). We are working to update the throbber in the Seven theme, make the add content link finger friendly, and so on. There are many smaller to bigger issues for anyone to work on, we can match you with an issue. We need designers, developers, testers, etc.

Want to be a part of the next celebrated improvement? Join the UX channel on the Drupal Slack (you will need to get an invite first). Meetings are 9pm CEST every Tuesday and 9am CEST every Wednesday. See you there!

Feb 17 2017
Feb 17

Now that Drupal 8.3 is in beta, it is time to look at progress around core initiatives again and see how you can help move one or more of them forward. Once again I asked initiative contributors to help compile this post to inform you all of progress across the board. This is just a sampling of some improvements, there are a lot more that we could not cover here.

Default content and new core theme

The default content and new core theme teams decided to join forces because the goals are intertwined. The teams found it hard to demonstrate good default content without a supporting visual look and vice versa. The plan to go with a farmer's market site changed to a more general publication site, but that still allows for plenty of things to showcase. We are looking for a designer / art director for the project (deadline today!).

Use the Slack channel if you want to help or if you just want to follow our progress.


The media team held a sprint in Berlin in December. Unfortunately none of these media improvements landed in Drupal 8.3; however, we are very close to completing the base media functionality early in Drupal 8.4. There was significant progress on the visual media library too. The next step is to finalise the plugins for images, documents and oEmbed.

Join in the #drupal-media channel on IRC.


The Migrate API became beta in Drupal 8.2.x with 8.2.5, and that applies to 8.3.0 as well. On the other hand, other parts of the migration system, like the Migrate Drupal API, are still alpha stability and received some big updates. Two huge additions are the migration path for Drupal 7 node translations to Drupal 8 content translation, and support for configuration translations (implemented initially for user profile fields).

Join in the #drupal-migrate channel on IRC.


Most of the recent progress on the multilingual initiative was in collaboration with the migration team and that is still heavily ongoing. Further feature development around multilingual features is not on the table currently, as most contributors moved on to more pressing areas given the advances achieved in multilingual with Drupal 8 already. Therefore it is being proposed to officially remove the multilingual initiative from the list.


The work in the PHPUnit initiative is focusing on converting a large part of old Simpletest web tests to PHPUnit based browser tests. The goal is to commit a larger patch on February 21st to the Drupal 8.3.x branch. After that one third of Drupal core’s web tests would be converted to PHPUnit browser tests. We are also discussing the timeline for deprecating Simpletest.

We are also working on improving our JavaScript browser tests in multiple issues. For documentation, there is also a JavaScript browser test tutorial now!

If you want to convert the tests in your contrib / custom module, please read the PHPUnit browser test tutorial and ask for help in case you run into problems. Please follow the PHPUnit initiative issue for status updates. Join us in IRC in #drupal-phpunit.

(This update written by klausi & dawehner)


The biggest user facing change with workflows since the last update is the introduction of the Workflows module as a separate concept from content moderation. Now other modules can use workflows for user levels, commerce and other needs as well, when the workflow has nothing to do with content moderation. Many API changes were also made including support for moderation of non-translatable entity types and entity types without bundles (as long as revisions are enabled). Publishing entities implementing EntityPublishedInterface is also possible now, not just nodes.

Wondering how to join an initiative? Meeting information is listed for each initiative.

Feb 17 2017
Feb 17


2017-02-21 18:00 Europe/Vienna

Event type: 

Online meeting (eg. IRC meeting)

As part of the PHPUnit initiative a considerable part of Simpletests will be converted to PHPUnit based browser tests on February 21st 2017. A backwards compatibility layer has been implemented so that many Simpletests can be converted by just using the new BrowserTestBase base class and moving the test file. There is also a script to automatically convert test files in the conversion issue.
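For maintainers wondering what a converted test looks like, here is a minimal sketch (the module and class names are hypothetical) of a functional test on the new base class:

```php
<?php

namespace Drupal\Tests\mymodule\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * Hypothetical example of a test converted to BrowserTestBase.
 *
 * @group mymodule
 */
class MyModulePageTest extends BrowserTestBase {

  /**
   * Modules to enable for the test.
   */
  public static $modules = ['mymodule'];

  /**
   * Tests that the front page loads for anonymous users.
   */
  public function testFrontPage() {
    $this->drupalGet('<front>');
    $this->assertSession()->statusCodeEquals(200);
  }

}
```

Such tests live in tests/src/Functional/ rather than in the module's Simpletest location, which is the file move the conversion script performs.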

Developers are encouraged to use BrowserTestBase instead of Simpletest as of Drupal 8.3.0, but both test systems are fully supported during the Drupal 8 release cycle. The timeline for the deprecation of Simpletest's WebTestBase is under discussion.

Feb 16 2017
Mike and Matt sit down (literally!) with Lullabot's sales team. Learn how Lullabot does sales, and what it takes to sell large projects.
Feb 16 2017

This blog is all about how Drupal handles the mail system and the stages an email goes through.
To send an email in Drupal, we need to take care of two rules:

  1. Declare all the required properties under hook_mail().
  2. Call drupal_mail() with the appropriate arguments to actually send the email
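A minimal Drupal 7 sketch of those two rules, assuming a hypothetical module named `mymodule` with a made-up `welcome` email key:

```php
<?php

/**
 * Implements hook_mail().
 *
 * Declares the message properties for the 'welcome' email key.
 * Both 'mymodule' and the 'welcome' key are hypothetical examples.
 */
function mymodule_mail($key, &$message, $params) {
  switch ($key) {
    case 'welcome':
      $message['subject'] = t('Welcome to our site');
      $message['body'][] = t('Hello @name, thanks for joining.', array('@name' => $params['name']));
      break;
  }
}

/**
 * Sends the email by calling drupal_mail() with the module name,
 * the email key, the recipient, a language, and parameters.
 */
function mymodule_send_welcome($account) {
  drupal_mail('mymodule', 'welcome', $account->mail, user_preferred_language($account), array('name' => $account->name));
}
```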

However, on larger and more complex sites, the steps above won’t be enough. Drupal gives us the flexibility to customize the email sending process, but it’s necessary to know how things work behind the scenes first. In this article I’ll show you how you can customize and extend the Drupal mail system to fulfill your needs.

When sending an email, the drupal_mail() function uses a mail system class to do the actual sending. Every mail system needs to implement the MailSystemInterface class to declare its own sending behaviour.
MailSystemInterface declares two methods: 1) format() and 2) mail().

As the names suggest, the format() method is used for formatting an email before it gets sent, while the mail() method defines the actual email sending behaviour.
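A skeleton of such a class might look like the following. This is a sketch, not a real module: `MyCustomMailSystem` is a hypothetical name, and the format() body mirrors the plain-text handling that DefaultMailSystem performs.

```php
<?php

/**
 * A hypothetical custom mail system class for Drupal 7.
 *
 * Every mail system must implement MailSystemInterface and provide
 * its own format() and mail() methods.
 */
class MyCustomMailSystem implements MailSystemInterface {

  /**
   * Formats the message before it is sent.
   */
  public function format(array $message) {
    // Join the body array into one string, as DefaultMailSystem does.
    $message['body'] = implode("\n\n", $message['body']);
    // Convert any HTML to plain text and wrap the lines for mailing.
    $message['body'] = drupal_html_to_text($message['body']);
    $message['body'] = drupal_wrap_mail($message['body']);
    return $message;
  }

  /**
   * Defines the actual sending behaviour.
   */
  public function mail(array $message) {
    // This could hand the message to an external service instead of
    // PHP's mail(). Must return TRUE on success, FALSE on failure.
    return mail($message['to'], $message['subject'], $message['body']);
  }

}
```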

Flow of drupal_mail()

  • Set up default email message properties.
  • Build the email (subject, body, and other required parameters) by calling hook_mail().
  • Allow modules to alter the email by calling hook_mail_alter().
  • Determine which mail system class is responsible for handling the email.
  • Format the email message using the format() method of that mail system class.
  • Send the email using the mail() method of that mail system class.
  • Return the processed email.

So you can see that the mail system class plays an important role in sending email. By default, Drupal uses the DefaultMailSystem class to send email. Before discussing this class in more detail, let’s check out the Drupal 7 Mail System module.

The Mail System module in Drupal provides an administrative UI and a developer API for managing which mail system class is used. It allows different backends to be used for formatting and sending emails by default, per module, and per mail key. Additionally, a theme can be configured that is used for sent mails. In Drupal 7, that must be enabled for each template; in Drupal 8, it works reliably for any template being rendered while building and sending emails.

Mail system configuration

Drupal 7 doesn’t provide an administrative UI to adjust the mail_system variable, which holds “email key” => “mail system class” pairs that determine which mail system class drupal_mail() will use for a specific email key. This is where the Mail System module comes in. It allows you to adjust the mail_system variable through an administrative UI, and it provides some other useful configuration options too.

The Mail System module’s settings can be found under Configuration > System > Mail System inside your Drupal installation.

Mail System Settings

The default-system “email key” always exists, and it determines which mail system class will be used by default for all outgoing emails. As you can see, its default value is DefaultMailSystem, which is why this class is used by drupal_mail() by default.

The mailsystem_theme key is a little different, since it defines which theme will be used to render emails. Let’s assume there is a mail-system-related module that uses a template file to render emails. That template file obviously belongs to a hook_theme() entry so that it can be used in the appropriate theme() calls.
The Mail System module checks every theme registry entry for a specific “mail theme” key/property, and if it exists in a particular entry, the specified mailsystem_theme value is used to search for more specific template files when the corresponding theme hook is called.

Site wide mailsystem configuration

New Class

Imagine you have two or more mail system classes available in your system, provided by different modules. Each one has a format() and a mail() method. Now you need a custom mail system class that uses the format() method from one class and the mail() method from another. This is where the Mail System module’s “New Class” tool comes in: it allows you to easily combine the behavior of two mail system classes by selecting the format() and mail() methods for the new class.

New class configuration

New Setting

As we have seen, the mail_system variable’s default-system email key defines the site-wide mail system class that is used for all emails. However, there can be situations where a single site-wide mail system class is not enough. In cases like these, the Mail System module allows you to easily add new email keys to the mail_system variable by selecting a module and a specific email key from its hook_mail() implementation.
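Under the hood, each new setting is just another key in the mail_system variable. A hypothetical sketch of doing the same in code (`mymodule` and `MyCustomMailSystem` are made-up names):

```php
<?php

// The mail_system variable maps "module" or "module_key" entries to a
// mail system class name.
$mail_system = variable_get('mail_system', array('default-system' => 'DefaultMailSystem'));

// Route only mymodule's 'welcome' emails through a custom class;
// everything else keeps using the site-wide default.
$mail_system['mymodule_welcome'] = 'MyCustomMailSystem';
variable_set('mail_system', $mail_system);
```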

New mail system setting

Commonly used mail system classes
My suggestion to all learners: before you start writing your own mail system class, it is a good idea to study the most commonly used ones, because some of them may already fulfill your needs and save you a lot of time.

DefaultMailSystem is Drupal's built-in default mail system class. If you don’t modify the site-wide mail system class, this one will be used by drupal_mail() by default.

DefaultMailSystem’s format() method enforces the emails' output as plain text, therefore it doesn't matter how your messages are formatted — the result will always be plain text. The mail() method of the class sends the emails via PHP's mail() function, so a correctly set up and working email sending service is required for it.

Email sending is an important task that affects most projects, so it is always good to know how to get the most out of the system. I hope this overview of the options for customizing and extending Drupal’s mail system helps you choose the best solution for your needs.

Feb 16 2017

With the ever-growing Drupal community, a beginner is often lost in the vast number of resources. With the increasing number of developers at Valuebound, I spoke to some of our seasoned developers for their opinions about the skills a Drupal developer should have, and also sifted through tons of material from Stack Overflow and other places.
The skill set we discuss here will give you a clear idea of where you stand, what you know, and what you do not know. Of course, fill me in on anything I have missed, and I will quickly add your suggestions. To begin, here are six things a Drupal developer should know.

Technology Stack

From what I have understood, the very basic things a Drupal developer is expected to know in terms of languages are much the same as for web development in general. Since Drupal is built with PHP, a good grip on PHP is the place to begin, and SQL handles the database side.

  • PHP


  • jQuery

Version control — Git

To collaborate on a project, Drupal developers use the Git version control software. Learning the Git basics will help you stay organized and give you the essential skills for working with a team. Even if you're the only person on the project, there are still lots of advantages to using a version control system as part of your daily workflow.

  • A Vision for Version Control — What is version control, and why should you be using it for all of your projects?

  • Git for beginners — Everything you need to know to get started using Git.

  • Apply and Create Patches — How to apply a patch to a module provided by another developer, and how to create your own so you can contribute your fixes back to the community.

Drupal Skills

  • Research and install modules according to project requirements

  • Configure basic modules and core settings to get a site running

  • Drush command line tool

  • Make a custom theme from scratch that validates with good HTML/CSS.

  • Able to customize and tweak forms, core behaviour, and themes without altering core files, using template.php or custom modules instead.

  • Can make forms from scratch using the Form API, with validation and posting back to the database/email.

  • Knowledge of key Drupal APIs like the Queue API, Node API, and Entity API.

The Form API is not the only one. You should understand the menu system (page, access, title, and delivery callbacks, how to pass parameters to them, etc.), the Queue API for asynchronous operations, the Batch API for long-running operations, the Entity and Field APIs for user-editable structured data, the Theme API and render arrays for anything presentational, the Schema and Database APIs, the File API, the Cache API, and the Localization API.

  • Can create custom modules from scratch utilising core hooks and module hooks.

  • PHP: Drupal is a PHP framework, so to really understand and use it, you need to understand PHP.

  • SQL: the list of SQL servers that Drupal can use is growing, but you will need to understand SQL, relational databases, and how to set up some basic architecture.

  • JavaScript (and jQuery): Drupal uses the jQuery JS library, so it will be a lot easier if you not only know how to use JavaScript, but also understand how to use jQuery and some of its concepts.

  • OOP (object-oriented programming)

  • Web services: the RESTful Web Services API is new in Drupal 8.

Modules: there is no must-know module list, since it will all depend on the site and how you use the modules, but the following are widely used:


  • Know how to make basic views and blocks.

  • Know how to make more complex views with relationships and terms.

  • Know how to use hook_views_query_alter, to make complex queries.

  • Know how to use hook_views_default_views, to create specific views.
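As an example of the first of those hooks, a hook_views_query_alter() implementation can add conditions to a view's query before it runs. A hypothetical Drupal 7 sketch (`mymodule` and the `articles` view are made-up names):

```php
<?php

/**
 * Implements hook_views_query_alter().
 *
 * Adds an extra condition to a view named 'articles' before its
 * query runs, regardless of the filters configured in the UI.
 */
function mymodule_views_query_alter(&$view, &$query) {
  if ($view->name == 'articles') {
    // Only show published nodes.
    $query->add_where(0, 'node.status', 1, '=');
  }
}
```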


The Panels module allows a site administrator to create customized layouts for multiple uses.

At its core it is a drag and drop content manager that lets you visually design a layout and place content within that layout.

Other Skills

  • Be involved with the community and contributions: understand the naming conventions and the version control system, and ideally have submitted some code or revisions, such as a module (however simple) or a patch. The process of getting an account and getting your first code in is instructive about the community and its standards.

  • Ability to use Drush to update or setup a site

  • Being able to modify existing functionality (core or module) without touching core or module code, and knowing whether the change belongs in the template layer or a custom module.

  • A good understanding of client-server architecture and how servers and browsers work, plus knowledge of PHP, MySQL, and template engines. And of course, you should also read the Drupal documentation.

For any kind of development setup, even in Drupal, there is a range of roles that cluster together to build and support Drupal applications, including:

System admins or DevOps engineers run the live stack and handle the deployment of Drupal sites from dev to live. They deal with performance issues and set up content delivery networks, Varnish, and Memcache: basically everything related to things during and after deployment. These facilitators also help run drills to avoid incidents like the one that happened at GitLab recently.

QA - Tests to ensure quality adherence and that requirements are met; sets up automated testing environments, and schedules and runs automated tests.

Project Manager / Scrum Master - Runs the team, removes obstacles to progress, and ensures on-time delivery of the project within budget.

Product Owner - Gathers the requirements and works closely with the project manager to prioritize the backlog. Normally has final sign-off on all changes.

Design / UX - Comes up with the design and user experience. They help build prototypes that can then be converted into a Drupal theme.

A complete team consists of all of the roles above, and you can eventually choose a profile to evolve into.

In upcoming posts we will discuss the things you should know that will give you an edge as a Drupal developer.



Feb 16 2017

There's no shortage of generic web performance optimization advice out there. You probably have a checklist of things to do to speed up your site. Maybe your hosting company sent you a list of performance best practices. Or maybe you use an automated recommendation service.

You've gone through all the checklist compliance work, but haven't seen any change in your site's speed. What's going on here? You added a CDN. You optimized your image sizes. You removed unused code. You even turned off database logging on your Drupal site and no one can read the logs anymore! But it should be worth it, right? You followed best practice recommendations, so why don't you see an improvement?

Here are three reasons why generic performance checklists don't usually help, and what really does.

1. Micro-Optimizations

Generic performance recommendations don't provide you with a sense of scale for how effective they'll be, so how do you know if they're worth doing?

People will say, "Well, if it's an improvement, it's an improvement, and we should do it. You're not against improvements, are you?" This logic only works if you have infinite time or money. You wouldn't spend $2,000 on changes you knew would only shave 1ms off a page that currently takes 10s to load.

Long performance checklists are usually full of well-meaning suggestions that turn out to be micro-optimizations on your specific site. It makes for an impressive list. We fall for it because it plays into our desire for completion. We think, "ROI be damned! There's an item on this list and we have got to do it."

Just try to remember: ABP. Always Be Prioritizing.

You don't have to tackle every item on the list just for completion's sake. You need to measure optimizations to determine whether you're adding a micro-optimization or slaying a serious bottleneck.

2. Redundant Caching

In security, the advice is to add more layers of protection. In caching, not so much. Adding redundant caching will often have little to no effect on your page load time.

Caching lets your process take a shortcut the majority of the time. Imagine a kid who takes a shortcut on her walk to school. She cuts through her neighbor's backyard instead of going around the block. One in 10,000 times, there's a rabid squirrel in the yard, so she takes the long way. Her entrepreneurial friend offers to sell her a map to a new shortcut. It's a best practice! It cuts off time from the original full route that she almost never uses but it's longer than her usual shortcut. It will save her a little time on rabid squirrel days. Is it worth the price?

The benefit of a redundancy like this is marginal, so if there's a significant time or cost investment, it's probably not worth it. It's better to focus on getting the most bang for your buck. Keep in mind that the time involved in adding caching includes making sure the new caches invalidate properly, so that your site does not show stale content (and leave your editors calling support to report a problem when their new post does not appear promptly).

3. Bottlenecks or Bust

Simply speaking, page load time consists of two main layers. First there is the server-side (back-end) which creates the HTML for a web page. Next, the client-side (front-end) renders it, adding the images, CSS, and JavaScript in your web browser.

The first step to effective performance optimization is to determine which layer is slow. It may be both. Developers tend to optimize the layer of their expertise and ignore the other one. It's common to focus efforts on the wrong layer.

Now on the back-end, a lot of the process occurs in series. One thing happens, and then another. First the web server routes a request. Then a PHP function runs. And another. It calls the database with a query. One task completes and then the next one begins. If you decrease the time of any of the tasks, the total time will decrease. If you do enough micro-optimizations, they can actually add up to something perceptible.

But the front-end, generally speaking, is different. The browser tries to do many things all at the same time (in parallel). This changes the game completely.

Imagine you and your brother start a company called Speedy Car Cleaning. You want your customers to get the fastest car cleaning possible, so you decide that while you wash a car, your brother will vacuum at the same time. One step doesn't rely on the other to be completed first, so you'll work in parallel. It takes you five minutes to wash a car, and it takes your brother two minutes to vacuum it. Total time to clean a car? Five minutes. You want to speed things up even more, so your brother buys a more powerful vacuum and now it only takes him one minute. What's the new total time to clean a car?

If you said five minutes, you are correct. When processes run in parallel, the slowest process is the only one that impacts total time.

This is a lesson of factory optimization as well. A factory has many machines and stations running in parallel, so if you speed up the process at any point that is not the bottleneck, you'll have no impact on the throughput. Not a small impact - no impact.
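The arithmetic behind both examples is simple: serial task times add up, while parallel work is bounded by the slowest task alone. A plain-PHP illustration (the millisecond figures are made up):

```php
<?php

// Serial (back-end style): tasks run one after another, so times add up.
$backend = array('routing' => 50, 'php' => 300, 'database' => 150); // ms
$serialTotal = array_sum($backend); // 500 ms

// Parallel (front-end style): the slowest task alone sets the total.
$wash = 5;   // minutes
$vacuum = 2; // minutes
$parallelTotal = max($wash, $vacuum); // 5 minutes

// Speeding up the non-bottleneck task changes nothing.
$fasterVacuum = 1;
$stillFive = max($wash, $fasterVacuum); // still 5 minutes

echo "$serialTotal $parallelTotal $stillFive\n"; // prints "500 5 5"
```

This is why shaving time off a serial back-end task always helps a little, while optimizing a non-bottleneck parallel task helps not at all.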

Ok, then what can we do?

So is it worthless to follow best practices to optimize your speed? No. You might get lucky, and those optimizations will make a big impact. They have their place, especially if you have a fairly typical site.

But if you don't see results from following guesses about why your site is slow, there's only one sure way to speed things up.

You have to find out what's really going on.

Establish a benchmark of current performance. Determine which layer contributes the most to the total time. Use profiling tools to find where the big costs are on the back-end and the bottlenecks on the front-end. Measure the effects of your improvements.

If your site's performance is an issue, ask an expert for a performance audit. Make sure they have expertise with your server infrastructure, your CMS or framework, and front-end performance. With methodical analysis and profiling measurements, they'll get to the bottom of it. And don't sweat those 'best practice' checklists.

Feb 16 2017

Every brand needs a logo. When Dries Buytaert decided to release the software behind back in 2001, making Drupal an open source project, he needed a symbol too. So Kristjan Jansen and Steven Wittens joined forces and stylized the Druplicon, a drop with eyes, a curved nose, and a mischievous smile. Since then, the Druplicon has been an indispensable part of the Drupal community.

The Druplicon is relatively easy to manage, moderate, and share, so people in the Drupal community like working with it very much. Back in December 2016, at Christmas time, we presented ways to style your Druplicon to lift your Christmas spirit. Now, we've decided it's time to present Druplicons that were used in the past, and Druplicons that are still used across the Drupal community, in a variety of shapes.

Our journey will start with Druplicons in the shapes of humans and superhumans. We'll focus on the latter first. The visual identity of Drupal in that area is, in practically all aspects, covered by Drupal Heroes, a group that is »keeping the Internet safe of bugs and bad programming practices«.


Drupal heroes 1


They have designed Superman, Spiderman, Loki, Flash, Batman, Catwoman, Hulk ...


Drupal heroes 2


We'll just add Druplicon in the shape of Super Saiyan from Dragon Ball.


Drupal heroe 3


On the other hand, Druplicon is not just designed with superpowers, but it takes normal humans into account as well. For example,

Grandfather (Drupal Camp RS)




Pirate (Bay Area Drupal Camp)




Gentleman (MidCamp 2017)








The fun is, of course, not over. We'll look at Druplicons from some other fields in the future as well. If you have any Druplicon in mind that has not been published here, feel free to post it on our Twitter account. But remember, superhumans and humans only! (Try to avoid human professions as well, because they are going to be explored in the future.) The rest is still coming in our next blog posts.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
