Nov 06 2019

We’ve recently given the site a new facelift, and stumbled across an interesting problem during development while implementing the new carousel on their product pages - more specifically, in the handling of the product images themselves.

The design brief stipulated that the transparent studio style product images required a light grey background behind them, giving the impression of a product floating nicely in the middle of a light grey surrounding.

We had 2 problems here:

1. A lot of the studio style product images didn’t have sufficient transparent space around them and we ended up with unsightly results; images would be touching the edges of their container and weren’t sitting in the middle as intended. We needed to cater for these types of images.

2. We have a mix of studio and lifestyle shots. We couldn't just apply the same image style to both types of image; we would have to come up with something magic/clever to distinguish a studio shot from a lifestyle shot and then apply an image style accordingly.

Given that we are ComputerMinds (we massively care about our clients and we love a challenge), and knowing that the client would otherwise have to manually go back, edit and re-upload thousands of images, we decided it would be cool to come up with a code solution for them. Below is a brief outline of how we achieved a code solution to a design/content problem. Enjoy!

Lifestyle and studio shots

Our concept was pretty clear: for images that had at least 1 - and no more than 3 - edges containing transparent pixels, apply a Drupal image style with a custom effect that would “pad” out the image. The idea was: if an image has 4 transparent edges, it already has sufficient space around it. If it has no transparent edges, we know it’s a lifestyle product shot, e.g. a radiator in a lounge.

I started looking at the possibilities of detecting transparent pixels with PHP and came across a handy function from the PHP GD image library called imagecolorat(). Using this function and some hexadecimal converter madness (please don't ask me to explain!), we can detect what type of pixel we are looking at given a set of coordinates.

// The result of $transparency will be an integer between 0 and 127 
// 127 is transparent, 0 is opaque/solid.
$rgba = imagecolorat($image_resource, $x, $y);
$transparency = ($rgba >> 24) & 0x7F;
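For the curious, the "madness" is just bit masking. GD packs each pixel into a single integer, with a 7-bit alpha channel in bits 24-30, so shifting and masking recovers it. A purely illustrative Python sketch of the arithmetic:

```python
# GD packs a pixel roughly as 0xAARRGGBB, where the alpha channel uses
# only 7 bits: 0 = fully opaque, 127 = fully transparent.
def gd_alpha(rgba):
    # Shift the alpha bits down to the bottom, then mask everything else off.
    return (rgba >> 24) & 0x7F

# A fully transparent white pixel in GD's format:
pixel = (127 << 24) | (255 << 16) | (255 << 8) | 255
print(gd_alpha(pixel))       # 127 (transparent)
print(gd_alpha(0x00FF0000))  # 0 (opaque red)
```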

Now we needed to run this function for every pixel along all four edges. So, first things first: grab the image width and height and subtract 1 from each. Subtracting 1 ensures you won’t hit a PHP notice about being “out of bounds” - we’re not playing a round of golf here :). Next, we need to sort out the coordinate ranges for us to loop over:

// Top side
X = 0 to $image_width
Y = 0

// Bottom side
X = 0 to $image_width
Y = $image_height

// Left side
X = 0
Y = 0 to $image_height

// Right side
X = $image_width
Y = 0 to $image_height

For each permutation of coordinates (x=0, y=750; x=1, y=750; x=2, y=750 etc.), we simply check each pixel result from imagecolorat() and save the result to an array to check later on. Once we have detected both a transparent and a non-transparent pixel (remembering to check against our magic number 127), we break out, because we have all the information we need from this particular edge.
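The edge scan described above can be sketched out in a few lines. This is a hypothetical Python model, not our production PHP: to keep it self-contained it works on a plain 2D grid of GD alpha values (0 = opaque, 127 = fully transparent) instead of a real image resource, and the function names are made up.

```python
def scan_edge(alpha, coords):
    """Walk one edge; stop early once both kinds of pixel have been seen."""
    seen_transparent = seen_solid = False
    for x, y in coords:
        if alpha[y][x] == 127:
            seen_transparent = True
        else:
            seen_solid = True
        if seen_transparent and seen_solid:
            break  # All the information we need from this edge.
    return seen_transparent

def transparent_edge_count(alpha):
    """How many of the four edges contain at least one transparent pixel?"""
    h = len(alpha) - 1     # Subtract 1 to stay in bounds, as in the article.
    w = len(alpha[0]) - 1
    edges = [
        [(x, 0) for x in range(w + 1)],  # top
        [(x, h) for x in range(w + 1)],  # bottom
        [(0, y) for y in range(h + 1)],  # left
        [(w, y) for y in range(h + 1)],  # right
    ]
    return sum(scan_edge(alpha, edge) for edge in edges)

def needs_padding(alpha):
    """Pad only when between 1 and 3 edges are transparent."""
    return 1 <= transparent_edge_count(alpha) <= 3
```

A grid with no transparent edges (a lifestyle shot) or all four transparent (already enough space) is left alone; anything in between gets padded.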

After completing this process for all four sides, we do a simple check to see if we have a mix of transparent and non-transparent edges. If we do, we pass the image along to our custom image style for processing.

Our custom Drupal image style uses the “define canvas” image processor from the contrib module imagecache_actions. We define our own image style effect, but use imagecache_actions’ “define canvas” processor to transform the image. This is where we add X pixels of “padding” around the image.
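Conceptually, a "define canvas" operation just places the original image in the middle of a larger canvas. Here is a toy Python sketch of that idea on a 2D pixel grid - the real work is done by imagecache_actions and GD, and this function name is ours, not theirs:

```python
def pad_canvas(pixels, pad, fill=0x7F000000):
    """Place `pixels` in the centre of a canvas `pad` pixels larger on
    every side, filling the new space with transparent pixels (GD packs
    full transparency as alpha 127 in bits 24-30, hence 0x7F000000)."""
    w = len(pixels[0])
    new_row = [fill] * (w + 2 * pad)
    padded = [list(new_row) for _ in range(pad)]        # Rows above.
    for row in pixels:
        padded.append([fill] * pad + list(row) + [fill] * pad)
    padded.extend([list(new_row) for _ in range(pad)])  # Rows below.
    return padded
```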

In order to create a custom Drupal image effect, we need to implement hook_image_effect_info() and place the effect code into the "effect callback". See below for an example.

/**
 * Implements hook_image_effect_info().
 */
function MY_MODULE_image_effect_info() {
  $effects = array();
  // Remember to replace MY_MODULE with the name of your module (in lower case).
  $effects['MY_MODULE_transparent_padding'] = array(
    'label' => t('Transparent padding'),
    'help' => t('Applies padding if an image has a mix of transparent and solid pixels around its edges.'),
    'effect callback' => 'MY_MODULE_transparent_padding_effect_callback',
    'form callback' => 'MY_MODULE_transparent_padding_form',
    'summary theme' => 'transparent_padding_summary',
  );
  return $effects;
}

/**
 * Callback for the MY_MODULE_transparent_padding image effect.
 */
function MY_MODULE_transparent_padding_effect_callback(stdClass $image, array $data) {
  if ($image->info['mime_type'] == 'image/png') {
    $image_height = ($image->info['height'] - 1);
    $image_width = ($image->info['width'] - 1);
    $image_resource = $image->resource;

    // Do the coordinate magic to determine transparent and non-transparent
    // pixels, then build up the $data array (the background colour and the
    // number of pixels to pad the image out by).
    // Finally, use imagecache_actions' "define canvas" effect, passing in $data.
    $success = image_toolkit_invoke('definecanvas', $image, array($data));
    return $success;
  }
  return FALSE;
}

Below is our custom image style effect's settings form, as defined above in hook_image_effect_info() via the "form callback". (The form was taken directly from imagecache_actions' definecanvas effect, with a couple of edits - so credit goes to them.)

Custom image style settings page

And the result is shown below: a before and after. Much better! Furthermore, we’ve saved the client from having to manually edit thousands of images and then manually re-upload them. Win win!

Image before processing

Image after processing

Oct 29 2019

Does your site have an online store? Are you looking to have sales on during the Black Friday and Cyber Monday period? Then read on - here are some quick tips to help you make the most of the sales period, ensuring everything goes as smoothly as possible and that both your marketing and your site are up to scratch.

Marketing / Promotion

You should have a plan in place for marketing your deals, otherwise how are people going to find out about them? This could include: promotions on your own website, promotion via social media (e.g. Facebook, Twitter, Instagram) and also via email.

Social media posts and emails should be planned out in advance and posted regularly enough in the run up so that people are fully aware about when your sale will be taking place and what offers they might be able to get (be careful not to post *too much* and annoy people though!).

Having promotional material on your site homepage is a great idea as it’s the first page a lot of customers will enter the site through. Effective promotional content present in the run-up to the sales period should help ensure that you are maximising the potential number of customers that will return to check out your sales. You should ensure that any promotional messages or content remain in place until after you have finished your sales.

Analytics - Sales goals

If your site was launched at least a year ago and you have Google Analytics in place (you’d be crazy not to, right?) then you should hopefully have some reliable data that you can look at from the same sales period last year, in order to better prepare and improve upon for this year.

  • What pages / products were the most popular last year?
  • Was there anything in particular about these pages / products e.g. better design / marketing that you think helped over others?
  • Are there any sales targets that you want to set or update and improve upon? 
  • How many more people than usual visited your site?
  • Were your SEO keywords effective? If not, can they be improved upon?
  • How many users visit from mobile devices? Is your site as mobile-friendly as it can be?

Once the sales period is over for this year and you are armed with all the analytics data from this year and the previous year(s)... then you can compare and see if you have hit or missed any sales targets that you may have set. You’ll hopefully also be able to see what worked well (or not so well) in your SEO and marketing to note areas of improvement for subsequent sales you may have in the future.


Arguably one of the most important things to consider is the discounts you will be offering on the site. If you don’t usually offer discounts or use discount codes on the site, then it would be wise to test any new discount logic or discount codes on a test environment before they go live. If there are any unexpected issues with the discounts not working correctly or not behaving in the way that you’d expect, then you (or your developer!) have a chance to fix these things before you actually put them live.

If you have a Drupal site with e-commerce and use discounts, then you will most likely be using the commerce_discount module to manage discounts on the site. This module generally works well at a basic level, although once you try to add in some more complex discounting logic, things can sometimes stop working properly. If you are having any commerce_discount related issues on your site and need them solved, or need some bespoke development done to handle your more complex discounting logic, then get in contact with us.


A final point that you may not have thought of regards the (hopeful) increase in orders going through the site in the sales period. Is your stock management up to date and adequate to handle the extra sales? Overselling any products and having to cancel orders is not good for the customer experience and is likely to lead to negative reviews, potentially damaging your reputation. If your site is using Drupal Commerce then the commerce_stock contrib module is your best friend here. Amongst other things it allows you to have "add to cart" validation that can prevent users from adding products that are out of stock, and also disable the "add to cart" button when the stock level reaches zero.

Similarly you should think about how you’ll meet your usual shipping estimates that you have displayed on your website. If you anticipate that you won’t be able to keep to your usual timings then you should display a message in a prominent place on the site that the order processing and shipping may take longer than usual during the sales period.

The (Slightly) Technical Bit - Site load, speed, optimisation.

Well before the sales period even begins you should ensure that your site is running at an optimal speed. Google’s PageSpeed Insights can be used to benchmark your site and give you a detailed analysis of page load times and where you can improve by optimising your assets, code and more. The quicker your pages load, the more likely it is that people stick around on your site and don’t get frustrated trying to access what they want - and this is all the more important when your site might get a big influx of visitors during the sales period.

Following on from the Analytics section above: any data you have for site visitor numbers from the previous year should give you a good indication of the number of visitors that you can expect to get this year. If a larger-than-usual number of people visiting your site caused any server timeouts or other connection issues last year, then you should ensure the server and site are better prepared for the influx this year.

I won’t go into too much detail here as benchmarking and optimising a site is a whole article in itself - but a few key points and quick wins to help your site are:

  • Ensuring you have an appropriate caching setup on your site. On a Drupal site as a bare minimum this would include enabling CSS & Javascript aggregation and ensuring Drupal page caching is enabled. The use of a reverse proxy such as Varnish is highly recommended as well.
  • Optimising images the site is serving up with a third party image optimising service such as Kraken. This will shrink file sizes without any noticeable decline in quality.
  • Using a CDN such as Amazon CloudFront to serve up your assets.
  • Minifying Javascript. On a Drupal site this is easy with the use of a module such as Minify JS.
  • Reducing the number of DOM elements to improve page load time.

The PageSpeed Insights tool mentioned above is useful for testing the optimisations you make, so that you know when you are actually making a (positive!) difference. Hopefully your developer has already done most of the above but if not, make sure they do in good time before the sales period begins.

And finally, if you haven't already, it's a good idea to let your web team know that you are planning a Black Friday sale. This will ensure they're not surprised when the server traffic spikes and they can make recommendations (if needed) to handle the increased traffic. The last thing you want is your site falling down and costing you sales!

Oct 15 2019

Last month I began my second decade of working with Drupal! How crazy is that? I started at ComputerMinds in 2009. Drupalcon Paris was my first week on the job - I just remember taking so many notes, as if it were a university course! I had a lot to learn then, but now I can look back with a much more experienced head, hopefully wiser, with some sort of perspective.

The conference room before the keynote at Drupalcon Paris 2009, when my career with Drupal began.

My first steps with Drupal were on Drupal 6. It could do a lot (at least once CCK and Views were installed), and Wordpress was much smaller so the two were still occupying a similar space in the market. Most of the sites we built were trying to be a mini social network, a blog, or a brochure site, sometimes with e-commerce thrown in. There were rounded corners everywhere, garish pinks, and some terribly brittle javascript. Supporting Internet Explorer 6 was a nightmare, but still necessary.

It's probably only over the last few years that it became clear that Drupal's strength is its ability to be turned to just about anything that a website needs to do. That has meant that whilst alternative products have picked up the simpler sites, Drupal has been brilliant for projects with complex requirements. Integrating with CRMs and all sorts of other APIs, handling enormous traffic loads, providing content for apps - this kind of stuff has always been Drupal's jam. You just have to know how to get it hooked up in all the right places!

Speaking of hooks, it's been interesting to see Drupal move from its famous magical hooks towards event-driven architecture. For me, that single shift represented an enormous change in direction for Drupal. I believe the events/subscriber pattern, as a part of a wider effort in Drupal 8 to deliberately use existing well-defined design patterns and solutions, is a sign of a much more mature and professional platform. Most coding problems have been solved well elsewhere; we shouldn't reinvent the wheel! (Although I know I can be guilty of that!) That's just one example of how the Drupal ecosystem has become more professional over the last ten years. Formal testing is another example. Many people have felt left behind by this shift, which came as Drupal identified a need it could meet in the enterprise world. 'Enterprise' gets used as a dirty word sometimes - but to be frank, there's more money and more suitable opportunity there!

That is something the Drupal community has to be honest about. It is rightly aiming to champion diversity, and be as accessible as possible for as many as possible across the world (I especially love that Drupal is now so good for multilingual projects). It's not like Drupal is suddenly inappropriate for smaller projects - in fact, I'd still suggest it's far ahead in many aspects.

But money makes things happen, and gives people their livelihood. I appreciate seeing honesty and innovation about this coming from community leaders. Thankfully those kinds of values are what drive the Drupal project, even if money is often the facilitator. As a community we must always fight to keep those things in the right relation to each other: money has an inevitable influence that we must accept, but it must be led by us and our values, not the other way around. I should add that I am very aware that I am privileged to be a developer in a leading Drupal agency, so my opinion will be shaped by that position!

To end with honesty and innovation myself, I would love to see the following carried into the Drupal project's next 10 years, which I saw the community espouse over the last decade. I know I need to grow in these things myself!

  • Maintain excellence, even as the make-up of the community changes.
  • Deliberately listen to under-represented portions of the community, as an intentional force against the skewing effect of money/power.
  • Keep watch for what competitors and other relevant products are doing, to incorporate worthwhile things wherever possible.
  • Reflect on the strengths & weaknesses of Drupal (and its community) with honesty. Let's make the most of what makes Drupal what it is and not be afraid for it to focus on that; the same goes for us as individuals.

I'm proud to work with Drupal, long may it continue!

Jul 02 2019

Here at ComputerMinds, we think of ourselves as Drupal specialists for the UK, but we don't limit ourselves to that. We offer clients a close working relationship and our general flexibility to get stuck into applying our skills to most problems. One of our clients, Alfresco, has come to trust us with more than just our Drupal wisdom. They wanted a new hub that would bring together documentation for a variety of their open source products, which are already on Github. Their documentation was written as Markdown files, so the hub would need to import and transform those files into HTML pages for the web. Essentially, a static site generator was needed.

GatsbyJS is one of these static site generators, based on the incredibly popular React javascript library. It's very easy to get it to work with Markdown files, it's blazing fast, and helps us fulfil a variety of other requirements too. As much as we love Drupal, it's not the right tool for every job. (That's a subject for another day!) A 'JAMstack' approach was appealing for this project because the site would be a clever front-end for content that already existed elsewhere. There would be no need to have user authentication, content workflow, moderation, or many other dynamic website features that Drupal excels at.

A documentation page on the completed Alfresco Builder Network.

For many of us at ComputerMinds, this was our first dive into working with React, let alone Gatsby itself. So the wonderful documentation and tutorials for Gatsby were a huge help. As our project was for displaying documentation, the Gatsby documentation site itself was a helpful influence on the eventual design and user interaction of our site.

The most obvious difference between Drupal and GatsbyJS, was that the coding was mainly in javascript, rather than PHP. It was great to use much more modern javascript code patterns and tools - even if it took a bit of head-scratching to get there! All credit should go to Steve Tweeddale in our Coventry office for his patience with my million questions as we worked towards building a Minimum Viable Product early on in the project timeline. As the site took shape beyond that, we drew on more and more concepts from the GatsbyJS and React ecosystems. Here's a few that were particularly interesting:

No doubt someday we'll be turning our hands to bring the best of Gatsby and Drupal together! 'Headless', or 'decoupled' Drupal has become one of the new trendy ways to use Drupal. This means using Drupal to define your app's information architecture or as a content repository. Other applications can then connect to it to display content to users. Those applications could be made in Gatsby, React, Drupal, or almost anything else. We still feel this is usually only worthwhile when Drupal is used as a back-end system for multiple front-ends. But regardless we are looking forward to using GatsbyJS again!

Jun 18 2019

Drupal is lucky to benefit from a very active community of developers meaning that there is a wide and varied range of contributed modules to extend the functionality of Drupal core. In this article we’ll take a quick look at 10 interesting contributed modules; some are well known whilst others are a little bit more obscure.

1. Admin menu (D7) / Admin toolbar (D8)

Out of the box the Drupal admin interface can be a bit unwieldy, and whilst this has been significantly improved over the years, especially with the advent of Drupal 8, there’s still room for improvement. Enter admin_menu, or admin_toolbar for Drupal 8: two similar modules that make navigating the admin interface a whole lot easier by providing a neat little toolbar with drop-downs, so you can navigate the whole admin interface from any page of your site.

2. Kraken

The Kraken web service can be used to optimise images on your website. It reduces image size by around 50%, whilst maintaining visual quality - even up close, we can't tell the difference between a Kraken'd image and the original! The Kraken module for Drupal works by exposing an image style effect through the ImageAPI Optimize module, which can be applied to image styles on your Drupal website.

3. Popup field group

This is a nice little module produced by ComputerMinds which gives the option to display the children of a field group in a popup overlay. Buttons are exposed to toggle the popup.

4. Flood unblock

Drupal 7 introduced a feature to prevent brute force attacks: no more than five failed login attempts per user in any six-hour period, and no more than 50 failed attempts from any one IP address in a one-hour period, are allowed. Failed login attempts are recorded in the flood table. The Flood Unblock module provides a user interface for administrators to easily unblock users that have exceeded these limits.
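The flood-control idea behind this module is simple enough to sketch. This is a hypothetical Python model of the concept only - Drupal's real implementation lives in its flood API and the flood database table, and these class and method names are made up:

```python
from collections import defaultdict

class Flood:
    """Toy model of flood control: record timestamped failure events per
    key, and block once a threshold is reached within a time window."""

    def __init__(self):
        self.events = defaultdict(list)

    def register(self, key, now):
        # Record one failed attempt (e.g. key = 'login:user:42').
        self.events[key].append(now)

    def is_allowed(self, key, threshold, window, now):
        # Only events inside the sliding window count against the limit.
        recent = [t for t in self.events[key] if t > now - window]
        return len(recent) < threshold

    def clear(self, key):
        # This is the kind of reset Flood Unblock exposes through its UI.
        self.events.pop(key, None)
```

With Drupal 7's defaults this would be called with threshold=5 and window=21600 (six hours) per user, and threshold=50, window=3600 per IP address.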

5. Paragraphs

Paragraphs give much greater control over content creation on your Drupal site. A paragraph is a set of fields which has its own theme associated with it to give much greater flexibility over how content gets rendered on your site. So for example you might have a paragraph which floats an image left and displays text on the right - the possibilities are endless. Take a look at tiles in action to find out more about working with paragraphs (we use the term tiles to mean the same thing!)

6. Stage file proxy

This module is useful when working on a development version of your Drupal site, as it provides a proxy to the production site’s files directory. When a production file is requested, the module maps the request to the production files directory and downloads the file to your development files directory.

7. Field display label

This is a nice little module that allows you to change a field’s label on the edit form so that it’s different from what’s rendered when the field is displayed. So for example you might have a name field labelled ‘What’s your name?’ on the edit form which just renders as ‘Name’ when it’s displayed.

8. Custom_add_another

Another simple module maintained by us at ComputerMinds which gives site admins the ability to customise the ‘add another’ text for multi valued fields.

9. Notice_killer

This is a nice little module that will split out PHP notices and warnings from other messages logged by Drupal and also logs a bit more information about each so that you can track them down and fix them more easily.

10. Rabbit_hole

This is a useful module that prevents certain entities from being viewable on their own page. So for example if you have an image content type which you never want to be accessible on node/xx then this is the module for you!

Jun 11 2019

Websites need to look pretty and be blazing fast. That often means lots of beautiful high-quality images, but they can be pretty enormous to download, making the page slow to load. Images are often one of the 'heaviest' parts of a website, dragging a visitor's experience down instead of brightening it up as intended. If a website feels even a tiny bit unresponsive, that tarnishes your message or brand. Most of us have sat waiting frustratedly for a website to work (especially on mobile), and given up to go elsewhere. Drupal can be configured to deliver appropriately-resized versions, but what's even better than that?

Lazy image loading

Don't send images to be downloaded at all until they're actually going to be seen! Browsers usually download everything for a page, even if it's out of sight 'below the fold'. We know we can do better than that on a modern website, with this technique called lazy image loading.

Lazily loading an image means only sending it for a user to download once they are scrolling it into view. Modern web browsers make this surprisingly simple to achieve for most images, although there are often a few that need special attention. When combined with image optimisation and other responsive design tricks, performance can sky-rocket again. Check out our Niquesa case study for a great example using this.

Niquesa is a luxury brand for busy people, so the website experience needs to be smooth, even when used on the go over a mobile network. Perhaps more than that, SEO (search engine optimisation) is critical. Their bespoke packages need to show up well in Google searches. Google promotes websites that perform well on mobile devices - so if your site is slow, it needs to be sped up. It's not just that you'll lose out on competitive advantage and tarnish your brand: people simply won't find you.

You can see what Google thinks of your website performance by using their PageSpeed Insights tool. That gives you an overall score and lists specific improvements you can make. Niquesa asked us to boost their score, especially for mobile devices. So we looked to speed up anything slow, and to reduce the amount of things there are to download in the first place. Any website can use that approach too. Lazy image loading speeds up the initial page load, and reduces the amount to download.

This stuff should be standard on most websites nowadays. But many web projects began well before browsers supported this kind of functionality so still need it adding in. As an ever-improving platform, the internet allows you to continually improve your site. There's no need to feel locked in to a slow site! Get in touch with us if you're interested in improving your website with lazy loaded imagery. Who wouldn't want beautiful high-quality media and great performance on any device?

Can you teach me to be lazy?

Sure! Rather than using the normal src attribute to hold the image file location, use a data-src attribute. Browsers ignore that, so nothing gets downloaded. We then use the browser's Intersection Observer API to observe when the image is being scrolled up into view. Our javascript can jump in at this point to turn that data-src attribute into a real src attribute, which means the browser will download the real image.

On its own, that wouldn't take very long to set up on most websites. But on top of this, we often go the extra mile to add some extra optimisations. These can take up the majority of the time when applying lazy loading to a website, as they are a great improvement for the user experience, but usually need crafting specifically for each individual project:

  • Images defined via style or srcset attributes (rather than a src attribute) and background images in CSS files, need similar handling. For example, use a data-style or data-srcset attribute.
  • Images that we expect to be immediately in view are excluded from any lazy loading, as it is right to show them immediately.
  • It may be important to keep a placeholder in place of the real image, perhaps either to keep a layout in place or in case javascript is not running. Styling may even need to be tweaked for those cases. Sadly it's not unusual for third-party javascript out of your control to break functionality on a page!
  • Dimensions may need some special handling, as Drupal will often output fixed widths & heights, but responsive design usually dictates that images may need to scale with browser widths. If the real image is not being shown, its aspect ratio may still need to be applied to avoid breaking some layouts.
  • Some design elements, like carousels, hide some images even when they are within the viewport. These can get their own lazy magic. One of our favourite carousel libraries, Slick, supports this with almost no extra work, but many designs or systems will need more careful bespoke attention.

Here is a basic example javascript implementation for Drupal:

(function ($) {
  // Set up an intersection observer.
  Drupal.lazy_load_observer = new window.IntersectionObserver(function (entries) {
    for (var i in entries) {
      if (entries.hasOwnProperty(i) && entries[i].isIntersecting) {
        var $element = $(entries[i].target);
        // Take the src value from data-src.
        $element.attr('src', $element.attr('data-src'));
        // Stop observing this image now that it is sorted.
        Drupal.lazy_load_observer.unobserve(entries[i].target);
      }
    }
  }, {
    // Specify a decent margin around the visible viewport.
    rootMargin: "50% 200%"
  });

  // Get that intersection observer acting on images.
  Drupal.behaviors.lazy_load = {
    attach: function (context, settings) {
      $('img[data-src]', context).once('lazy-load').each(function () {
        Drupal.lazy_load_observer.observe(this);
      });
    }
  };
})(jQuery);
(This does not include a fallback for older browsers. The rootMargin property, which defines how close an element should be to the edge of the viewport before being acted on, might want tweaking for your design.)

Drupal constructs most image HTML tags via its image template, so a hook_preprocess_image can be added to a theme to hook in and change the src attribute to be a data-src attribute. If required, a placeholder image can be used in the src attribute there too. We tend to use a single highly-cacheable transparent 1x1 pixel lightweight image, but sometimes a scaled down version of the 'real' image is more useful.

The lazy loading idea can be applied to any page element, not just images. Videos are a good candidate - and I've even seen ordinary text loaded in on some webpages as you scroll further through long articles. Enjoy being lazier AND faster!

Image: Private beach by Thomas

May 21 2019

Your current website/platform is built on Drupal 7 and news has hit your ears about 7’s end of life (EOL). Maybe your site is a Drupal 8 site and you want to know what the future has in store for you. Good news is, you don’t have to do anything immediately, but it is definitely a question that you want to start thinking about very soon.

This article is mainly aimed at Drupal 7 builds looking to upgrade to 8 or 9, and we will explore the pros and cons of each. However, if your current platform is based on Drupal 8 and you want to know what your options are with regards to Drupal 9, you can skip to here. Or, if like my fence (yes, that really is my fence and it is in desperate need of an upgrade), you require an upgrade, skip down to see what we can do for you.

Drupal 7’s EOL is November 2021, which is a little way off yet, but it’ll soon creep up on you (very much like my fast approaching wedding day… eek!)! So what does EOL mean? After End of Life in November 2021, Drupal 7 will stop receiving official support: this means no more security fixes and no more new features. Any new ‘Drupalgeddon’ security issues will be ignored, leaving Drupal 7 sites open to attack; we saw a minority of sites being used as bitcoin miners last year as a result of sites not being patched… this is just an example of what could/can happen!

There appear to be talks in the Drupal community about providing LTS (“Long Term Support”) programmes, offering continued support for Drupal 7. Community LTS tends to cover security issues mostly, with minor functionality updates. But if you want the longest term security support prospects, whilst also being on the platform best suited to growth, ongoing development and support we would advise that you don’t look to the D7 LTS programmes.

The best option, therefore, is to consider an upgrade. But which version do you go for? Should you go straight in for Drupal 8, or stay longer on your beloved Drupal 7 and switch over to 9 nearer 7’s end of life? Let’s look at the options.

Benefits of migrating to Drupal 8

Drupal 8 was a complete ground-up rewrite of Drupal 7. Drupal 7 was not built on any standard framework and made little use of object orientation, so the majority of its code was written procedurally; that can make code hard to maintain and inefficient, costing page-load performance. Building on a framework with an object-oriented philosophy encourages clearer code that is easier to maintain, read and modify - all of which results in a faster, more secure site.

Drupal 8 also runs on PHP 7 as standard; PHP 7 is *much* faster than its predecessor. In addition, D8 uses a new theming engine called Twig - with benefits that include better security (Twig automatically escapes output, for example) and a big emphasis on separating business logic from presentation, as well as being fast and flexible. Drupal 8 also includes nice features such as inline editing - handy for sites with lots of ever-changing content, saving you a few mouse clicks and multiple page loads - and it is still adding new ones, like the anticipated Layout Builder (which you can read all about here), the newly incorporated media handling and BigPipe!

Can’t I just upgrade straight to Drupal 9 when it is released?

Well you could, but we would recommend not to. Drupal 9’s release is only around the corner, much like Drupal 7’s EOL: 9 is due to be in our hands mid-to-late 2020. This overlap has been cleverly timed with Drupal 7’s EOL to allow Drupal 7 sites to upgrade to Drupal 9 with time to spare. So what is the difference between Drupal 8 and 9, and why do we recommend upgrading to Drupal 8 first?

Drupal 9 - in layman’s terms - will be a “cleaner” version of the latest Drupal 8 release; they’re essentially the same thing, minus some deprecated code that has been present since early in Drupal 8’s life. (Deprecated code is code that is due to be removed, with a newer replacement already available. The code is still safe and usable, but if you wait to upgrade, you might have a job on your hands swapping all the deprecated code for its replacements. Rule of thumb: always use the newer version if it’s available, to avoid future headaches.)

Crucially, a Drupal 7 to 9 upgrade will largely be the same as a 7 to 8 upgrade - you won’t be missing out on too much; you’ll just wait longer for the new goodness that 7 doesn’t have! Drupal 9 will also include updates to the external libraries and dependencies it relies upon, not to mention any new features that it’ll have over 8. At the time of writing, we are unsure what these might be!

The only foreseeable benefit we can see for waiting for Drupal 9 (over 8) is that you get to stay on your tried and trusted Drupal 7 site for a year or two longer whilst giving you maximum time to plan for that eventual upgrade.

How easy is it to upgrade my site?

At the start of Drupal 8’s life, one of the major goals was to get migration right, particularly since core would have a completely different architecture, so a big emphasis was placed on making migrations easier and less painful to do. As a result, migrating from 7 to 8 (or 9) is a rather nicer experience than the old 6 to 7 upgrade path. (Check out our Conservative Party case study, where we migrated an entire Drupal 6 platform to 8.)

Some of you who have been reading our articles for the last few years will notice we underwent a huge change - not only did we re-brand, but we also upgraded from 7 to 8 - all of which you can read about, including all the trials and tribulations we experienced.
Drupal 8’s built-in migration tools are a huge improvement over 7’s, and the Drupal community has created some really neat modules too - some provide a good starting point for any migration mappings you may need. However, any custom code or custom field structures will need additional plugins and migration mappings, not to mention a complete rewrite of your HTML template files for the new Twig theme engine.

As far as we are currently aware, a Drupal 8 to 9 migration will be *much* easier than a 7 to 8 migration. Those who run a Drupal 8 site see their minor version (8.5, 8.6, 8.7 and so on) updated every six months with very few hiccups, and an 8 to 9 upgrade should not be too dissimilar to those minor updates. One of the few exceptions is that any deprecated code (as explained above) will need converting, which means upgrading all contrib modules and performing a full audit of your custom code to make sure it works with the new version of Drupal. The data structures will remain intact, as will the theme engine, so there is no need to write new migration mappings or to convert any HTML templates - the things that tend to make up the bulk of the work in any Drupal migration.

What can ComputerMinds do for you?

We, at ComputerMinds, have experience of all the different types of upgrade paths that are available to you. Back in the day, we conducted full Drupal 6 to Drupal 7 migrations (no easy feat), but more recently, we have done Drupal 7 to Drupal 8 upgrades (as we mentioned before, in addition to upgrading some of our clients from 7 to 8, we also carried out our own upgrade for this very site) and even Drupal 6 to Drupal 8 upgrades. So we have a wealth of experience and skill.

Furthermore, choosing to upgrade your site is the perfect time to review your current brand, the look and feel of your site and to add any new features that may be in your pipeline. Any new/additional changes are a lot easier to incorporate during the upgrade phase than later down the line; it’s cheaper because we can do all the changes in development phase and it’ll also mean you’ll get shiny, new things quicker - so something definitely worth thinking about.

This article has been written to highlight what options are available now and in the future; we certainly won’t force anyone to upgrade their site, but will always do our best to make people aware of what is coming up in terms of Drupal’s life. With the e-commerce world always advancing, you definitely can’t stand still for long, so this topic will certainly need to be thought about. We also won’t be able to facilitate upgrading everyone in 2021, so please do plan ahead!

Drupal 8 is in the best possible place today. Most contributed modules have either been converted from 7 to 8, or replaced by brand new modules created to provide functionality from Drupal 7 that simply didn’t exist for 8. The Webform module, for example - a staple on 99% of our sites - wasn’t available for a long time due to the time it took to completely rewrite it: a worthwhile wait in our eyes, as it is a big improvement over its 7 version. We are currently recommending that any new site builds that come our way start life as Drupal 8 sites. It gets a big thumbs up from us!

If you’ve liked what you’ve read and feel like you’re ready - or in a position - to start thinking about a site upgrade, why not start a conversation with us today either by way of live chat or using our contact form. We’d love to hear from you and look forward to seeing what benefits we can bring to your site.

Mar 07 2019
Mar 07

Last night saw the popular EU Cookie Compliance module fall from grace, as the Drupal community discovered that numerous inputs in the admin form were not being sanitised.

To me, this shows some serious failings in how our community is handling security awareness. Let's do some fixing :)

1) We need to make this OBVIOUS, with clear examples

One of the most important things when trying to get people to write secure code is making them aware of the issues. We need Drupalers of all levels of experience to know and understand the risks posed by unsanitised input, where they come up and how to fix / avoid them.

I did a little internet searching, and found that there's actually a great guide to writing Drupal modules. It covers a whole bunch of things, and is compiled really nicely.

I noticed that it says how to do forms, but it manages to NOT mention security anywhere. This should be a key thought right now, no? There is a guide to handling text securely, but it just sits there and isn't really linked to.

Similarly, the page of Drupal 7 sanitize functions is easily findable, but only if you know to look for it in the first place.

Guys and girls, if we're going to help our Drupalers to write secure code we simply have to make it obvious. We shouldn't be assuming young new Drupalers will think to browse around the internet looking for sanitization functions. We shouldn't be assuming they know all the niggly bits that present security issues. We shouldn't be assuming that anyone really knows when to use tokens in URLs and when not to. We should be putting all these things right there, saying hey! don't forget to use these! here's why!. We should have articles and guides for writing forms that take the time to cover how to handle the security side of things.

In that vein, surely the Form API reference should have a reminder link? A little sidebar with links to all these guides and articles on writing secure code?

I'm going to go start some conversations and some edits - Drupal documentation is maintained by us, the Drupalers, after all.
Who else out there wants to help move things in the right direction? :)

Update: Security in Drupal 7 - Handle text in a secure fashion is looking a good bit better. Input still welcome though!

2) We need to be aware of what we're installing

81,086 sites report using the EU Cookie Compliance module. That's a whole bunch of blind installs! Nobody thought to check through the code? Nobody missed the lack of check_plain?

Well, you don't, do you? It's far too easy to assume that things are just fine. Our wonderful Open Source world, protected by our numbers, means that code is safe because it has a thousand people keeping eyes on it. Unless, of course, we're all assuming that somebody else is looking. In that case, as evident here, nobody really takes responsibility - and that's why we end up with module maintainers burning out trying to fight battles alone. In the presence of other people who we know could also do something, humans are significantly less likely to take responsibility.

I've said this before in my previous article discussing security risks to Drupal as we mature - if we took a little more of a moment to check through the modules that we install, we might catch a whole bunch of missed bugs!

I must make explicit that this call isn't just to the big bods and the experienced Drupalers. This task is for you, too, freelancers and small Drupal shops. We all have unique perspectives and unique opportunities that will allow us to see what others have missed - but if nobody is looking then nobody will see anything.

3) Contrib security reviews need help

Unless we're going to go through every module by hand, we need to think about writing some tool to do a basic sanity check on our contrib modules. How hard can it be to see if there's even one instance of a check_plain in a file?
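As a very crude illustration of how low the bar for a first pass could be, a sketch like this (my own, and grep is emphatically no substitute for a proper review) just checks whether a module directory contains even one call to a Drupal 7 sanitisation function:

```shell
# Crude first-pass check: does a module call any D7 sanitisation function at all?
# A miss doesn't prove a vulnerability - it just flags the module for human review.
has_sanitisation() {
  grep -rq -E 'check_plain|check_markup|filter_xss' "$1"
}

# Example usage (path illustrative):
#   has_sanitisation sites/all/modules/contrib/some_module ||
#     echo "WARNING: no sanitisation calls found - review by hand!"
```

A real tool would need to understand context - where user input flows, and whether output paths escape it - but even this level of automation would catch modules that never sanitise anything.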

It's admirable and encouraging to see the Drupal Security Team making huge progress on really key modules. Well done guys :) But, as far as I can guess, they're going through modules by hand, line by line. What other way is there?

If I had £50k going spare, I'd put a huge bounty out for anyone that can write an automated tool for spotting missing check_plains. Alas, I really don't have that! But I reckon there must be a decent tool for at least getting a start?

If we can solve this problem for contrib, then we can also solve it for every site's custom modules. And that will be of huge security benefit for Drupalers worldwide.

Huge publicity awaits whoever solves this problem, I'm sure.
Inventors and innovators in the Drupal world, this is your moment!

Feb 19 2019
Feb 19

Drupal empowers site builders and editors to configure their sites in settings forms. Configuration management lets developers push changes up to live sites to be imported. But developers have to be considerate to ensure imports will not wipe out those changes made directly through the live sites' settings forms. At the least, they have to export the changes before making further tweaks. But admins may make further changes in the meantime too, so developers can end up frequently pulling irrelevant changes back from live, which seems unnecessary.

Here's some examples of the kind of config that I'm thinking of:

  • The site email and Google Analytics account are usually managed by site admins, not developers. So developers should not be the ones to manage those settings.
  • Marketers may like tweaking the site name or slogan. That doesn't need to affect developers.
  • Contact forms contain labels and other text which may be key to the communication between a client and their customers.
  • Permissions - sometimes it's not clear where the lines are between editors/admins/etc, so why not allow some flexibility to reassign permissions directly on live without needing to touch the codebase?

We need an approach that allows specific settings to be considered 'unmanaged' - so an import wouldn't touch whatever values are live. The Config Ignore project claims to solve this, but we already use Config Split, which is more powerful, more flexible and has a better user interface. (Although Config Ignore does allow targeting parts of config rather than whole config items.)

Config split is often used to create environment-specific sets of configuration, but its design means it can be used for separating config for other purposes. In this scenario, what's needed is a split that represents settings to be protected, which can be exported immediately before any import. Then when importing, Drupal only sees the preserved version of the settings, so won't change them, regardless of what is in the main configuration files.

The split, which I've called 'Unmanaged', needs to be set up as follows (see screenshot):

  • Use a folder (directory) which already exists and is writable. I use ../config/unmanaged, so it matches the split name and is outside the webroot.
  • Set to active. I usually set all other splits to inactive, and only make them active in an environment's settings.php, but this split exists for the sake of workflow, not environment. For example, it can actually be useful locally, so I can tweak things for development without affecting what ends up in version control.
  • Have the largest weight of any split, so that it overrides any other exported version of config it contains.
  • Use the Conditional split section, not Complete split, to pick configuration to protect.
  • Do not tick either of the checkboxes in the conditional split section.
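For reference, the split entity itself ends up as a YAML config file once exported. A rough sketch of what ours looks like - the protected items are illustrative, and the exact key names may vary between config_split versions, so check your own export rather than copying this:

```yaml
# config_split.config_split.unmanaged.yml (illustrative)
id: unmanaged
label: Unmanaged
folder: ../config/unmanaged
status: true               # active everywhere, unlike environment splits
weight: 10                 # heavier than any other split
blacklist: {  }            # 'Complete split' - deliberately empty
graylist:                  # 'Conditional split' - the config to protect
  - system.site
  - google_analytics.settings
graylist_dependents: false # the two checkboxes, left unticked
graylist_skip_equal: false
```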

Once the split has been created, the container needs rebuilding for it to work. Run this, which includes exporting it for the first time:

drush cache-rebuild
drush -y config-split-export unmanaged

Now that it is exported, a .htaccess file will have been added to the directory alongside the config. Add the following line to your project's .gitignore file, adjusting the directory location as appropriate. This ensures the directory will get created on live when changes are pulled from git (containing .htaccess), but deliberately without the exported config:
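The exact rule depends on where your repository root sits relative to the config directory, but the idea is to ignore the exported YAML while leaving the directory's .htaccess tracked. Something along these lines (path illustrative, adjust to your layout):

```
# Ignore the exported unmanaged config, but keep the directory via .htaccess
config/unmanaged/*.yml
```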


So now before running any imports, make sure to export the split:

drush -y config-split-export unmanaged
drush -y config-import

With this split and the export step in place in your workflow, you can be confident of allowing your developers and site admins to get on with their respective work, without getting in each other's way. This puts configuration splits to use for something beyond environment-specific overrides, which I think is exciting. I wonder what other useful purposes they may have?

Photo by Sascha Hormel from Pexels

Feb 05 2019
Feb 05

ABJS is a contrib Drupal module, and, without any requirements or ties to paid services, is as low cost as you can get. As we’ll see, it’s pretty basic but it really lets you get down to building your own understanding of how A/B testing works. The beauty of ABJS is in its simplicity. The settings pages are fairly self-explanatory, which is really helpful. Let’s set up a basic A/B test to show how things work.

Setting up our first experience

In our test, we’re going to split the site 50:50 in order to test an alternate homepage design. Go to /admin/config/user-interface/abjs and get a feel for things. See the tabs across the top? The best way to set up a new test is to work backwards. That’s because your Tests will need to reference your Conditions and Experiences - and you’ll need to create them before you can use them.

The ABJS admin interface tabs.

First up, create an Experience. Experiences make the actual A’s and B’s of your A/B tests. Go to the Experiences tab. Give your experience a very clear and helpful name. Our first one will be our normal homepage experience. Making a ‘normal’ experience allows us to explicitly log our page views for our analytics.

Our example ABJS normal Experience page

The Javascript we set up for our site looks like this:

if (typeof(ga) !== "undefined") {
  ga('set', 'dimension1', 'normal');
}
window.Drupal = window.Drupal || { 'settings': {}, 'behaviors': {}, 'locale': {} };
window.Drupal.behaviors.abtesting = {
  attach: function (context, settings) {
    jQuery('#request-a-quote-form').on('submit', function () {
      ga('send', 'event', 'Productfinder', 'ID search', 'Homepage');
    });
  }
};

1. We came to the Drupal Behavior format after realising that the ABJS scripts in the header would run before the Google Analytics scripts, and we would need the GA script to run before we could log our analytics. You could probably do this another way if you wanted, but this is easy enough to copy and paste for now.

2. Set our custom GA dimension.

3. This ends up happening before Drupal is actually ready to accept and execute behaviors, hence the very careful creation/copy of the window.Drupal variable.

4. Add a submit handler to our form, which will send an event to GA. This is the thing we’re trying to measure. Hopefully our alternate version of the page will result in more clicks on this button, and we’ll be able to track those in GA.

If you copy & paste the above, you'll want to make your tweaks:

1. Change the first call to ga() in line 2 to be a dimension you’ve set up in Google Analytics (Go do that now! Their support articles are really good, so I won’t explain that here).

2. Set the value for that call to be the value you want (‘normal’ may well be fine).

3. Change the event values in the final call to ga() to send the event values you want or need. They don’t need to be fancy, just unique - you just need to be able to track them in GA.

Now, go create an “Alternate homepage experience” experience.

Set up your Alternate Experience

This is the Experience for the change / difference you're wanting to test.

Copy the JS from your ‘Normal’ experience, and tweak it to:

1. Have a different value for your GA dimension

2. Make an actual change to your page. Put this after the jQuery/ga call.

Now go create your condition(s).

All your experience Javascript will be added to every page, so this is where you make sure that you’re only running things where and when you really want to. Again, you write up some Javascript; it will return a Boolean value indicating to ABJS whether you want your test to be run. In our example, we’re just testing the homepage. So we’ll just do this:

return (window.location.pathname == "/");

Don't forget to set a helpful and clear name for your Condition, so it's easy to select later.

ABJS homepage condition page.

Test time!

At last, you can now go set up your Test.

1. Give your test a name. Make it helpful, so that you know which test does what when you have multiple tests later :) Perhaps name it after the change you’re making in your alternate test.

2. Select your two experiences in the two select boxes. Don’t let the fraction field confuse you - this is just the proportion of people you want to be diverted to each of your two experiences. This can be super helpful if you want, for example, to test something on a small proportion of users. For us, doing our small, low-cost A/B test on our small client’s site, we want to maximise our data, so we’re doing 50:50 - that means fractions of 0.5 and 0.5. You can have multiple experiences here, which is pretty neat. So if you want to test multiple variations of your homepage, you can! Go for it!

3. Double check your Javascript :) Go execute it in the browser console or something, to make sure it works. No point taking your site down accidentally!

4. Set your test to 'Active'! This will add your tests to the site, so you can start collecting data! Now is the time to go to Google Analytics and watch the data pour in (for some definition of pour, depending on how busy your site is right now!).

ABJS Test edit page

Analysing your data

A key thing to remember when watching your analytics is that things change all the time, and sometimes randomness can be responsible for what you’re seeing. Or maybe there was a seasonal spike in sales which increased revenue that week, or maybe you’re not filtering your users to the right segment… A few recommendations for you:

  • Create segments for your two dimension values, so you can easily filter your data.
  • Data can be misleading. Always check other angles before declaring you’ve fixed the problem.
  • Run your numbers to check the statistical significance. If you don’t have tens of thousands of samples, your results may just be random chatter rather than necessarily related to the changes your Experience made. Either remember your A-Level statistics, or go use one of the many online significance calculators that come with a good explainer.
  • If your site is not high traffic, you may need to run your tests for weeks or even months to get enough data to clearly show whether there's a difference. Or, it may be worth deciding that there's not clearly a difference, so it's worth testing something else.

An example Google Analytics graph
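If you fancy running the numbers yourself, the classic tool for comparing two conversion rates is a two-proportion z-test. This is my own rough sketch (not part of ABJS or Google Analytics), in the same flavour of Javascript as the experience code above:

```javascript
// Two-proportion z-test: compare conversion rates of experiences A and B.
// convA/convB are conversion counts, nA/nB are total sample sizes.
function zTest(convA, nA, convB, nB) {
  var pA = convA / nA;
  var pB = convB / nB;
  var p = (convA + convB) / (nA + nB);                  // pooled proportion
  var se = Math.sqrt(p * (1 - p) * (1 / nA + 1 / nB));  // pooled standard error
  return (pB - pA) / se;
}
```

As a rough guide, an absolute z value above about 1.96 corresponds to 95% confidence that the difference between your experiences isn't just noise.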

So, we’ve created some tests with ABJS. How did it go?

Overall, ABJS is nice because it feels like you’re in control. And all developers like to feel in control. It’s also nice because if you want to go set up a test, you can! It’s easy!

Where ABJS loses out is in the pile of lovely features that the big products out there can offer. Creating, managing, scheduling and analysing are all tasks that have been made a lot easier by some off-the-shelf products. But if you can’t afford that budget, or really rather enjoy thinking things through (or indeed love a bit of statistics!) then this is your lot - and it works well enough.

Later on in the series we'll be playing with Google Analytics' A/B testing suite and seeing how it compares. Stay tuned!

Jan 22 2019
Jan 22

We've heard of test-driven development, behaviour-driven development, feature-driven development and someone has probably invented buzzword-driven development by now. Here's my own new buzzword phrase: review-driven development. At ComputerMinds, we aim to put our work through peer reviews to ensure quality and to share knowledge around the team. Chris has recently written about why and how we review our work. We took some time on our last team 'CMDay' to discuss how we could make doing peer reviews better. We found ourselves answering this question:

Why is reviewing hard? How can we make it easier?

We had recently run into a couple of specific scenarios that had triggered our discussion. For one, pressure to complete the work had meant reviews were rushed or incomplete. The other scenario had involved such a large set of code changes that reviewing them all at the end was almost impossible. I'm glad of the opportunity to reflect on our practice. Here are some of the points we have come away with - please add your own thoughts in the comments section below.

1. Coders, help your reviewers

The person that does the development work is the ideal person to make a review easy. The description field of a pull request can be used to write a summary of changes, and to show where the reviewer should start. They can provide links back to the ticket(s) in the project's issue tracking system (e.g. Redmine/Jira), and maybe copy across any relevant acceptance criteria. The coder can chase a colleague to get a review, and then chase them up to continue discussions, as it is inevitable that reviewers will have questions.

2. Small reviews are easier

Complicated changes may be just as daunting to review as to build, so break them up into smaller chunks that can be reviewed more easily. This has the massive benefit of forcing a developer to really understand what they're doing. A divide & conquer approach can make for a better implementation that is often easier to maintain too, so the benefits aren't only felt by reviewers.

3. Review early & often

Some changes can get pretty big over time. They may not be easy to break up into separate chunks, but work on them could well be broken up into iterations, building on top of each other. Early iterations may be full of holes or @TODO comments, but they still reveal much about the developer's intentions & understanding. So the review process can start as early as the planning stage, even when there's no code to actually review. Then as the changes to code take shape, the developer can continually return to the same person every so often. They will have contextual knowledge growing as the changes grow, to understand what's going on, helping them provide a better review.

4. Anyone can review

Inevitably some colleagues are more experienced than others - but we believe reviews are best shared around. Whether talking about your own code, or understanding someone else's code, experience is spread across the team. Fresh eyes are sometimes all that's needed to spot issues. Other times, it's merely the act of putting your own work up for scrutiny that forces you to get things right.

5. Reviewers, be proactive!

Developers like to be working, not waiting for feedback. Once they've got someone to agree to review their work, they have probably moved onto solving their next problem. However well they may have written up their work, it's best for the reviewer to chase the developer and talk through the work, ideally face-to-face. Even if the reviewer then goes away to test the changes, or there's another delay, it's best for the reviewer to be as proactive as possible. Clarify as much as needed. Chase down the answers. Ask seemingly dumb questions. Especially if you trust the developer - that probably means you can learn something from them too!

6. Use the tools well

Some code changes can be ignored or skipped through easily. Things like the boilerplate code around features exports in Drupal 7, or changes to composer.lock files. Pointers from the developer to the reviewer of what files/changes are important are really helpful. Reviewers themselves can also get wise as to what things to focus on. Tools can help with this - hiding whitespace changes in diffs, the files tab of PRs on github, or three-way merge tools, for example. Screenshots or videos are essential for communicating between developer & reviewer about visual elements when they can't meet face-to-face.

7. What can we learn from drupal.org?

The patch-based workflow that we are forced to use on drupal.org doesn't get a lot of good press. (I'm super excited for the gitlab integration that will change this!) But it has stood the test of time. There are lessons we can draw from our time spent in its issue queues and contributing patches to core and contrib projects. For example, patches often go through two types of review, which I'd call 'focussed nitpicks' and the wider 'approach critiques'. It can be too tempting to write code to only fulfil precise acceptance criteria, or to pass tests - but reviewers are humans, each with their own perspectives to anticipate. Aiming for helpful reviews can be even more useful for all involved in the long run than merely aiming to resolve a ticket.

8. Enforcing reviews

We tailor our workflow for each client and project - different amounts of testing, project management and process are appropriate for each one. So 'review-driven development' isn't a strict policy to be enforced, but a way of thinking about our work. When it is helpful, we use Github's functionality to protect branches and require reviews or merges via pull requests. This helps us to transparently deliver quality code. We also find this workflow particularly handy because we can broadcast notifications in Slack of new pull requests or merges that will trigger automatic deployments.

What holds you back from doing reviews? What makes a review easier?

I've only touched on some of the things we've discussed, and there's bound to be even more that we haven't thought of. Let us know what you do to improve peer reviewing in the comments!

Jan 02 2019
Jan 02

Anyone familiar with the Drupal core development lifecycle will know that presently the Drupal community supports two major versions at any one time: the current major release and its immediate predecessor. This means that at ComputerMinds we are currently helping our clients support and develop both Drupal 7 and Drupal 8 sites. So the obvious question that we get asked is ‘when is it time to upgrade’?

We can’t properly answer this question without bringing the next major release, Drupal 9, into the mix. So let’s look at the development timeline for these three versions. According to a blog post by Dries, both Drupal 7 and 8 will reach end of life no later than November 2021, with Drupal 9 being released roughly a year earlier, in June 2020, to give site owners enough time to move over to Drupal 9. It is worth noting that from November 2021 only Drupal 9 will be supported. Dries outlines these dates, with a whole bunch of detail, in his blog post.

Historically, migrating between major versions has been a considerable chunk of work, as major versions aren’t backwards compatible; however, the good news is that migrating from Drupal 8 to Drupal 9 should be a very straightforward process - so long as you’ve kept your Drupal 8 site up-to-date! This is good news for anyone who has already taken the plunge into the world of Drupal 8, as the migration process shouldn’t really be any more involved than a minor upgrade. This is because the only real changes will be to remove deprecated code and update dependencies, such as Symfony (Symfony 3 reaches end of life in November 2021, hence that date being the support cut-off for Drupal 8).

For site owners still using Drupal 7 the question of when to upgrade is slightly more complicated. Do you wait for Drupal 9 and skip Drupal 8, or should you upgrade now? As previously mentioned we can be reasonably confident that upgrading from Drupal 8 to Drupal 9 will be a straightforward process, so we don’t need to worry about having to redo lots of work a couple of years down the line if we do migrate to Drupal 8 now. So the question of when to migrate really varies depending on your current circumstance and preference.

Some site owners will want to benefit from new functionality added in Drupal 8 so will want to upgrade their Drupal 7 sites as soon as possible, whilst obviously factoring in how difficult and expensive the migration will be. Others will be perfectly happy sticking with Drupal 7 until support has ended, at which point they will have to port over in order to keep their site secure. Another piece of good news for anyone weighing up their options with Drupal 7 is that support for Drupal 7 will also be extended to November 2021 (previously support would have ended for Drupal 7 as soon as Drupal 9 was released) so this gives you another year to implement your migration to Drupal 9.

So the short answer of when to migrate your Drupal 7 site is really whenever is good for you. There’s no immediate rush and if you do opt to migrate to Drupal 8, as long as you keep your site up-to-date, upgrading to Drupal 9 when the time comes should be a cinch!

Jan 02 2019
Jan 02

At ComputerMinds we like to think that we’re all pretty good at what we do; however, nobody is perfect and this is why we always ensure that our code is properly peer reviewed as part of our quality assurance process.

Peer review is literally just what the name implies; we work together to review each other’s code to make sure that it all makes sense. This approach means that we’re able to spot obvious mistakes before they become a problem. It also has the huge advantage of allowing us to transfer knowledge between our team on a day-to-day basis.

Pull Requests

The primary way we peer review our code is to make use of GitHub’s pull requests (PR) feature. This means that whenever we need to do some work on a Git repo we start by creating a new feature branch which will contain the chunk of work that we’re doing. Then once we are happy with the code we’ve written in this branch we’ll go over to GitHub and create a PR to merge our branch in with another branch which we know is stable, for example the master branch. Before this merge happens, GitHub’s PR tool will show all the changes between the two branches so that they can be reviewed by another developer.

At ComputerMinds we use pull requests a lot. We don’t like to work directly on a stable branch, as this way there is much more chance that bugs might slip through the net. By using pull requests we can be sure that our code is properly sanity checked before it makes its way over to a stable environment, be that a client-facing testing branch or the live branch. GitHub also makes it easy to add comments directly to the pull request, so any issues are fully documented and feedback is clearly displayed.

Face to face

When dealing with a more in-depth code change, it's particularly helpful to talk face-to-face, as it allows the original developer to talk you through their changes and the thinking behind them. This allows the reviewer to have a much better understanding of what the original developer was aiming to achieve and to sanity-check their thinking. A 'meatspace' chat can be more difficult to achieve than just getting some comments on a pull request, but it's often worth the effort.

Finding the right fit

Both of these methods have their strengths and weaknesses. Pull requests are quick and easy to use; however, when dealing with larger sets of changes things may get overlooked, or may not be properly understood without knowledge of the bigger picture. Face to face reviews obviously take up more resources, but do allow for a more in-depth review where the bigger picture can be clearly explained by the original developer.

Obviously it goes without saying that these two approaches to peer review aren’t mutually exclusive - there are plenty of meatspace chats going on around the office about various PRs.

At ComputerMinds we're still working on how we do code review. There's always room for growth and for change, and we're actively promoting discussion amongst our team to see how we can do better.

How do you do quality assurance and review on your code? Share your thoughts and tips with us below!

Dec 18 2018
Dec 18

There's nothing like Drupal's stock AJAX spinner to make you notice that a site's design hasn't been fully customised. The code from my previous article showing how to fetch a link over AJAX to open in a Foundation reveal popup would suffer from this without some further customisation. After clicking the 'Enquire' button, a loading icon of some kind is needed whilst the linked content is fetched. By default, Drupal just sticks that blue 'throbber' next to the link, but that looks totally out of place. Our client's site uses a loading graphic that feels much more appropriate in style and placement, but my point is that you can set up your own bespoke version. Since it's Christmas, let's add some festive fun! Here's a quick video showing what I'll take you through making:

[embedded content]

A few things are needed:

  1. Create a javascript method that will add a custom progress indicator
  2. Ensure the javascript file containing the method is included on the page
  3. Set a custom attribute on the link that will trigger the AJAX
  4. Override Drupal core's javascript method that adds the standard progress throbber, to respect that custom attribute

There are many ways to achieve points 1 and 2. Usually, you would define a library and add it with #attached. But I decided I wanted to treat my work as if it were part of Drupal's core AJAX library itself, rather than something to add separately. So I implemented hook_library_info_alter() in my theme's main .theme file:

/**
 * Implements hook_library_info_alter().
 */
function MYTHEME_library_info_alter(&$libraries, $extension) {
  // Add our own extension to drupal.ajax, which is aware of the page markup so
  // can add AJAX progress loaders in the page.
  if ($extension == 'core' && isset($libraries['drupal.ajax'])) {
    $libraries['drupal.ajax']['js']['/' . drupal_get_path('theme', 'MYTHEME') . '/js/ajax-overrides.js'] = [];
  }
}

My ajax-overrides.js file contains this:

(function ($, window, Drupal, drupalSettings) {

  /**
   * Creates a new Snowman progress indicator, which really is full screen.
   */
  Drupal.Ajax.prototype.setProgressIndicatorSnowman = function () {
    this.progress.element = $('<div class="ajax-progress ajax-progress-fullscreen ajax-progress-snowman">&nbsp;</div>');
    // My theme has a wrapping element that will match #main.
    $('#main').append(this.progress.element);
  };

})(jQuery, window, Drupal, drupalSettings);

My theme happens to then style .ajax-progress-snowman appropriately, to show a lovely snowman in the middle of the page, rather than a tiny blue spinner next to the link that triggered the AJAX. Given that the styling of the default spinner happens to make links & lines jump around, I've got the ajax-progress-fullscreen class in there, to be more like the 'full screen' graphic that the Views UI uses, and avoid the need to add too much more styling myself.

Part 3, adding a custom attribute to specify that our AJAX link should use a Snowman animation, is easily achieved. I've already added the 'data-dialog-type' attribute to my link, so now I just add a 'data-progress-type' attribute, with a value of 'snowman'. I want this to work similarly to the $element['#ajax']['progress']['type'] property that can be set on form elements that use AJAX. Since that only gets applied to form elements, not arbitrary links using the 'use-ajax' class, we have to do the work to pick this up ourselves.
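A quick note on naming here: the HTML attribute is 'data-progress-type', but jQuery's .data() looks it up by the camelCased key 'progressType'. That's the standard data-attribute-to-key conversion, which can be sketched on its own like this (the datasetKey function name is just for illustration, not part of any API):

```javascript
// Sketch: how a data-* attribute name maps to the camelCased key used by
// element.dataset and jQuery's .data(). E.g. 'data-progress-type' becomes
// 'progressType'.
function datasetKey(attrName) {
  return attrName
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, function (match, letter) {
      return letter.toUpperCase();
    });
}

console.log(datasetKey('data-progress-type')); // 'progressType'
console.log(datasetKey('data-dialog-type'));   // 'dialogType'
```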

So this is the last part. Back in my ajax-overrides.js file, I've added this snippet to override the standard 'throbber' progress type that AJAX links would otherwise always use. It falls back to Drupal's original method when the progress type isn't specified in a 'data-progress-type' attribute.

  // Override the progress throbber, to actually use a different progress style
  // if the element had something specified.
  var originalThrobber = Drupal.Ajax.prototype.setProgressIndicatorThrobber;
  Drupal.Ajax.prototype.setProgressIndicatorThrobber = function () {
    var $target = $(this.element);
    var progress = $'progressType') || 'throbber';
    if (progress === 'throbber') {
      // Fall back to Drupal's original method.;
    }
    else {
      var progressIndicatorMethod = 'setProgressIndicator' + progress.slice(0, 1).toUpperCase() + progress.slice(1).toLowerCase();
      if (progressIndicatorMethod in this && typeof this[progressIndicatorMethod] === 'function') {
        this[progressIndicatorMethod].call(this);
      }
    }
  };
So there you have it - not only can you launch beautiful Foundation Reveal popups from links that fetch content via AJAX, you can now avoid Drupal's little blue throbber animation. If it's an excuse to spread some cheer at Christmas, I'll take it.

Happy Christmas everyone!

Spinning snowman animation

Dec 12 2018
Dec 12

After reading this from Ars Technica, which describes how a developer offered to 'help' the maintainer of an NPM module - and then slowly introduced malicious code to it - I can't help but wonder if the Drupal community is vulnerable to the exact same issue. Let's discuss!

Please, don't touch my package

NPM modules have been hacked at before, and it's not pretty when it happens. Because of the way we use packages, it's a lot easier for nasty code to get sucked in to a LOT of applications before anyone notices. Attacks on the code 'supply chain', therefore, have tended to be high-profile and high-damage.

NPM is used as a source for a huge number of code projects, many of which use other bits of code from other NPM packages. Even moderate applications for PC, mobile or web can have hundreds or thousands of NPM packages pulled in. It's common for packages to depend on other packages, which depend on other packages, which need other packages, which require... you get the picture? There are so many fragments, layers and extra bits that NPM is used for, that the developers of the applications don't necessarily know all the packages that are being pulled in to their application. It's so easy to just type "npm require somefancypackageineed" without thinking and without vetting. NPM will just go and get everything for you, and you don't need to care.

That's how it should be, right? We should be able to just add code and know that it's safe, right? In a perfect world, that would be fine. But in reality there's an increasingly large amount of trust being given when you add a package to your application, and developers don't realise it. It's events like this that are making people aware again that they are including code in their projects that they either do not scrutinise or do not know exists.

Drupal's moment will come

Fortunately, Drupal is a little different to NPM. Whilst modules are often dependent on other modules, we tend to have a lot less layers going on. It's much easier to know what modules and dependencies you're adding in when you include a new module. But that doesn't mean we're immune.

This particular incident came about when a tired, busy module maintainer was approached and offered help. It's a classic social engineering hack.

"Sure, I'll help you! [mwahaha]"

What struck me was that Drupal probably has hundreds of module maintainers in similar circumstances. Put yourself in those shoes, for a moment:
- You maintain an old Drupal 7 module
- It has a few thousand sites using it still
- You're busy, don't have time for it anymore

If somebody offered to sort it all out for you, what would you say? I'm pretty sure most would be ecstatic! Hurrah! But how would you vet your new favourite person in the whole world, before making them a co-maintainer and giving them the keys to the kingdom?

Alternatively, what of this:
- There is an old module, officially unmaintained
- It still has users
- The maintainer cannot be contacted

Drupal has a system for allowing people to be made maintainers of modules, when the original maintainer cannot be contacted. How are these people vetted? I'm sure there's some sort of check, but what if it's not enough?

In particular, I want to point out that as Drupal 7 ages, there will be more and more old, unmaintained and unloved modules still used by thousands of sites. If we forget them and fail to offer them sufficient protection, they will become vulnerable to attacks just like this. Drupal's moment will come.

This is an open source issue

It would be rather easy to run away screaming right now, having decided that open source technologies sound too dangerous. So I'll put in some positive notes!

That Drupal should be increasingly exposed to the possibility of social engineering and malevolent maintainers is no new issue. There are millions of open source projects out there, all exposed to exactly these issues. As the internet grows and matures and ages, these issues will become more and more common; how many projects out there have tired and busy maintainers?!

For now, though, it must be said that the open source communities of the world have done what few thought possible. We have millions of projects and developers around the world successfully holding onto their trusty foundations, Drupal included. Many governments, enterprises and organisations have embraced the open source way of working on the premise that although there is risk in working differently, there is great merit in the reward. To this day, open source projects continue to thrive and to challenge the closed-source world. It is the scrutiny and the care of the open source community that keeps it clear and safe. As long as we continue to support and love and use our open source communities and contributions, they will stay in good repair and good stead.

If you were thinking of building a Drupal site and are suddenly now questioning that decision, then a read of Drupal's security statement is probably worthwhile.

Know your cattle by name

The key mitigation for this risk, it should be said, is for developers to know what code is in their application. It's our job to care and so it's our job to be paranoid. But it's not always easy. How many times have you installed a module without checking every line of code? How many times have you updated a module without checking the diff in Git? It's not always practicable to scan thousands and thousands of lines of code, just in case - and you'd hope that it's not necessary - but that doesn't mean it's not a good idea.

Using Composer with Drupal 8 makes installing new modules as easy as using NPM, and exposes the same problems to some extent. Add in a build pipeline, and it's very easy to never even see a single line of the new code that you've added to your project. Am I poking a paranoia nerve, yet? ;)

For further fun, think back to other attacks in the last year where sources for external JS dependencies were poisoned, resulting in compromised sites that didn't have a single shred of compromised code committed - it was all in the browser. How's THAT for scary!

In short, you are at risk if:
- You install a module without checking every line of code
- You update a module without checking every line of code / the diff
- You use a DEV release of a module
- You use composer
- Your application pulls in external dependencies

These actions, these ways of working all create dark corners in which evil code can lie undetected.

The light shall save you

Fortunately, it can easily be argued that Drupal Core is pretty safe from these sorts of issues. Phew. Thanks to the wide community of people contributing and keeping keen eyes on watch, Core code can be considered as well-protected. Under constant scrutiny, there's little that can go wrong. The light keeps the dark corners away.

Contrib land, however, is a little different. The most popular modules not only have maintainers (well done, guys!), but many supporting developers and regular release cycles and even official 'Security Coverage' status. We have brought light and trust to the contrib world, and that's a really important thing.

But what does 'Security Coverage' really provide? Can it fail? What happens if there is a malicious maintainer? I wonder.

When the light goes out

Many modules are starting to see the sun set. As dust gathers on old Drupal 7 modules and abandoned D8 alpha modules, the dark corners will start to appear. 'Security Coverage' status will eventually be dropped, or simply forgotten about, and issue lists will pile up. Away from the safety of strong community, keen eyes and dedicated maintainers, what used to be the pride of the Drupal community will one day become a relic. We must take care to keep pride in our heritage, and not allow it to become a source of danger.

Should a Drupal module maintainer be caught out by a trickster and have their work hacked, what would actually happen? Well, for most old D7 modules we'd probably see a few thousand sites pull in the code without looking, and it would likely take some time for the vulnerability to be noticed, let alone fixed.

Fortunately, most developers need a good reason to upgrade modules, so they won't just pull in a new malicious release straight away. But there's always a way, right? What if the hacker nicely bundled all those issues in the queue into a nice release? Or simply committed some new work to the DEV branch to see who would pull it in? There are loads of old modules still running on dev without an official release. How many of us have used them without pinning to a specific commit?

Vigilance is my middle name!

I have tried to ask a lot of questions, rather than simply doom-mongering. There's not an obvious resolution to all of these questions, and that's OK. Many may argue that, since Drupal has never had an issue like this before, we must already have sufficient measures in place to prevent such a thing happening - and I disagree. As the toolkit used by the world's hackers gets ever larger and ever more complex, we cannot afford to be lax in our perspective on security. We must be vigilant!

Module maintainers, remain vigilant. Ask good questions of new co-maintainers. Check their history. See what they've contributed. Find out who they really are.

Developers, remain vigilant. Know your cattle. Be familiar with what goes in and out of your code. Know where it comes from. Know who wrote it.

Drupalers, ask questions. How can we help module maintainers make good decisions? How can we support good developers and keep out the bad?

Some security tips!
- Always know what code you're adding to your project and whether you choose to trust it
- Drupal projects not covered by the Security Team should be carefully reviewed before use
- Know what changes are being made when performing module updates and upgrades
- If using a DEV version of a module in combination with a build process, always pin to a specific git commit (rather than HEAD), so that you don't pull in new code unknowingly
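To illustrate that last tip for Composer-based builds: a dev release can be pinned to an exact commit by appending the hash to the version constraint. This is a sketch only - the module name and commit hash below are made up:

```json
{
    "require": {
        "drupal/some_module": "1.x-dev#0123456789abcdef0123456789abcdef01234567"
    }
}
```

With a pin like this in place, a build will keep fetching the code you reviewed, rather than whatever the dev branch HEAD happens to be.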

Dec 04 2018
Dec 04

Let me take you on a journey. We'll pass by Drupal content renderer services, AJAX commands, javascript libraries and a popular front-end framework. If you've only heard of one or two of those things, come lean on the experience I took diving deep into Drupal. I'm pleased with where my adventure took me to, and maybe what I learned will be useful to you too.

Here's the end result: a contact form, launched from a button link in the site header, with the page beneath obscured by an overlay. The form allows site visitors to get in touch from any page, without leaving what they were looking at.

Contact form showing in a Foundation reveal popup, launched from a button in the header. An overlay obscures the page behind.

Drupal has its own API for making links launch dialogs (leveraging jQuery UI Dialogs). But our front-end of the site was built with Foundation, the super-popular theming framework, which provides components of its own that are much better for styling. We often base our bespoke themes on Foundation, and manipulate Drupal to fit.

We had already done some styling of Foundation's Reveal component. In those places, the markup to show in the popup is already in the page, but I didn't really want the form to be in the page until it was needed. Instead, AJAX could fetch it in. So I wondered if I could combine Drupal's AJAX APIs with Foundation's Reveal markup and styling. Come with me down the rabbit hole...

There are quite a few components in making this possible. Here's a diagram:

Components involved in opening a Reveal popup containing the markup from a link's target.

So it comes down to the following parts, which we'll explore together. Wherever custom code is needed, I've posted it in full later in this article.

  • A link that uses AJAX, with a dialog type set in an attribute.
  • Drupal builds the content of the page that was linked to.
  • Drupal's content view subscriber picks up that response and looks for a content renderer service that matches the dialog type.
  • The content renderer returns an AJAX command PHP class in its response, and attaches a javascript library that will contain a javascript AJAX command (a method).
  • That command returns the content to show in the popup, and that javascript method name.
  • The javascript method launches the popup containing the HTML content.

Let's start at the beginning: the link. Drupal's AJAX API for links is pretty neat. We trigger it with two things:

  1. A use-ajax class, which tells it to open that link via an AJAX call, returning just the main page content (e.g. without headers & footers), to be presented in your existing page.
  2. A data-dialog-type attribute, to instruct how that content should be presented. This can be used for the jQuery UI dialogs (written up elsewhere) or the newer off-canvas sidebar, for example.

I wanted to have a go at creating my own 'dialog type', which would be a Foundation Reveal popup. The HTML fetched by the AJAX call would be shown in it. Let's start with the basic markup I wanted my link to have:

<a href="" class="use-ajax" data-dialog-type="reveal">Enquire</a>

This could either just be part of content, or I could get this into a template using a preprocess function that would build the link. Something like this:

// $url could have come from $node->toUrl(), Url::fromRoute() or similar.
// For this example, it's come from a contact form entity.
$url->setOption('attributes', [
  'class' => [
    'use-ajax',
  ],
  // This attribute tells it to use our kind of dialog
  'data-dialog-type' => 'reveal',
]);
// The variable 'popup_launcher' is to be used in the template.
$variables['popup_launcher'] = \Drupal\Core\Link::fromTextAndUrl(t('Enquire'), $url);

After much reading around and breakpoint debugging to figure it out, I discovered that dialog types are matched up to content rendering services. So I needed to define a new one of those, which I could base closely on Drupal's own DialogRenderer. Here's the definition from my module's file:

services:
  main_content_renderer.foundation_reveal:
    class: Drupal\mymodule\Render\MainContent\FoundationReveal
    arguments: ['@title_resolver']
    tags:
      - { name: render.main_content_renderer, format: drupal_reveal }

Adding the tag named 'render.main_content_renderer' means my class will be picked up by core's MainContentRenderersPass when building the container. Drupal's MainContentViewSubscriber will then consider it as a service that can render responses.

The 'format' part of the tag needs to be the value that our data-dialog-type attribute has, with (somewhat arbitrarily?) 'drupal_' prepended. The arguments will just be whatever the constructor for the class needs. I often write my class first and then go back to adjust the service definition once I know what it needs. But I'll be a good tour guide and show you things in order, rather than shuttling you backwards and forwards!

Onto that FoundationReveal service class now. I started out with a copy of core's own ModalRenderer which is a simple extension to the DialogRenderer class. Ultimately, that renderer is geared around returning an AJAX command (see the AJAX API documentation), which comes down to specifying a command to invoke in the client-side javascript with some parameters.

I would need my own command, and my FoundationReveal renderer would need to specify it to be used. Only two functional differences were needed in comparison to core's DialogRenderer:

  1. Attach a custom library, which would contain the actual javascript command to be invoked:
$main_content['#attached']['library'][] = 'mymodule/dialog.ajax';
  2. Return an AJAX command class, that will specify that javascript command (rather than the OpenDialogCommand command that DialogRenderer uses) - i.e. adding this to the returned $response:
new OpenFoundationRevealCommand('#mymodule-reveal')

We'll learn about that command class later!

So the renderer file, mymodule/src/Render/MainContent/FoundationReveal.php (in that location in order to match the namespace in the service file definition), looks like this - look out for those two tweaks:


<?php

namespace Drupal\mymodule\Render\MainContent;

use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Render\MainContent\DialogRenderer;
use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\mymodule\Ajax\OpenFoundationRevealCommand;
use Symfony\Component\HttpFoundation\Request;

/**
 * Default main content renderer for foundation reveal requests.
 */
class FoundationReveal extends DialogRenderer {

  /**
   * {@inheritdoc}
   */
  public function renderResponse(array $main_content, Request $request, RouteMatchInterface $route_match) {
    $response = new AjaxResponse();

    // First render the main content, because it might provide a title.
    $content = drupal_render_root($main_content);

    // Attach the library necessary for using the OpenFoundationRevealCommand
    // and set the attachments for this Ajax response.
    $main_content['#attached']['library'][] = 'core/drupal.dialog.ajax';
    $main_content['#attached']['library'][] = 'mymodule/dialog.ajax';
    $response->setAttachments($main_content['#attached']);

    // Determine the title: use the title provided by the main content if any,
    // otherwise get it from the routing information.
    $title = isset($main_content['#title']) ? $main_content['#title'] : $this->titleResolver->getTitle($request, $route_match->getRouteObject());

    // Determine the dialog options and the target for the OpenDialogCommand.
    $options = $request->request->get('dialogOptions', []);

    $response->addCommand(new OpenFoundationRevealCommand('#mymodule-reveal', $title, $content, $options));
    return $response;
  }

}

That AJAX command class, OpenFoundationRevealCommand, sits in mymodule/src/Ajax/OpenFoundationRevealCommand.php. Its render() method is the key: it returns the command name, which will map to a javascript function, and the actual HTML under 'data'. Here's the code:


<?php

namespace Drupal\mymodule\Ajax;

use Drupal\Core\Ajax\OpenDialogCommand;
use Drupal\Core\StringTranslation\StringTranslationTrait;

/**
 * Defines an AJAX command to open certain content in a foundation reveal popup.
 *
 * @ingroup ajax
 */
class OpenFoundationRevealCommand extends OpenDialogCommand {
  use StringTranslationTrait;

  /**
   * Implements \Drupal\Core\Ajax\CommandInterface:render().
   */
  public function render() {
    return [
      'command' => 'openFoundationReveal',
      'selector' => $this->selector,
      'settings' => $this->settings,
      'data' => $this->getRenderedContent(),
      'dialogOptions' => $this->dialogOptions,
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function getRenderedContent() {
    if (empty($this->dialogOptions['title'])) {
      $title = '';
    }
    else {
      $title = '<h2 id="reveal-header">' . $this->dialogOptions['title'] . '</h2>';
    }

    $button = '<button class="close-button" data-close aria-label="' . $this->t('Close') . '" type="button"><span aria-hidden="true">&times;</span></button>';
    return '<div class="reveal-inner clearfix">' . $title . parent::getRenderedContent() . '</div>' . $button;
  }

}
Now, I've mentioned that the command needs to match a javascript function. That means adding some new javascript to the page, which, in Drupal 8, we do by defining a library. My 'mymodule/dialog.ajax' library was attached in the middle of FoundationReveal above. My library file defines what actual javascript file to include - it is mymodule.libraries.yml and looks like this:

dialog.ajax:
  version: VERSION
  js:
    js/dialog.ajax.js: {}
  dependencies:
    - core/drupal.dialog.ajax

Then here's that actual mymodule/js/dialog.ajax.js file. It adds the 'openFoundationReveal' method to the prototype of the globally-accessible Drupal.AjaxCommands. That matches the command name returned by my OpenFoundationRevealCommand::render() method that we saw.

(function ($, Drupal) {

  Drupal.AjaxCommands.prototype.openFoundationReveal = function (ajax, response, status) {
    if (!response.selector) {
      return false;
    }

    // An element matching the selector will be added to the page if it does not exist yet.
    var $dialog = $(response.selector);
    if (!$dialog.length) {
      // Foundation expects certain things on a Reveal container.
      $dialog = $('<div id="' + response.selector.replace(/^#/, '') + '" class="reveal" aria-labelledby="reveal-header"></div>').appendTo('body');
    }

    if (!ajax.wrapper) {
      ajax.wrapper = $dialog.attr('id');
    }

    // Get the markup inserted into the page.
    response.command = 'insert';
    response.method = 'html';
    ajax.commands.insert(ajax, response, status);

    // The content is ready, now open the dialog!
    var popup = new Foundation.Reveal($dialog);;
  };

})(jQuery, Drupal);

There we have it - that last bit of the command opens the Foundation Reveal popup dialog!

I should also add that since I was showing a contact form in the popup, I installed the Contact ajax module. This meant that a site visitor would stay within the popup once they submit the form, which made for a clean user experience.

Thanks for following along with me!

Nov 09 2018
Nov 09

I'll keep this short and sweet, but we thought this would be a useful tip to share with the world as a potential security issue with the combined use of File::getFileUri() and FileSystem::realpath().

Consider the following code excerpt:

$file = File::load($some_file_uri);

if ($file) {
  $uri = $file->getFileUri();
  $file_realpath = \Drupal::service('file_system')->realpath($uri);
}

Seems pretty harmless, right? Load up the file from $some_file_uri; if we have a valid file, then get the URI and grab the real path.

Wrong (potentially, depending on what you do with $file_realpath).

If $file is a valid file, but for whatever reason the file is no longer physically located on disk, then $file->getFileUri() will return a blank string.

It turns out that passing this blank string $uri into \Drupal::service('file_system')->realpath($uri) will return the full webroot of your site!

Depending on what you were doing with said $file_realpath, it could then be a security issue.

We were handling a user webform submission and then sending the submission over to a CRM system... because $file_realpath was now the webroot of the site, the code that followed to archive the user-submitted file ended up archiving the entire webroot and sending this over to the client's CRM system.

Luckily in this instance, the archive was only ever available temporarily server-side and then went directly to the client's own CRM system, but in another circumstance this could easily have been a very serious security issue.

Fortunately the fix is quite simple: ensure that the $uri returned from ->getFileUri() is valid by some method before passing it through realpath(). Here, I now validate that the URI matches what I know it should be for the current webform submission.

if ($file) {
  $uri = $file->getFileUri();
  $webform_id = $webform->get('id');
  $submission_id = $webform_submission->get('sid')->getValue()[0]['value'];
  $valid_file_scheme = strpos($uri, 'private://webform/' . $webform_id . '/' . $submission_id . '/') !== FALSE;

  if ($valid_file_scheme) {
    // Proceed with the rest of the code...
  }
}
Oct 18 2018
Oct 18

If you've got a Drupal site, which you need to update quickly (for example, to address last night's security advisory!), here's a tip. Run this from the command line:

curl 'https://github.com/drupal/drupal/compare/7.59..7.60.patch' | patch -p1

This assumes your codebase was on Drupal 7.59 and you're currently in Drupal's root directory. If you're currently on a different version, adjust the numbers in the patch URL accordingly.

Don't forget to still run database updates via /update.php or drush updatedb!

The Drupal repo on GitHub is a verbatim mirror of the official Drupal repo. GitHub supports comparing arbitrary git references, with the /ORGANIZATION/REPO/compare/GITREF..GITREF URL, where each reference can be a tag, branch or revision. Adding '.patch' to the end of a GitHub URL formats the page as a patch. So I've made use of these three things to get an exact patch of the changes needed to update Drupal core's code.
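Put together, those three pieces mean the patch URL can be built for any pair of versions. A quick shell sketch (the version numbers here are examples; substitute your own):

```shell
# Build a GitHub compare-patch URL between two Drupal core tags.
# FROM is the version you are on; TO is the version you want.
FROM=7.59
TO=7.60
URL="https://github.com/drupal/drupal/compare/${FROM}..${TO}.patch"
echo "$URL"
# Then, from Drupal's root directory, apply it:
#   curl -L "$URL" | patch -p1
```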

We normally use Composer (especially for Drupal 8) or Drush make to manage our codebases, including core, which is definitely the ideal, but sometimes projects aren't organised this way for whatever reason. A simple patch like this avoids the need for drush, or even potential mistakes that can be made when running drush pm-updatecode (such as removing any customisations within core directories).

This method is even compatible with any core patches you already have in place, which normally have to be re-applied when upgrading core by other methods. If you have any existing changes to core that are incompatible, you'll get errors about the patch not applying, which you can then resolve manually.
(Any patches/hacks you make to core should be documented clearly somewhere, so drush make or composer-patches would be better in that scenario though!)

You can use this method to patch from github even if your core codebase is not in version control. But if it is... always check your diffs before committing! :-)

Aug 14 2018

A client noticed the dates on their news articles were not being translated into the correct language. The name of the month would always appear in English, even though all the month names had themselves been translated and showed correctly elsewhere. The problem turned out to be down to the twig filter being used in the template to format the date. This is what we did have:

{% set newsDate = node.getCreatedTime|date('j F Y') %}
{% trans %} {{ newsDate }}{% endtrans %}

So this would produce something like '1 March 2018' instead of '1 März 2018' when in German. This was because twig's core date() filter simply isn't aware of Drupal's idea of language.

I switched out the date() filter and used Drupal core's own format_date() filter instead, which uses Drupal's own date.formatter service, which is aware of language. It ensures the month name is passed through t() to translate it, separately to the rest of the numbers in the date. So it now looks like this:

{% set newsDate = node.getCreatedTime|format_date('custom', 'j F Y') %}
{% trans %} {{ newsDate }}{% endtrans %}

Having done that, I realised the original code was the equivalent of doing this:

t('1 March 2018');

(I'm less of a front-end coder, so putting it in PHP makes it more obvious to me than twig code!)

So the date actually was translatable, but only as the whole string -- so unless the translators had gone through translating every single individual date, the German month name was never going to show!

What I needed was the twig equivalent of using placeholders like this PHP example:

t('My name is @name, hello!', ['@name' => $name]);

When you use variables between twig's trans and endtrans tags, they really are treated by Drupal as if they were placeholders, so this is the twig version:

{% trans %} My name is {{ name }}, hello! {% endtrans %}

This helped me understand twig that little bit more, and appreciate how to use it better! Regardless of formatting dates, I now know better how to set up translatable text within templates, hopefully you do too :-)

Aug 09 2018

The Problem

I imagine many of us have been there: there’s some CSS class in your markup, and you need to do something with it. Maybe you want to remove it, change it, or perhaps alter its style declarations. “Easy peasy,” you think, “I’m a developer. I got this.” And so you should.

Next, if you’re anything like me, your first instinct is to fire up your search tool of choice and search your codebase for that string. You’d expect that would lead you to where that class is getting added to your markup, along with anywhere CSS rules are applied to it… right?

Except it doesn’t. Phooey. That class string doesn’t appear anywhere except in your browser's dev tools. At this point, you either toss your developer pride overboard and hack a fix in some other way, or you search for assorted variations of your string in ever shorter segments until you find something resembling this:

$classes_array[] = 'some-' . $class;

Aha! It was helpfully obfuscated for you. And of course, you could hardly expect to simply search for that class name and find its CSS rules. That would be too easy! So naturally, they were written in SASS like this:

.some-#{$class} {
  // Some declarations…
}

Now that’s just what it might look like in PHP and SASS, but I’m sure you can imagine what it might look like in your templating language, javascript, or whatever CSS-pre/postprocessor you might abuse.

The point is, you’ve gotta slow down and tread a little more carefully here; this isn’t a simple find-and-replace job anymore. There are a few reasons why such code might have been written:

  • The latter half of that class might originate from a fixed list of options exposed to your content editors.
  • Perhaps there’s some other logic in play, that has intentionally been kept out of the CSS: your element gets a class based on its region or container, for example.
  • Your colleagues are actively trying to make your life difficult.

If you’ve never been in this situation - good for you! Future-you called and asked that you avoid munging together parts of a CSS class like this if you possibly can. Do it for future-you. Don’t let them inadvertently introduce bugs when they fail to spot your class-munging!

The solution

“But what if,” I hear you cry, “I need to generate a class dynamically. How can I keep future-me on side?”

Well, dear reader – I hear you. Sometimes you really don’t want to explicitly list every possible variation of a class. Fair enough. So I have a proposal, one that I’d like a nice name for but, y’know, naming things is hard. Maybe the “Searchability class” pattern. Or “CSS search placeholder classes”. Or “CSS class search flags”. Suggestions on a postcard.

Anyways, returning to our earlier PHP example, it looks like this:

$classes_array[] = 'some-%placeholder';
$classes_array[] = 'some-' . $class;

Producing markup like this:

<div class="some-%placeholder some-arbitrary-suffix"></div>

That is: wherever you add a dynamic class into your page source, additionally put a recognisably formatted, static version of that class alongside it. That would also include anywhere you generated classes in JavaScript or any CSS-pre/post-processing madness.

Obviously, you don’t need these placeholder classes in your actual CSS (if you wanted to edit the static CSS, the regular class will already show up in searches) but if you are doing this in some dynamically generated CSS, then you’ll want to drop the static version of the class in as a comment. So our Sass example would become:

// .some-%placeholder
.some-#{$class} {
  // Some declarations…
}

Once this becomes an established practice within your team, instead of fumbling around trying to find where a given class may have come from, you’ll be able to spot those placeholder strings and search for those, and relatively quickly find all the relevant bits of code.

So I think this is something we’re going to try to adopt/militantly enforce upon ourselves at ComputerMinds. As a Drupal shop, something like %placeholder makes sense, as that syntax is used elsewhere in core to denote dynamically replaced parts of a string. It also has the advantage of being slightly tricky to actually use in a CSS selector (if you don’t already know, I’m not going to tell you). You really don’t want any styling attached to these.

So there you have it – the “Searchable CSS class placeholder flags for generated class names” pattern. We’ll keep working on the name.

Jun 26 2018

Let's say you've built a custom form for your Drupal 8 site. It contains various elements for input (name, email address, a message, that kind of thing), and you want to send the submitted values in an email to someone (perhaps a site admin). That's a pretty common thing to need to do.

This could be done with Drupal's core contact forms, webforms, or similar -- but there are cases when a bespoke form is needed, for example, to allow some special business logic to be applied to its input or the form presentation. The drawback of a custom form is that you won't get nice submission emails for free, but they can be done quite easily, with the token module (you'll need that installed).

In your form's submission handler, send an email using the mail manager service (I'll assume you can already inject that into your form, read the documentation if you need help with that):


  $params = [
    'values' => $form_state->getValues(),
  ];
  // The 'plugin.manager.mail' service is the one to use for $mailManager.
  $mailManager->mail('mymodule', 'myform_submit', 'admin@example.com', 'en', $params);

Then create a hook_mail() in your .module file, with a matching key ('myform_submit' in my example):


/**
 * Implements hook_mail().
 */
function mymodule_mail($key, &$message, $params) {
  switch ($key) {
    case 'myform_submit':
      $token_service = \Drupal::token();
      $token_data = [
        'array' => $params['values'],
      ];

      // In this example, put the submitted value from a 'first_name' element
      // into the subject.
      $subject = 'Submission from [array:value:first_name]';
      $message['subject'] = $token_service->replace($subject, $token_data, ['clear' => TRUE]);

      // Each submitted value can be included in the email body as a token. My
      // form had 'first_name', 'last_name', 'color' and 'birthdate' elements.
      $body = <<<EOT
The mymodule form was submitted by [array:value:first_name] [array:value:last_name].
They like the colour [array:value:color], and their birthday is [array:value:birthdate].
EOT;
      $message['body'] = [
        $token_service->replace($body, $token_data, ['clear' => TRUE]),
      ];
      break;
  }
}

Spot the [array:value:thing] tokens! Using these 'array' tokens makes it really easy to include whatever input gets submitted by visitors to this custom form on your Drupal site. This is most useful when the email body is configurable, so editors can move the tokens around as placeholders within the content, regardless of the form's structure. By the way, the submitted values do not get sanitized - although if your email is just plain text, that's probably not a problem.

There are more array tokens you can use too, such as ones to return a comma-separated list of all items in an array, a count of items, or just the first/last item. See the original issue for examples. These tokens are available in Token's Drupal 7 version too!
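To make that concrete, here's a hypothetical body template using those extra tokens. The token names below ([array:count], [array:join:,], [array:first], [array:last]) are my recollection of the Token module's array tokens, so verify them against your installed version:

```php
// Hypothetical email body using more of the Token module's array tokens.
// Check these token names against your version of the Token module.
$body = <<<EOT
[array:count] values were submitted.
All values, comma-separated: [array:join:,]
First value: [array:first]. Last value: [array:last].
EOT;
// Then replace tokens exactly as before:
// $message['body'] = [$token_service->replace($body, $token_data, ['clear' => TRUE])];
```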

Jun 11 2018

I volunteered to carry out the migration for the new ComputerMinds site, as migration was one of the very few areas of Drupal that I hadn't delved into thus far. With Drupal 8 becoming more and more popular, this was a great opportunity to learn the migration ropes. Luckily, Drupal 8's migration system has greatly improved since Drupal 7, so my life was made a little "easier"!

This article will focus on some of my findings and processes, rather than being a "How to do a D8 migration" guide.

Since our new site was very different to our old one in terms of content, we had to be quite choosy about exactly what was needed. We decided that we only really needed the articles; pretty much everything else was a fresh start. We would carry users over manually, as that would be too little work to warrant writing a migration.

In order for us to get our articles over from the old site, we would need to migrate the existing taxonomy terms, URL aliases (these would come back to bite me hard!), files and, last but not least, the article nodes themselves. Migrating just a node seemed simple enough, but you quickly forget that it is more than just a node: all the stuff attached to the node has to be carried over too.

Modules like Migrate plus and Migrate tools are great additions to the migration family and I can highly recommend them; they make life so much easier! Migrate plus “basically” writes the migration for you :)

With Migrate plus doing the bulk of the work for me, the only PHP code I needed to write was to map our old user IDs to the new ones, so that original authors would be retained. Otherwise I could have taken all the credit for every single article ComputerMinds has ever written (mwahaha!). This can easily be achieved using a simple process plugin.

/**
 * This plugin will tie a piece of content with an existing user.
 *
 * @MigrateProcessPlugin(
 *   id = "user_id_mapper"
 * )
 */
class UserIdMapper extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $mapping = [
      'oldID' => 'newID',
    ];
    if (!empty($value)) {
      $value = $mapping[$value];
    }
    return $value;
  }

}

We had some term reference fields, and like Drupal 7, Drupal 8 will reduce your potential workload - it creates taxonomy terms for you if those terms are missing from your new site. Nice and easy.

The biggest remaining hitch was extracting the three components from a body field: value, summary and format. Summary and format were fairly straightforward, but getting the value component right was a real pain (even though the code below looks innocent enough). Straight away you'll notice inconsistencies with the "keys": I would have expected them to be body/value, body/summary and body/format, but alas this was not the case.

body: body
'body/summary':
  source: teaser
'body/format':
  plugin: static_map
  source: body/0/format
  map:
    markdown: markdown
    full_html: basic_html
    filtered_html: restricted_html

This took a few painful hours to debug and figure out, to this day I still do not know why! At least this being documented here can save others some pain and time.

With all migration finished and the site ready to be shipped out, what became apparent was that (as mentioned very briefly earlier) I had not accounted for some URL aliases (I had used a process plugin to pinch only the aliases we needed). I'd assumed, yes assumed (naughty developer), that ALL articles had the SAME pathauto pattern. Big, big boo boo. What I didn't know was that some articles on our old site had been migrated from an even older site, and those article URLs came in all shapes and sizes; shapes and sizes that did not match our current pathauto pattern. I've been fixing redirects and 404s since :)

Lesson of the day

Do not ASSUME everything is the same. Do go check everything is how you expect it to be before migrating content.

Jun 01 2018

Let's have a quick look through our development process on this project and pick out some of the more interesting bits. As briefly mentioned in the last article we are using a composer set up and all code is version controlled using git on github. All pretty standard stuff.


In the previous article I briefly discussed how we set up Pattern Lab. Before getting stuck into the components that would make up the pages of the site, we first needed to set up some global variables and a grid. Variables allow us to reuse common values throughout the SCSS, and if we need to make a change we can do so centrally. We added variables for each of the colours, along with a colour palette mapping that would let us loop through all the colours wherever needed in the project, then variables for common padding values and for font styles, after importing the fonts from Google Fonts.

CSS Grid

Although still relatively new, CSS Grid is a web standard and works in all modern browsers. It is so much simpler than grid libraries like Susy that we were keen to start using it on our projects, and this was the perfect one on which to try it out. Set up was simple, partly due to the simple grid in the designs but mostly due to the simplicity of CSS Grid itself. A few lines of SCSS and the grid wrapper was set up:

.grid {
  display: grid;
  grid-auto-rows: auto;
  grid-gap: 20px;
  grid-column-gap: 20px;
  grid-template-rows: minmax(0, auto);
}

This declares the grid, sets a consistent gap of 20px and sets a broad size range for the rows. As well as adding the .grid class to the wrapper of where we'd like a grid, we also need to add another class to define how many columns that grid should have. Defining, in SCSS, a simple mapping allowed me to create a loop to generate the column classes we needed:

// Column mapping
$columns: (
  one: 1,
  two: 2,
  three: 3,
  four: 4,
  five: 5,
  six: 6,
);

// Generate column classes
@each $alpha, $numeric in $columns {
  .grid--columns-#{$numeric} {
    grid-template-columns: repeat(#{$numeric}, 1fr);

    @include to-large {
      grid-template-columns: repeat(1, 1fr);
    }
  }
}
This loop generates a class for each possible number of columns we might need. The last @include in the above code simply resets the column definition, making all columns full width on smaller screens. Now all we needed to do was add two classes and we'd have a grid!

Occasionally, we'd need grid items to span more than one column. Using the same mapping as before, I created a simple loop to generate classes defining different column spans. These classes could then be applied to the immediate children of the grid wrapper.

.grid__item {
  @include from-large {
    @each $alpha, $numeric in $columns {
      &--span-#{$alpha} {
        grid-column: auto / span #{$numeric};
      }
    }
  }
}

Now we have complete control over our grid. Here's an example of how it's used.

<div class="grid grid--columns-6">
  <div class="grid__item">
    First item
  </div>
  <div class="grid__item grid__item--span-two">
    Second item spanning two columns
  </div>
  <div class="grid__item grid__item--span-three">
    Third item spanning three columns
  </div>
</div>
Pattern Lab

In the previous article I mentioned the setup of Pattern Lab and Emulsify but didn't look into the actual development, so let's do that now! Although we're used to coding SCSS in a modular way here at CM, with Pattern Lab's standalone components essentially working like modules, we don't need to take much extra care to produce nice clean code. Each SCSS file only caters for a small component on the page and as such is usually small and specific.

But, as well as including our pattern specific code within each component's directory we needed to ensure that we also considered working in a SMACSSy way to reduce the CSS we were generating. We didn't want multiple classes applying the same styling, so any rules that would be reused and consistent, like padding, were placed inside the Base folder in a Base SCSS file.

Of course, once we had defined our classes we needed to get them into the Pattern Lab Twig templates. As components have variations we can't just hard-code the classes into the templates; we need to pass them in as variables. Passing variables to Twig files is super simple, and with Emulsify 2.x there's now even Drupal Attributes support with the addition of the BEM Twig extension. As we are likely to want to pass multiple classes to the same element, we can pass in a simple array of modifiers and render it out in the Twig template. So in a Drupal preprocess we can prepare some modifiers (we'll look at passing these on to the Pattern Lab Twig files later):

$variables['heading_modifiers'] = ['centered', 'no-space'];

And then in our Twig file we pass this through the BEM function:

{% set heading_base_class = heading_base_class|default('h' ~ heading_level) %}
<h{{ heading_level }} {{ bem(heading_base_class, heading_modifiers) }}>
    {{ heading }}
</h{{ heading_level }}>

Which renders the markup as:

<h1 class="h1 h1--centered h1--no-space">


The beauty of using Pattern Lab is the ability to work simultaneously on frontend and backend development. Before bringing more hands on deck, I was able to begin the backend of the site without getting even close to completing the frontend. As mentioned earlier, the codebase was set up before the front-end work began, so we could jump straight into the Emulsify theme. Using Composer allowed us to quickly get Drupal 8 and the bunch of contrib modules we needed, so when we were ready to start on the backend we could dive right in.

This site required nothing too complex in terms of backend development and the work was more a task of building content types and views to display content as per the designs. That said, we did utilise the Paragraphs module allowing us to create reusable entities, or tiles as we're used to calling them, as they are used extensively throughout the designs.


Something that hasn't been handled in a standard way across our Drupal 8 builds since the release is configuration. Gone are the days of bundling settings into features: Drupal 8 core comes with configuration management tools. In the early days, one of our senior developers created cm_config_tools, a module to give developers precise control over what config to export. Drupal 8 has progressed since then, and the timing of this project allowed us to use a new module, Configuration Split.

Configuration Split builds on Drupal core's ability to export a whole set of a site's configuration by allowing us to define sets of configuration to be exported to separate directories. It's then possible to define in settings.php which directories to include when importing/exporting. As we were committing settings.php to git, we could include the main config directory there and then have a local.settings.php (not committed to git) to define the database connection and any other config directories to include:

## Enable config split settings
$config['config_split.config_split.local_dev']['status'] = TRUE;
$config['config_split.config_split.local_overrides']['status'] = TRUE;

This means we can have configuration solely for use when developing (things like Devel and Field_UI). It's also possible to override settings that are included in the main config export, locally. This allows us to run local environments without fear of interfering with live functionality, like affecting comments by changing the Disqus Domain, for example.
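For reference, this two-file arrangement relies on the committed settings.php conditionally including the local one. The filename below matches common Drupal practice rather than anything mandated:

```php
// At the bottom of the committed settings.php: pull in local overrides
// (database credentials, config split toggles) when the file exists.
$local_settings = __DIR__ . '/local.settings.php';
if (file_exists($local_settings)) {
  include $local_settings;
}
```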

Importing and exporting works the same way as Core's configuration management, by using Drush commands:

drush cim
drush cex


In a normal Drupal project, the markup (Twig files) would live within Drupal's templating system, with prepared variables rendered out where needed. With our component-based Pattern Lab, all of our markup was within the Pattern Lab structure, away from Drupal's /templates directory. Fortunately, including them is simple enough. First we needed to download and install the Component Libraries module. This allowed us to specify a different directory for our Twig files and also register Twig namespaces for those files. We do this in the theme's .info file:

component-libraries:
  base:
    paths:
      - components/_patterns/00-base
  atoms:
    paths:
      - components/_patterns/01-atoms
  molecules:
    paths:
      - components/_patterns/02-molecules
  organisms:
    paths:
      - components/_patterns/03-organisms
  templates:
    paths:
      - components/_patterns/04-templates
  pages:
    paths:
      - components/_patterns/05-pages

Now our Pattern Lab Twig files were included, we could begin to link them up to Drupal's templating system. Linking them is as simple as choosing which components you want to display and then calling that Twig file from your Drupal template. When you call the component's Twig file you just need to pass in the variables from Drupal.

So if we wanted to display a page title as an H1, within page-title.html.twig inside Drupal's template directory we would call our Pattern Lab's heading component passing in the title and heading level:

{{ title_prefix }}
{% if title %}
  {% include "@atoms/02-text/00-headings/_heading.twig"
    with {
      "heading": title,
      "heading_level": 1,
    }
  %}
{% endif %}
{{ title_suffix }}

If we wanted to change the style of the heading we could pass in an array of modifiers, as shown in the example further up the page, too. For more complex page components we can also pass in an array to be looped over inside the component's Twig file. For example, if we wanted a listing of cards we could pass an array to a listing component Twig template and within that loop through the array each time calling another component's Twig template:

<div class="grid grid--columns-3">
  {% for item in content_array %}
    <div class="grid__item grid__item--{% if loop.index is divisibleby(2) %}right{% else %}left{% endif %}">
      {% include "@molecules/card/01-card.twig"
        with {
          "card_img_src": item.image,
          "card_title": item.title,
          "card_body": item.body,
          "card_button_content": item.button_text,
          "card_button_url": item.button_url,
          "card_button_modifiers": item.button_mods,
          "card_url": item.url,
          "card_img_alt": item.image_alt,
        }
      %}
    </div>
  {% endfor %}
</div>

This is just a brief overview and a look at some interesting parts, there was obviously a lot more work that went in to the site build! Now, as this website was being built to replace our old site, we needed the content from old site to be moved over. In the next article Christian is going to talk through this process.

May 30 2018

The new GDPR laws are here, hurrah!

Having a number of developers handling databases from a number of client sites could easily be a nightmare, but we at ComputerMinds spent quite some time thinking about how to get and keep everybody safe and squeaky clean on the personal data front.

Here's a quick run-down of the key things to be aware of - and a pretty poster to help you keep it all in mind :)

Remove personal data from your system

  1. Review all databases on your computer, making sure to consider also those .sql dump files still sat in your downloads directory or your Recycle bin/trash.
  2. If there are databases that you need to keep on your system, then you must sanitize them by encrypting, anonymizing or removing personal data.
  3. Review all testing / UAT environments and ensure they're running off sanitized databases where possible.

Stay clean by using sanitized databases

Set up some _drush_sql_sync_sanitize() hooks to deal with personal data stored on your site. Then either have your Jenkins server use it to provide sanitized dumps, or ensure that your developers use it to sanitize databases immediately after importing.

When setting up your hook, make sure to consider things like:

  • User table - clear out email addresses, usernames etc.
  • Custom fields on users - names, telephone numbers etc. that you've added.
  • Webform / contact form submissions - make sure that your Webform / contact form data gets cleared out. Webform 7.12 and above has these hooks included, but it's good to double-check.
  • Commerce order table - you'll need to remove personal data from the commerce orders.
  • Commerce profile tables - make sure that the personal data in the profiles gets anonymized or removed.
  • Commerce payment gateway callback tables - these will have detailed payment transaction data, and absolutely must be cleared out.
  • URL aliases & redirects - by default Drupal sets up aliases for users' usernames, so you'll need to review those tables.
  • Comments - these usually have name, email and website fields that will need clearing out. But their body content may also have personal data in too, so you might be better off just binning the lot.
  • Watchdog / logging tables - these take up lots of space, so you probably don't want to export them off the live site anyway, but think seriously about the personal data inside if you do decide you want to import them elsewhere. Truncate recommended.
  • Cache tables - these can be huge, so you probably don't want to export them off the live site anyway, but think seriously about the personal data inside if you do decide you want to import them elsewhere. Truncate recommended.

This is certainly not a complete list, but we can't tell you what custom fun you've implemented on your site - so it's down to you to go check your tables!
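As a starting point, a hook implementation could look like the sketch below. This is a hedged example against the Drush 8-era drush_sql_register_post_sync_op() API; the table and column names assume a stock Drupal 7 schema, so adjust everything for your own site:

```php
/**
 * Implements hook_drush_sql_sync_sanitize().
 *
 * A sketch only: queries assume a stock Drupal 7 schema.
 */
function mymodule_drush_sql_sync_sanitize($site) {
  // Anonymise email addresses and usernames for everyone except user 1.
  drush_sql_register_post_sync_op('mymodule_users',
    dt('Anonymise user names and emails'),
    "UPDATE users SET mail = CONCAT('user-', uid, '@example.com'), name = CONCAT('user-', uid) WHERE uid > 1;");

  // Logs should never leave the live server anyway; truncate to be sure.
  drush_sql_register_post_sync_op('mymodule_watchdog',
    dt('Truncate watchdog'),
    'TRUNCATE watchdog;');
}
```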

Stay vigilant

  • Ensure future development environments and UAT/test environments are built using sanitized databases.
  • If you receive user data via email, immediately delete the email and attachments and reprimand the sender!
  • Talk to your clients about changes that need to be made to their sites.

The "GDPR and You" poster for developers, by ComputerMinds. PDF download link below!

May 14 2018

We ran into an obscure error recently, when saving a view that used a custom views plugin. It was supposed to be a very simple extension to core's bundle (content type) filter:

InvalidArgumentException: The configuration property display.default.display_options.filters.bundle.value.article doesn't exist. in Drupal\Core\Config\Schema\ArrayElement->get() (line 76 of [...]/core/lib/Drupal/Core/Config/Schema/ArrayElement.php).

Several contrib projects ran into this issue too: Drupal Commerce, Search API and Webform Views integration. There's even a core issue that looked relevant... but it turned out to be a simple, if perhaps surprising, fix. If you ever run into it, the error will name a different property (i.e. according to whichever plugin or default value is used).

Our filter was little more than a simple subclass of \Drupal\views\Plugin\views\filter\Bundle, declared for a specific entity type in a very ordinary hook_views_data() (which had even been autogenerated by Drupal Console, so we were fairly confident the problem wasn't there). It just tailored the options form a little to work well for the entity type.

Views plugins all have their own configuration schema - for example, the bundle filter is declared in views.filter.schema.yml to use the 'in_operator', because multiple bundles can be selected for it. When we subclass such a plugin, we do not automatically get to inherit the configuration schema (as that is not part of the PHP class or even its annotation). (Perhaps core could be 'fixed' to recognise this situation ... but there are more important things to work on!)

The solution is to simply copy the schema from the plugin we've extended - in our case, that was 'views.filter.bundle', found in core's 'views.filter.schema.yml' file within views' config/schema sub-directory. Wherever it is, it's probably named 'views.PLUGIN.ID', where 'PLUGIN' is the type of your plugin (e.g. field, filter, area), and 'ID' is the ID in the class annotation of the class your plugin extends. We pasted the schema into our own schema file - which can be named something like /config/schema/mymodule.schema.yml, within our module's directory:

# Replace 'mymodule_special_type' with the ID in your plugin's annotation.
views.filter.mymodule_special_type:
  type: views.filter.in_operator
  label: 'Customised type selector'

Once that file is in place correctly, I expect you just need to rebuild caches and/or the container for Drupal to be happy again. Re-save the view, and the error is gone :-)

Configuration schemas should normally help development by catching errors, but as I've written before, an incorrect schema can make things surprisingly difficult. I hope someone else finds this article solves their problem so it doesn't take them as long to figure out! I haven't used it, but it's possible that the Configuration inspector project could help you identify issues otherwise.

May 04 2018

A super quick blast from the past today; a Drupal 7 based article!

I had some work recently to create a new "setting" variable for one of our Drupal 7 multilingual sites, which meant creating multilingual versions of that variable. I soon found out that there is very much a correct way - or order - to achieve this, as I got it very wrong (I had to re-instate my DB!). So here I am, writing a very quick guide to save others from my wrongdoings.

(This guide assumes you have a multilingual site setup with i18n's Variable translation module.)

Four simple steps to achieve a multilingual variable:

  1. Declare your new variables via hook_variable_info
function your_module_name_variable_info($options = array()) {
  $variables['your_variable_name'] = array(
    'title' => t('Foo'),
    'description' => t('A multi-lingual variable'),
    'type' => 'string',
    'default' => t('Bar'),
    'localize' => TRUE,
  );
  return $variables;
}

The options you can set in this hook are well documented - start reading from the Variable module's project page.

  2. Flush the variable cache and get your new variables registered using an update hook. The meat of the update hook is below -- note that this assumes you want all of the possibly-localizable variables to be made translatable:
  /** @var VariableRealmControllerInterface $controller */
  if ($controller = variable_realm_controller('language')) {
    $variables = $controller->getAvailableVariables();
    $controller->setRealmVariable('list', $variables);
  }
  else {
    throw new DrupalUpdateException('Could not set up translatable variables. Try manually setting them.');
  }

  3. Create or alter your settings form (I'm assuming it uses system_settings_form() or is already recognised by the i18n/variable systems as a form containing translatable variables) and add your new form elements. Make sure the element key(s) match your newly created variable name(s) - I use a $key variable to avoid any mistakes there!
$key = 'your_variable_name';
$form[$key] = array(
  '#type' => 'textfield',
  '#title' => t('Foo'),
  '#default_value' => variable_get($key, 'Bar'),
);

  4. Head over to /admin/config/regional/i18n/variable or your settings form to see your new multilingual variable in all its glory!

May 03 2018
May 03

We didn’t see this project solely as a chance to rebrand and rebuild for ourselves; it was also an opportunity to try something new and expand our collective knowledge, with the potential to use it with clients in the future. We had been discussing using Pattern Lab for front end development for some time, and this was the perfect opportunity to try it out.

Pattern Lab allows the creation of component-driven user interfaces using atomic design principles. This means we can create modular patterns all packaged up nicely that can be assembled together to build a site. Plus we can use dynamic data to display the patterns in a live style guide which makes viewing each component quick and easy. And nobody is viewing out of date designs or code - an issue we had on a recent project. With Pattern Lab, we would have a central place to view all of the actual components that will be used on the live site.

The guys over at Four Kitchens made implementing a Pattern Lab with Drupal super easy, by creating Emulsify. Serving as a Drupal 8 starter kit theme, Emulsify allows you to build and manage components without using Drupal's template names but by using custom template names that make sense to everyone, instead. When you're ready, you can easily connect them up to Drupal.

Building the frontend in this way allows it to be built separately from the backend development. It's possible to create the whole of the front end before even touching Drupal. If needs be, it also allows developers to work on the frontend and backend at the same time without stepping on each other's toes.

Because we were to be using Emulsify, we quickly set up a Drupal codebase (using composer) which allowed us to then jump in and clone the Emulsify theme and begin working on the front end. Once we had the theme, it was really easy to get the Sass and JavaScript compiling, set up a local server (to view the style guide) and watch for changes, with one command:

npm start

As well as compiling, this command also provides a url for the local server. Open it up in a browser and you can see your live style guide!

Now for the actual components. These are filed in the theme, inside:


As we're working with Atomic Principles, the smallest components are filed first building up to the biggest. The beauty of pattern lab is nesting - it's possible to include patterns inside each other to make larger components. Although it's not necessary to use Atomic Design naming conventions when organising patterns, it does make sense. These levels are:

  1. Atoms
  2. Molecules
  3. Organisms
  4. Templates
  5. Pages

Atoms are the basic elements of the site like HTML tags and buttons. Molecules combine these Atoms to create larger components like a card and then Organisms can combine Molecules to create more complex page components and so on...

Numeric prefixes are added to ensure they are listed in the correct order, and a Base folder is added to house variables, breakpoints, layouts and global Sass mixins. So, this is how our file structure looks inside that _patterns folder:

- _patterns/
    - 00-base
    - 01-atoms
    - 02-molecules
    - 03-organisms
    - 04-templates
    - 05-pages

Within each of the Atomic Design folders is a set of components. Each component comprises a Twig file, a Sass file and, in some cases, a JavaScript file. These files contain code specific to that component. Having all the code for each component organised this way makes it really easy and fast to edit components, and also to add new ones.

Having just the code that makes the component isn't enough for us to view it in our style guide. In addition to the files that make up the component, we can also include files to give the component context. A Markdown file allows us to give the component a title and description, which are used in the navigation and style guide. To each component folder we can also add a YML file which holds filler content solely for use in the style guide. We basically just take each variable name from the twig file and provide some content for each.

So, a typical component file structure might look like this:

 - card
    - card.twig
    - _card.scss
    - card.js
    - card.md
    - card.yml
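As an illustration - the file contents here are entirely hypothetical - the Twig file defines the component's markup with placeholders, and the YML file supplies matching filler content so the component renders in the style guide:

```twig
{# card.twig: the markup for the component, with placeholders for content. #}
<div class="card">
  <h2 class="card__title">{{ card_title }}</h2>
  <div class="card__body">{{ card_body }}</div>
</div>
```

```yaml
# card.yml: filler content used only by the Pattern Lab style guide.
card_title: 'An example card'
card_body: 'Some placeholder text so the component renders in Pattern Lab.'
```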

Once we had a full understanding of the structure and had added our colour palette and set up our grid and breakpoints it was a case of working through the designs to determine what the components were and of which size. Then starting with Atoms and working up we could build each component. We'll look at the actual development in the next article in the series.

Apr 10 2018
Apr 10

Once upon a time, we wrote an article about how to render fields on their own in Drupal 7, which was really handy because Drupal 7 wasn't always intuitive. It's common to want to display a field outside of the context of its main entity page, like showing author information in a sidebar block or in a panel, but you had to just know which functions to use. Drupal 8 has come along since then using 'grown up' things like objects and methods, which actually makes the job a little easier. So now we have this:

The short answer
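In its simplest form - assuming you have a loaded node with a field named field_my_thing, as used throughout this article - viewing a field is a single method call:

```php
// Returns a render array, built using the field's configured formatter
// and the normal field template.
$output = $node->field_my_thing->view();
```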


Quick reference

I'll cover these in detail below, but here are the things you might want to be doing:

  1. Render a field as if it were on the node/entity page (e.g. with the markup from the normal formatter and full field template)

Drupal 7: field_view_field(), optionally passing a view mode string, or formatter settings array.
Drupal 8: $node->field_my_thing->view(), optionally passing a view mode string, or formatter settings array.

  2. Just get a single value out of a field (i.e. raw values, usually as originally inputted)

Drupal 7: field_get_items() and then retrieve the index you want from the array.
Drupal 8: $node->field_my_thing->get(0)->value, passing just the index you want to get(). Properties other than 'value' may be available.

  3. Render a single value as if it were on the node/entity page (e.g. with the normal formatter's markup, but not all the wrappers that Drupal's field templates give you)

Drupal 7: field_view_value(), optionally passing a view mode string, or formatter settings array, but always passing the actual items array.
Drupal 8: $node->field_my_thing->get(0)->view(), passing just the index you want to get() and optionally passing a view mode string, or formatter settings array to view().

The long answer

Now that entities like nodes are properly classed objects, and fields use the fancy new Typed Data API, we don't need to care about the underlying data structure for nodes or their fields, we can just call the method to perform the operation we want! You know, just what methods are supposed to be for! You want to view a field? Just call its 'view' method.

The output will be automatically sanitised and goes through the normal formatter for the field, as well as the regular field template. You can specify whether you want it rendered as if it were in a teaser or other view mode, by passing in the view mode string, just as we did with field_view_field(). (Or you might have used something like $node->field_my_thing['und'][0]['value'] - in which case, go back and read our article for Drupal 7!)
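For example, to render the field as it would appear in the teaser:

```php
// Render using the formatter configured for the 'teaser' view mode.
$output = $node->field_my_thing->view('teaser');
```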


Or even override the formatter to be used altogether (which is handy if the field would normally be hidden in any view mode):

$output = $node->field_my_thing->view(array(
  'type' => 'image',
  'label' => 'hidden',
  'settings' => array(
    'image_style' => 'larger_thumbnail',
    'image_link' => 'content',
  ),
));

This does assume that the field in my examples ('field_my_thing') at least exists on your node (even if it's empty). You may want to wrap the whole code in a try/catch block, just in case it might not.

For bonus points, you could load up the normal formatter settings, and tweak them:

use Drupal\Core\Entity\Entity\EntityViewDisplay;

// If you have the entity/node you want, use collectRenderDisplay() as it may
// already be statically cached, plus it goes through various alter hooks.
$display_options = EntityViewDisplay::collectRenderDisplay($node, 'teaser')
  ->getComponent('field_my_thing');

// Otherwise - or if you intentionally want to re-use the settings from another
// unrelated entity type, bundle or display mode - just load the display config.
$display_options = EntityViewDisplay::load('paragraph.media_pod.teaser')
  ->getComponent('field_my_thing');

// Then tweak those display options and view the field.
$display_options['settings']['image_style'] = 'even_bigger_thumbnail';
$output = $node->field_my_thing->view($display_options);

This all assumes you've at least loaded your node, or other entity. (It works with any content entity, like terms, paragraphs, etc!) You'd probably be putting the result of the view() method (which will be a render array) into a variable to be used in a twig template via a preprocess hook. Or maybe you're just adding it into an existing render array, like a custom block plugin. (Either way, you probably shouldn't then be rendering it into a string yourself, let Drupal do that for you.)

You can even just view a single item within a multi-value field like this, here using an 'if' condition to be a bit more graceful than a try/catch. This is the equivalent of using field_view_value() from Drupal 7, so also skips Drupal's full field template, though includes markup produced by the field's formatter:

// View the third value, as long as there is one.
if ($third = $node->field_my_thing->get(2)) {
  $output = $third->view();
}
else {
  $output = [];
}

That helps you see how you might get a single value too, with the get() method, though note that it still returns a classed object. To just get a raw value, without any wrapping markup or value processing/sanitisation, you might want to use something like this, instead of Drupal 7's field_get_items() :

$text = $node->field_my_thing->get(0)->value;

// If the field is just a single-value field, you can omit the get() part.
$value = $node->field_single_thing->value;

// Some types of field use different properties.
$url = $node->field_my_link->uri;

// You can use getValue() to get all the properties (e.g. text value + format).
$text_values = $node->field_formatted_text->getValue();

// References can be chained!
$referenced_title = $node->field_my_reference->entity->field_other_thing->value;

In Drupal 7, there was also confusion around what to do for multilingual content. In Drupal 8, as long as you've got the translation you want first, all the methods I've discussed above will get you the appropriate values for your language. To get a specific translation, use:

if ($node->hasTranslation($langcode)) {
  $node = $node->getTranslation($langcode);
}

This Rocks.

You get to use proper modern methods on a proper typed data API. Sanitisation is done for you. You don't need to care what the underlying data structure is. And you don't need to remember some magic global procedural functions, because the methods are obvious, right there on the thing you want to use them on (the field item class). If the Drupal 7 version of this was brilliant, that makes the Drupal 8 version of this even better. Brilliant-er?

Apr 03 2018
Apr 03

If you don't have access to the file system on the server for a Drupal site, when a security issue like Drupalgeddon2 comes along, you are entitled to panic! Many sites are run by a combination of teams, so sometimes you really don't have control over the server... but that might even mean there is another way to apply fixes. If you've been tasked with updating such a site (I was!), it's worth checking if the server has been misconfigured in such a way to actually allow you to patch Drupal, via Drupal!

A heavy caveat first: we would never recommend setting up servers like this, nor that you try this without extreme care!

Instead, you should do everything you can to get secure access to the server and communicate with those responsible for managing it. But I realise that sometimes you just have to be practical and improvise, building on many years of experience. For special occasions like this, I suggest making it very clear to anyone around you that you are now proceeding to do something very dodgy. Perhaps wear an enormous sombrero, or play overly dramatic music across the office?

A developer wearing a comic sombrero to indicate they are directly editing code on a live site.

So, assuming you can at least log in as an admin, first get the Devel module installed. It can slow a site down, and may be a security risk in itself, so it shouldn't normally be enabled on a production site, but it's common for it to at least be in the codebase, since it's so useful during development. If it's not listed on the modules page, maybe your site allows modules to be uploaded through the interface at /admin/modules/install ? (That functionality always scares me...) Anyway, get Devel installed if you can, and ensure you have the 'Access developer information' and 'Execute PHP code' permissions.

That last bit hints at where we're going with this... head to /devel/php and you'll find you can run PHP code from a text box! Now, the next bit relies on a server misconfiguration. Web servers "should not have write permissions to the code in your Drupal directory" ... but maybe yours does. Find out by attempting to modify the code of one of Drupal's files. For Drupal 7, that means putting the following code into the box and submitting it. For Drupal 8, you can do something similar, but the text being replaced here will need to be something else that does exist in a file.

// Load up Drupal's bootstrap file.
$file = DRUPAL_ROOT . '/includes/';
$content = file_get_contents($file);
// Add a comment to the end of one of the lines.
$hacked = str_replace('drupal_settings_initialize();', 'drupal_settings_initialize(); // james was here', $content);
// Replace the file.
file_put_contents($file, $hacked);

// Print the contents of the file.
print file_get_contents($file);

If you see 'james was here' in the response, you will be able to hack a patch into Drupal a bit like this! If not, congratulations, your web server is correctly configured not to allow this!

I won't put the actual fix here, as you can probably figure it out from there - you'll probably want to use str_replace() similarly on the appropriate file(s) that needs patching, for example. (If you can't figure it out, then you probably shouldn't trust yourself with this level of 'hacking' a fix onto a site!)

To verify your fix, you can do the following:

  1. Go to /devel/php?%23that=thing&other=this
  2. Run dpm($_GET);
  3. The resulting array should only have the 'q' and 'other' parts, as the '%23that=thing' part should have been stripped by the security fix.

If you really want, you can add $conf['sanitize_input_logging'] = TRUE; to your site's settings.php file (or $settings['sanitize_input_logging'] = TRUE; for Drupal 8) to log any unsafe inputs that get tried against your site (whether you've applied the fix the proper way or like this). That would show the '%23that=thing' up under /admin/reports/dblog , if you've got database logging enabled.

Apr 03 2018
Apr 03

Drupalgeddon2 happened! We got all but two of our projects updated within an hour, with those remaining trickier two fully patched another hour later. The key was planning the right process using the right tools. We actually use these tools for regular deployments every day, but speed was essential for this security update. Here's what we did, since some of you may be interested.

  1. Our on-call developers split up the various sites/environments/projects that would need updating amongst themselves, using a simple online shared spreadsheet.

  2. Ahead of time, we prepared pull requests for sites that simply use Drush make files to specify core versions. (We prefer to keep 3rd-party code, including Drupal core, out of our main project repos, by using composer or Drush make to bring in those dependencies during a build step, executed by Jenkins.) The diff for these would be something as simple as this:

diff --git a/build/stub.make b/build/stub.make
index 03599c39..ff084895 100644
--- a/build/stub.make
+++ b/build/stub.make
@@ -3,7 +3,7 @@ api = 2
 ; Drupal core.
 projects[drupal][type] = core
-projects[drupal][version] = 7.57
+projects[drupal][version] = 7.58

  3. We monitored all the potential places that the security fix could become available. We noticed it first appear from (we just guessed the filename, as we knew what the release number should be). This meant we were even a few minutes ahead of public announcements actually being made.

  4. We assessed the changes that make up the security fix immediately, so we knew whether to fast-track patches directly onto live servers, or if we could use our normal build process. We decided almost all of our sites could be done normally, and we kept an eye out for whether the build steps might become a bottleneck (but we knew that would probably be fine, which was indeed the case). The fix was trivial enough that we didn't need to add any testing to what had already been done by those producing it for Drupal core. In any case, this security risk was so critical that it was better to risk breaking site functionality than to delay applying the fix. We now know that there were no issues introduced by the changes anyway.

  5. We merged any of those pre-prepared pull requests that could be done immediately.

  6. Projects with a composer-based workflow (i.e. Drupal 8 ones) would include a hash change in the composer.lock file, so they couldn't be done ahead of time. As soon as the update was available, we ran this command for each project, which relied on correct version constraints already being in place (e.g. "drupal/core": "~8.4.0"):

composer update drupal/core symfony/* --with-dependencies

We included all symfony dependencies because we were aware that they can block updates - see Jeff Geerling's excellent write-up. For most sites, that simply changed the composer.lock file to use the new Drupal version, so that could be committed & pushed immediately. Some sites would get files from core updated that we wanted to avoid changing (e.g. robots.txt), so we'd revert those.
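Reverting such a file is a one-liner with git. Here's a self-contained sketch (throwaway repo and made-up file contents, purely for illustration) of restoring a committed robots.txt after an update overwrote it:

```shell
# Set up a throwaway repo with a committed robots.txt.
cd "$(mktemp -d)"
git init -q .
echo "User-agent: *" > robots.txt
git add robots.txt
git -c"demo@example.invalid" -c user.name="demo" commit -qm "Add robots.txt"

# Simulate a core update overwriting the file with changes we don't want.
echo "overwritten by update" > robots.txt

# Discard the working-copy change, restoring the committed version.
git checkout -- robots.txt
cat robots.txt
```

After the checkout, the file is back to its committed contents and won't show up as a change to commit.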

  7. A few of our projects (usually ones adopted from previous owners) keep the whole site codebase (bespoke code + Drupal core & all dependencies) within their main repo. The changes would be very quickly reviewed manually then committed & pushed. For those, the build process is then mostly about verification & deployment.

  8. Most of our projects have a build process (using Jenkins) that gets immediately triggered on new commits being pushed to the branches that production/staging environments use. This usually means building a 'full' codebase that includes our bespoke repo, together with the dependencies specified by the composer or drush make files. We use Docker containers and appropriate version constraints to ensure the build process is correct for each project's actual target servers. (For example, building for the appropriate version of PHP, as some dependencies may be different for PHP 7.x or 5.x.) As soon as the full codebase is built, that usually gets automatically deployed to the live sites, and then scripts are run to apply any necessary updates or clear/rebuild caches. (These scripts, as well as the build tasks themselves, are all part of each project's main code repo.) Builds all run in parallel, so we could update many many sites at the same time.

  9. We use Aegir to manage some projects - in particular, one hosts around 1000 sites. Doing the 'correct' thing and building a new platform and migrating each site would have taken too long, so we went ahead and directly applied the patch to the platforms of code from the command line.
    We have since updated the git repo for the project as above, and done a 'correct' migration of the entire platform in Aegir.

  10. We would then quickly check the live site is actually running the updated version of Drupal, which is reported under /admin/reports/status .

Using this process, all but two of our sites were updated within an hour. As you can see, this relies on having a strong and reliable build/deploy process set up. Admittedly, there's a few sites which we have to do a little more manually as we don't have the same level of control over their servers (e.g. where we work together with a team at a client that manages that part of a project). Most of those required SSH and a git pull.

We also encountered these extra issues along the way. What issues did you run into? Let us know in the comments, let's all learn from each other in case there's ever another tricky core update!

  • A site that we consult for, but aren't the primary developers for, uses some core patches, which would need some re-rolled versions to be compatible with the newer version of core. Those patches were just removed initially to get the security fix out quickly, as that was way more important than the functionality that those patches were fixing.

  • We use composer gavel to ensure we are all running the latest version of composer, to reduce merge conflicts in composer.lock . That requires PHP 7, but we had one D8 site on PHP 5.6 (enforced via the platform section of composer.json), so I had to find a workaround, now fixed here.

  • We had to be creative to deploy the fix to one site, which we didn't have proper server access to. The client's own team manages its deployments, but they were not available. The next article in this series covers that special case, if you're interested! It wasn't one for the faint-hearted...

Mar 14 2018
Mar 14

We've settled on what we think is a best practice for class naming for Javascript in Drupal – let me explain what I mean and then talk you through our reasoning.

Say we have a Drupal Javascript behavior along the lines of:

(function($, Drupal) {

Drupal.behaviors.computermindsBehavior = {
  attach: function(context) {
    $(context).find('.js-computerminds-behavior').once('computerminds-behavior').each(function() {
      // @TODO: do something really useful with $(this).
    });
  }
};

})(jQuery, Drupal);

The bit that I want to discuss today is the selector in the .find method, i.e. this bit: '.js-computerminds-behavior' 

Our best practice is:

  1. The class name is something I've decided upon myself, and I'm not using a class that Drupal drops onto an element for me.
  2. I've used a js- prefix on my selector/class name.

A custom class

I've used a custom class that I'll need to add manually to my elements in HTML so that I have a much looser coupling between the Drupal PHP output and the frontend functionality.

If you're using Drupal 8 with the classy theme (or in Drupal 7 by default) you get a lot of classes added automatically – this is so that you don't need to go around overriding template files all the time. This arguably makes writing CSS and JavaScript easier, however you'll end up with CSS and JavaScript that's not very reusable because it'll be tightly coupled to the Drupal systems generating the output.

As an example, recently my colleague - let's call him 'ComputerMinds Developer A' - added a block to a custom module and wrote some javascript similar to the behavior outlined above, but with a Drupal-generated selector:

(function($, Drupal) {

Drupal.behaviors.tightlyCoupledBehavior = {
  attach: function(context) {
    $(context).find('.block--custom-module--block-1').once('computerminds-behavior').each(function() {
      // @TODO: do something really useful with $(this).
    });
  }
};

})(jQuery, Drupal);

The block worked wonderfully, the client was very happy, the client wanted a second, similar block, but styled a little differently.

Enter 'Computerminds Developer B' who essentially did a marvellous copy/paste of the PHP and tweaked a few strings here and there. The second block was lovely. It needed to look a tiny bit different, so some tweaked CSS was applied, but functionally it should have been identical to the first block.

But it wasn't.

Because of the class the JS was attached to, Developer B would have had to go and modify the JavaScript behavior:

(function($, Drupal) {

Drupal.behaviors.tightlyCoupledBehavior = {
  attach: function(context) {
    $(context).find('.block--custom-module--block-1, .block--custom-module--block-2').once('computerminds-behavior').each(function() {
      // @TODO: do something really useful with $(this).
    });
  }
};

})(jQuery, Drupal);

Which isn't too bad, until the client wants another 6 blocks… eek.

Sadly this was missed, and 'Computerminds Developer B' didn't add the additional selectors to the JS. Oh and 'ComputerMinds Developer B' is actually 'ComputerMinds Developer A', but a few months older. Whoops.

The prefix

Class names on the web are used for two main things: styling via CSS and hooks for JavaScript. I'd argue that keeping those domains separate is a good thing. All of your visual styling comes from CSS, and your dynamic functionality comes from Javascript. Using a js- prefix in our selector here just reinforces that separation.

If I see a js- prefix in a template or in some HTML, I instantly know that this is to do with some special functionality that only JavaScript can provide.

Keeping the js- prefixed classes out of our styling/CSS also means that we can freely add that class to other elements on the page that need the behaviour provided by JavaScript and know that the styling of the element won't change at all.
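As a quick illustration (the class names here are hypothetical), the same element can carry both kinds of class, keeping the two domains cleanly separated:

```html
<!-- 'product-card' is targeted only by CSS; 'js-computerminds-behavior' only by JavaScript. -->
<div class="product-card js-computerminds-behavior">
  ...
</div>
```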

Explicit wins for us

By being explicit with the class names we're using in our JavaScript we can hopefully make it more obvious and explicit that something interesting is happening to a set of elements on the page.

It creates a separation between the thing generating the markup and the JavaScript system applying functionality to that markup.

With this looser coupling between the data structures in Drupal and the frontend markup it can be significantly easier to refactor either side without huge unknown breakage.

And by going further and adding a js- prefix we're declaring to the world that this is a class only for JavaScript and not CSS: hands off!

Mar 12 2018
Mar 12

Extra quick tip for Drupal developers: Adding views to your page content is now incredibly straightforward:

$content['editor_tools']['view'] = [
  '#type' => 'view',
  '#name' => 'editor_tools',
  '#display_id' => 'embed',
  '#arguments' => [123],
];
And that's it! $content is my render array which I'll return and let Drupal render. I suspect most of the bits there are self-explanatory, but in case they aren't:

  • '#type' => 'view' is the magic that lets Drupal's rendering system know that this array represents a view to render.
  • '#name' is the machine name of the view you want to render.
  • '#display_id' is the display ID within the view to render.
  • '#arguments' is an optional array of arguments to pass to the view.

If you want it even easier than that, the views_embed_view() function will return the render array and handle access checks etc.:

$content['editor_tools']['view'] = views_embed_view('editor_tools', 'embed', 123);

Dec 07 2017
Dec 07

Ten years ago today Adrian Rossouw committed the first code for the Aegir project. ComputerMinds have been involved in Aegir for many of those years, particularly one of our senior developers: Steven Jones. We asked him some questions about it to mark the occasion.

When did you get involved in the Aegir project?

I went to a session at Drupalcon Washington DC in March 2009 and Adrian was presenting this mad thing: Drupal managing other Drupal sites. At ComputerMinds we were looking for something to help manage our Drupal hosting and help update sites and generally make things reliable. Aegir seemed like it might work, but after having a play with the 0.1 version it didn’t seem to actually make anything easier!

About a year later I installed it again, and it seemed much more mature and usable at that point. It was working well, and we managed that site and many others via Aegir.

I continued to make changes and additions to Aegir and in March 2011 I was added as an Aegir core maintainer.

What has Aegir allowed you to do that you wouldn’t have been able to do otherwise?

We landed a project to build a platform for a UK political party that would host almost all of their websites, basically a site for every one of their parliamentary candidates, and suddenly we needed to be able to create 300+ sites, run updates on them and generally keep them working.

We knew that Aegir would be able to handle managing that number of sites, but we needed a quick and easy way to create 300 sites. Because Aegir was ‘just’ Drupal and sites were represented as nodes, we wrote a simple integration with feeds module: Aegir feeds and imported a CSV of details about the sites into an Aegir instance. A few moments later we had 300 sites in Aegir that it queued up and installed.

If we’d written our own hosting system, then there’s no way we’d have had the reliability that Aegir offered, catching the odd site where upgrades failed etc. Aegir affords a Drupal developer like me a wonderful ease of extending the functionality, because, deep down, Aegir is ‘just’ Drupal.

What parts of the Aegir project have been particularly valuable/useful to you?

Aegir has had weekly scrum sessions on IRC for as long as I’ve known it, mostly for the maintainers to talk about what they’re doing and plan. They were a great way to get a feel for what the project was about and to get to know the people involved.

Mig5’s contributions, especially around documentation, have been incredibly valuable and even inspirational: topics like continuous deployment were not just dreams but mig5 laid out a clear path to achieve that with Aegir.

Aegir forced a shift in thinking, about what exactly your Drupal site was, what an instance of it was, what parts of your codebase did what and so on.

It’s been fun to do the open source thing and take code from one project and use it in Aegir. For example, I was always bothered that I had to wait for up to 1 minute for tasks to run to create a site, or migrate/update a site. I spotted another project that was using the waiting queue project to run tasks off a queue as quickly as possible, so I nabbed the code from there, wrote about 10 extra lines, and Hosting queue runner was created. Tasks then executed after a delay of at most 1 second.

Aegir has always been open to contribution and contributors are welcomed personally, being helped generously along the way.

How has your use of Aegir changed since you started using it?

We’ve moved away from using Aegir to host sites for most of our projects, but are still using it to manage over 1,000 sites. It's still essential for managing 'platform' projects that cover hundreds of sites.

It’s hard to pick an exact reason why we moved away from it, but mainly we’ve reverted to some simpler upgrade scripts for doing site deployments, using tools like Jenkins.

We’ve compromised on the ‘site safety’ aspects of deployments: we no longer have a pretty web UI for things and we now lean heavily on the technical expertise of our developers, but we have gained significant speed and comprehensibility of deployments etc.

What Aegir does during a Migrate task is a little opaque, whereas having a bash script with 20 lines of code is easier to understand.

Aegir is still awesome when you want to change how something works in the hosting environment across lots and lots of sites, like:

  • Configurable PHP versions per site
  • Let's Encrypt support
  • Writing custom config for Redis integrations

What do you think the Aegir project could look like, 10 years from now?

Aegir will become ‘headless’ and decentralised, having lots of instances working together to record, share and manage config. The Drupal UI will be a lightweight frontend to manage the Aegir cluster.

Alternatively, Aegir will become some sort of AI and start provisioning political cat gif websites for its own diabolical ends!

Aug 29 2017
Aug 29

In Drupal 7, we had menu_get_object() to get the object in code for whichever node, term, or whatever entity makes up the current page. But in Drupal 8, that has been replaced, and entities can usually be operated on in a generic way. If you know the type of entity you want, it's pretty simple. But I keep running into different suggestions of how to get the entity object when you don't know its type. Given that D8 allows us to deal with entities in a unified way, I thought there must be a good way! The code of core itself is usually the best place to look for examples of how to do common things. I found that the content translation module has a smart way of doing it, which I've adapted here for more general use:

use Drupal\Core\Entity\ContentEntityInterface;

/**
 * Helper function to extract the entity for the supplied route.
 *
 * @return \Drupal\Core\Entity\ContentEntityInterface|null
 */
function MYMODULE_get_route_entity() {
  $route_match = \Drupal::routeMatch();
  // Entity will be found in the route parameters.
  if (($route = $route_match->getRouteObject()) && ($parameters = $route->getOption('parameters'))) {
    // Determine if the current route represents an entity.
    foreach ($parameters as $name => $options) {
      if (isset($options['type']) && strpos($options['type'], 'entity:') === 0) {
        $entity = $route_match->getParameter($name);
        if ($entity instanceof ContentEntityInterface && $entity->hasLinkTemplate('canonical')) {
          return $entity;
        }
        // Since entity was found, no need to iterate further.
        return NULL;
      }
    }
  }
  return NULL;
}

// Example usage:
$entity = MYMODULE_get_route_entity();
$entity_type = $entity ? $entity->getEntityTypeId() : NULL;

Routes for entities keep the entity under a parameter on the route, with the key of the parameter being the entity type ID. Looping through the parameters to check for the common 'entity:...' option identifies the parameters that you need. The above code will only work for the current path. But if you can get hold of the route object for a given path/URL, then you could do something similar.
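To illustrate that last point, here's a rough sketch of adapting the idea for an arbitrary path. This is my own illustration rather than anything from core: it assumes the path resolves to a canonical entity route, where the parameter key happens to be the entity type ID, which is not guaranteed for every route.

$url = \Drupal\Core\Url::fromUserInput('/node/1');
if ($url->isRouted()) {
  foreach ($url->getRouteParameters() as $entity_type_id => $entity_id) {
    // Only canonical entity routes are guaranteed to key the parameter
    // by entity type ID, so treat this as a best-effort lookup.
    if (\Drupal::entityTypeManager()->hasDefinition($entity_type_id)) {
      $entity = \Drupal::entityTypeManager()->getStorage($entity_type_id)->load($entity_id);
    }
  }
}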

It seems a shame to me that this isn't any simpler! But hopefully this at least helps you understand a bit more about routes for entities.

Jun 20 2017
Jun 20

TL;DR: Define a schema for any bespoke configuration; it's not too hard. A schema is needed to make configuration translatable, but Drupal 8 will also validate your config against it, so it's still handy on non-translatable sites. Without a schema to ensure your configuration is valid, your code, or Drupal itself, can trip up.

Recently I updated a simple configuration setting via some PHP code, and Drupal immediately threw up an obscure error. It turned out to be because the custom setting that I updated didn't have a configuration schema.

Schemas are used to instruct Drupal 8 what format every piece of a configuration object should be in. For example: text, integers, an array mapping, etc. That means they can be used to validate configuration, keeping your code and Drupal's handling of the settings robust and predictable. It also allows the configuration to be translated, but that wasn't on my radar for this particular project.

If you're aware of the variable module for Drupal 7, then you can think of configuration schemas as the equivalent of hook_variable_info(), but written in YAML.

Here's an example of the sort of thing that's needed, to go in my_module/config/schema/my_module.schema.yml:

  # The 'message' and 'mode' keys here are illustrative; use your own
  # setting names.
  my_module.settings:
    type: config_object
    label: 'My custom settings'
    mapping:
      message:
        type: text
        label: 'Message to display on page XYZ'
      mode:
        type: integer
        label: 'Behave differently when this is 0, 1, or 2'

The pragmatic reality is that you may well get away fine without giving your configuration a schema. But you may also run into obscure issues like I did, which are very hard to understand without one. I could get around the problem by skipping validation, passing TRUE for the 'has_trusted_data' parameter of my $config->save() call. Clearing caches may also resolve your problem.
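For illustration, that workaround looks roughly like this (the 'my_module.settings' name and 'message' key are placeholders, not anything from a real module):

// Hypothetical config name and key, for illustration only.
$config = \Drupal::configFactory()->getEditable('my_module.settings');
$config->set('message', 'Hello world');
// Passing TRUE marks the data as trusted, so schema-based casting and
// validation are skipped on save.
$config->save(TRUE);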

But ultimately, those workarounds just plaster over the cracks. Set up a schema and you avoid those problems, and get robust validation for free. Hopefully my example YAML above shows how it can be quite simple to do.

As a bonus, PhpStorm can even give you autocomplete suggestions for configuration names that have a schema defined for them, when typing \Drupal::config('...')!

Mar 07 2017
Mar 07

This post is the first in a series about getting Drupal to run as a persistent server, responding to requests without bootstrapping each and every time.

This is how many other application frameworks and languages run: nodejs, Rails etc.

In those systems you start some instances of your application and then they do whatever bootstrapping they need to do and then they enter an endless loop waiting for requests.

I recently upgraded our internally hosted Redmine server to run on Ruby 2.3 and during that upgrade took a look at our NewRelic monitoring for the application. Some of the page requests had an average response time of 8ms. These were page requests for logged in users. That would be amazing performance for a Drupal page.

Performance like this changes the way you write applications. I wondered if Drupal could provide similar performance.

This series hasn't been written yet, I'm going to set out what I want to do with it, and we'll see where we get to. Feel free to post things in the comments etc. as we go along.


  1. Get DaaS working for just Drupal core. An existing project looks like it will provide the starting point, as other people have already done all the heavy lifting, hurrah for open source.
  2. Take one of our more complicated client websites and get that running DaaS too.
  3. Test that the output from a 'normal' Drupal and DaaS is identical. This will likely require writing some sort of test that goes and makes lots of page requests, changes settings etc.
  4. Run some performance tests to see how fast DaaS can be.
  5. See what issues crop up, memory leaks etc. and see what things can be done to mitigate those.
  6. Work out how deployments and distribution of the application would work.
Jan 17 2017
Jan 17

I've previously written about dynamic forms in Drupal, using #states to show or hide input boxes depending on other inputs. Since then, Drupal 7 and 8 have both got the ability to combine conditions with OR and XOR operators. This makes it possible to apply changes when a form element's value does not equal something, which is not obvious at all. It's easy to show something when a value does match. This example shows a text input box for specifying which colour is your favourite when you chose 'Other' from a dropdown list of favourite colours.

$form['other_favorite'] = [
  '#type' => 'textfield',
  '#title' => t('Please specify...'),
  '#states' => [
    // The action to take.
    'visible' => [
      // The element to evaluate the condition on => the condition.
      ':input[name="favorite_color"]' => ['value' => 'Other'],
    ],
  ],
];

But there's no 'opposite' of the value condition, e.g. to show an element when a text box has anything other than the original value in it. Using the 'XOR' condition (which only passes when some conditions are passed, but not all of them), you can combine the value condition with something that will always pass, in order to simulate it instead. Something like this, which shows a checkbox to confirm a change when you enter anything other than your previous favourite food:

$form['confirm_change'] = [
  '#type' => 'checkbox',
  '#title' => t('Yes, I really have changed my mind'),
  // Use XOR so that this element only shows if the favourite food is
  // not the existing value. States cannot check for a value not being
  // something, but by using XOR with a condition that should always be
  // true (i.e. the latter one), this will work. Note the extra level
  // of arrays that is required for using the XOR operator.
  '#states' => [
    'visible' => [
      [':input[name="favorite_food"]' => [
        'value' => $original_favorite_food,
      ]],
      'xor',
      [':input[name="favorite_food"]' => [
        // The checked property on a text input box will always come
        // out false.
        'checked' => FALSE,
      ]],
    ],
  ],
];

Hopefully the comments in that snippet make sense, but essentially it's about picking a second condition that will always pass, so that the combination only passes when the first condition fails. So in this case, whenever the value does not equal the original value. By the way, you can do much more than just show or hide form elements, for example, enabling or disabling - see the full list.

Jan 10 2017
Jan 10

In Drupal 8, there are many ways to interact with configuration from PHP. Most will allow you to write the config values, but some are read-only. Some will include 'overrides' from settings.php, or translations. There are many layers of abstraction, that each have different purposes, so it can be easy to run into subtle unintended problems (or more obvious errors!) if you pick a method that doesn't quite suit what you actually need to do. Here I'm going to outline some of them and when you might pick each one.

For almost all of these suggestions, you can call the ->get($sub_value) method on them to get specific sub-values out of the returned object/data. Also, wherever I have written $config_factory, that can be set in a number of ways. For example, either \Drupal::configFactory() (for global code, such as in module hooks), or preferably via dependency injection (which probably means passing the 'config.factory' service into the constructor to set a protected property to access it from). The $config_storage variable is similar, corresponding to the 'config.storage' service.

Here's a summary of my suggestions, by when you might want to use them:

  • With overrides (read-only): $config_factory->get($id) works even if the config doesn't exist; $config_factory->loadMultiple([$id]) only returns existing config.
  • No overrides (read-only): $config_factory->get($id)->getRawData() works even if the config doesn't exist; to be sure it exists, check !$config->isNew() or use $config_storage->read($id) first.
  • Writable (no overrides): $config_factory->getEditable($id) works even if the config doesn't exist; to be sure it exists, check !$config->isNew() or use $config_storage->read($id) first.

That table deliberately leaves out a suggestion for getting writable config with overrides. That is because any type of config override may be updated in different ways. You may have to change values in the settings.php file, which you cannot do from the UI. Or use the core Configuration Translation module. Writing to config, including their overridden values, is outside the scope of this article ... and is outside the way that config is supposed to be used anyway! Read the documentation on configuration overrides on drupal.org for more details, which includes how to deal with specific sets of overrides, like specific languages, or from modules.

Let's go into some detail about these ways of getting configuration.

1. $config_factory->get($id)
(or \Drupal::config($id) or $this->config($id))

$config_factory->get($id) (and its equivalents - see note below) is the simplest method, which will give you the current actual values of the config, including any translations or overrides from settings.php. But it gives you an 'immutable' configuration object, that can only be read, so is no good for updating. That is deliberate, since at least some sorts of config overrides can't be directly updated anyway. If you try to get a piece of config this way that doesn't exist, the config object you get can still be used, but won't hold any values. You can test for this by either checking that $config->isNew() is FALSE or simply that $config->get() is empty.
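As a quick sketch of that existence check (the 'my_module.settings' name is just a placeholder):

$config = \Drupal::config('my_module.settings');
// Either of these indicates the config does not actually exist.
if ($config->isNew() || empty($config->get())) {
  // Handle the missing configuration here.
}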

You can exclude overrides using the ->getRawData() method on the config object if you really want an immutable version without them.

Note: $config_factory->get($id) is also identical to \Drupal::config($id), which is even simpler, but should only be used from 'global' code like hooks in .module or .theme files.
Controllers that extend ControllerBase or forms that extend FormBase have ->config($id) methods that do almost the same thing too.
Forms that extend ConfigFormBase are a bit special though, in that config returned by that method for any names returned by their ->getEditableConfigNames() implementations will be editable.

2. $config_factory->getEditable($id)

This will get you an 'editable' (mutable) piece of config, so guarantees you can set & save values on it (or even delete it). That does mean it won't contain any overrides, so may not be the actual values that would get used for pages, e.g. from settings.php or translations. However, note that because it guarantees an editable object, you can call it for config that doesn't actually exist, which you might not want. See the above section about \Drupal::config() for checks you can use, or skip to my next suggestions.
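A minimal sketch of updating a value this way, using core's 'system.site' config as a familiar example:

$config = \Drupal::configFactory()->getEditable('system.site');
// Set a sub-value and persist it; overrides are not included here, so
// this writes to (and reads from) the stored config values only.
$config->set('name', 'My renamed site')->save();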

On this editable config object, you can even call the ->getOriginal() method to get back to the original loaded data from before any values were changed, using its parameters to do so for specific sub-values or to ignore any overrides.

3. $config_factory->loadMultiple([$id])

If you want to check for the existence of some config, but don't need to make changes to it, this is the one to use. It might have been nicer if there was a ->load() method for just loading single items of config, but loadMultiple() will have to do, just make sure to pass it an array :-) You will get an array back, which will be empty if the config doesn't exist, and otherwise contains immutable config objects, just like my first suggestion from above, $config_factory->get($id). It will also include any overrides like translations.
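For example, a short sketch of that existence check, again using core's 'system.site' config:

$configs = \Drupal::configFactory()->loadMultiple(['system.site']);
if (isset($configs['system.site'])) {
  // An immutable config object, including any overrides.
  $site_name = $configs['system.site']->get('name');
}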

4. $config_storage->read($id)

As mentioned previously, you can use the ->isNew() or ->get() methods to check when config returned from its factory does not actually exist. If you really want to, you can interrogate the config storage service directly, to get back the array of existing config data, or FALSE if it doesn't exist. This gets a bit more low-level though, so I'd really recommend against it. As it returns an array, if you actually want to save into a config object, the best option is to get the editable version from the factory as above and check that ->isNew() is FALSE, or look into what the factory does with the data read from its storage.

Hopefully that helps you understand which method you might want to use for various needs, and why.

Dec 20 2016
Dec 20

Over the past couple of years I have become more active in issues on drupal.org, in which time I have honed a selection of commands that I regularly use in the creation and application of patches. Here is a run-down of my most useful commands.

The examples in this article will focus on patches for Drupal core, but most of it (if not all) is equally relevant for contrib modules too.

I strongly recommend you read and understand the Git Instructions on drupal.org, which cover many of the basics (e.g. how to set up the repo). This article will overlap with them on certain points, but includes a few alternatives too.

I am not claiming this is necessarily the best solution, rather the best I have come across so far - I am always learning.

Obtain latest code

If you are intending to create a patch for submission to drupal.org then you should make sure you have the latest version of the code and are working on the correct branch:

git pull --rebase
git checkout 8.3.x

Most of my current patches are for the 8.3.x branch, but replace this with the name of which ever branch you are working on.

Create a feature branch

I usually have multiple issues open at the same time ("Needs review", RTBC, etc.) and, because it is rare for me to fix an issue with a single commit, I create a feature branch for each issue:

git checkout -b feature/2669978-migrate_d7_menu_links

These feature branches will only ever be local so you can use any naming convention you like, but I prefer feature/[issue_id]-[description].

Feature branches can also make it easier to create interdiffs (see below), so I normally make a commit to a feature branch for every patch I submit to drupal.org.

Apply existing patch

There are times when you may want to make changes/improvements to an existing patch, in which case you will need to apply that patch to your code base. Drupal.org recommends the use of

git apply -v migrate_d7_menu_links-2669978-43.patch

but that requires you to download the patch to your working directory (and remember to delete it later) so I prefer the following:

curl '[patch_url]' | git apply -v

which works in exactly the same manner, but uses the online version of the patch.

Patch failed to apply

Sometimes git apply will not work, often when a re-roll is required. In this situation I use:

curl '[patch_url]' | patch -p1

which will apply as much of the patch as possible and create .rej files for those hunks that failed to apply making it easier to fix.

Create a patch

Once you have made the relevant code changes to fix your current issue (and fully tested it, of course!) it is time to create the patch. I usually use the simplest option, as recommended by drupal.org:

git diff > [description]-[issue_id]-[comment_number].patch

but here are a couple of alternatives worth remembering. If you do not want to include every change you have made to your codebase then just stage the changes you require and run:

git diff --cached > [description]-[issue_id]-[comment_number].patch

Or if this is not the first commit since branching from core (e.g. a subsequent patch), often on a feature branch, then use:

git diff [old_sha] [new_sha] > [description]-[issue_id]-[comment_number].patch

This command can also be used if you have already committed your changes.

Create an interdiff

To quote from drupal.org's Creating an interdiff page:

An interdiff is a text file in patch format that describes the changes between two versions of a patch.

If you have changed an existing patch then it is recommended that you create an interdiff to make life easier for code reviewers.

Normally I use:

git diff [previous_patch_sha] > interdiff-[issue_id]-[old_comment_number]-[new_comment_number].txt

This explains why I prefer to make a commit to my feature branch for every patch (see above).

If this is the only change since the previous patch then you can instead use:

git diff > interdiff-[issue_id]-[old_comment_number]-[new_comment_number].txt

There are some cases, most notably when re-rolling a patch that has failed to apply, when the previous options are not possible. In this case I recommend using the interdiff command that is included with patchutils:

interdiff old.patch new.patch > interdiff-[issue_id]-[old_comment_number]-[new_comment_number].txt

If you have any better/alternative commands that you prefer to use then tell me about them in the comments below.

Nov 29 2016
Nov 29

In a recent Drupal 8 project, one of the site's vocabularies had several thousand terms in it (representing airports), which caused its admin listing page to run out of memory before it could render. I wanted to solve this without affecting any other vocabularies on the site, and improve the listing itself along the way to be more useful, with filters to make searching it much easier. The original listing is not a view, and loads much more than it needs to. For comparison, here's an example of how core's original vocabulary listing can look:

[Image: Example of the original core vocabulary listing]

...and then here's the customised page for airports, with the filter and an extra column (corresponding to the term names and descriptions) that I wanted to head towards, below. You can download an export of the view configuration that I ended up using (for importing on a D8 site at /admin/config/development/configuration/single/import, though you will need an 'airports' vocabulary to exist).

[Image: Customised airports view]

I thought of a few approaches:

Just limit the number of terms listed

Change the terms_per_page_admin setting in the taxonomy.settings config, which defaults to 100 (there is no UI for this):

\Drupal::configFactory()->getEditable('taxonomy.settings')->set('terms_per_page_admin', '25')->save();

But this would have affected all vocabularies, and couldn't allow improvements like added filters.

Override the controller for the existing route

Use my own controller on the existing entity.taxonomy_vocabulary.overview_form route (probably via a RouteSubscriber). So the page would no longer directly use the form that is specified in the taxonomy.routing.yml file, but instead a custom controller that would call through to that form for all other vocabularies and then do something else (probably embed a view) for my airports taxonomy. But that would mean changing the very definition of the existing route, which I want to remain in place for other vocabularies really. That might have been okay, but the route I went down was as follows...

Use a new view just for the vocabulary

Create a new page with views to be used on the specific path for airports (i.e. '/admin/structure/taxonomy/manage/airports/overview'). Behind the scenes, that means a new route would be set up, so there would be two routes valid for one path, but the more generic one from core would use a wildcard, whilst mine would specify the vocabulary in its path so it gets used ahead of core's listing, just for this vocabulary. The definition of the existing route from core could then remain untouched.

But of course it's never quite as easy as you think...

Once I'd set up this view, with the nice filters I wanted to add, and visited the taxonomy listing, I got a cryptic error message:

Symfony\Component\Routing\Exception\MissingMandatoryParametersException: Some mandatory parameters are missing ("taxonomy_vocabulary") to generate a URL for route "entity.taxonomy_vocabulary.devel_load". in Drupal\Core\Routing\UrlGenerator->doGenerate() (line 171 of core/lib/Drupal/Core/Routing/UrlGenerator.php).

This turns out to be caused by the view having a specific path, that does not use the vocabulary wildcard that one of the tabs on the page expects. This is the sort of thing I thought I'd avoid by leaving the original route intact. Disabling the Devel module, which provided this particular tab, does solve that issue, but the view now has no tabs, because the tabs are attached using the original route name, not mine.

[Image: No tabs on vocabulary listing]

So, where to now? I thought of three ideas:

  1. Force my view to use the same route name as the original route. But then I would lose the original route, which I don't want to do for the reasons outlined above already.

  2. Loop over all tabs that are attached to the original route, in a hook_local_tasks_alter(), to attach them to my route instead. But then all the other vocabularies would lose their tabs too! I could work around that with some more wrapping code, but this idea became less attractive as I thought about it.

  3. Attach my view as a tab to the original vocabulary listing. This would create a duplicate 'List' tab, but I figured that would be the easiest problem to solve.

[Image: Duplicated 'List' tab]

That last option needed a few hooks to be implemented, and to remove the duplicate tab that you can see in the above screenshot...


/**
 * Implements hook_local_tasks_alter().
 */
function MYMODULE_local_tasks_alter(&$local_tasks) {
  // Views places the airports view below the vocab edit form route, but that
  // then stops the fields UI tabs showing up.
  if (isset($local_tasks['views_view:view.airports.page_1'])) {
    $local_tasks['views_view:view.airports.page_1']['base_route'] = 'entity.taxonomy_vocabulary.overview_form';
  }
}

/**
 * Implements hook_module_implements_alter().
 */
function MYMODULE_module_implements_alter(&$implementations, $hook) {
  if ($hook === 'local_tasks_alter') {
    // Move MYMODULE_local_tasks_alter() to run after
    // views_local_tasks_alter() as that sets up the base routes.
    $group = $implementations['MYMODULE'];
    unset($implementations['MYMODULE']);
    $implementations['MYMODULE'] = $group;
  }
}


This one ensures only one of the routes that is valid for the 'List' tab actually shows. Otherwise both tabs will show, despite them both linking to the same URL!

/**
 * Implements hook_menu_local_tasks_alter().
 */
function MYMODULE_menu_local_tasks_alter(&$data, $route_name, $cacheability) {
  if (isset($data['tabs'][0]['views_view:view.airports.page_1'])) {
    /** @var \Drupal\Core\Url $url */
    if (isset($data['tabs'][0]['entity.taxonomy_vocabulary.overview_form']['#link']['url'])) {
      $url = $data['tabs'][0]['entity.taxonomy_vocabulary.overview_form']['#link']['url'];
      $params = $url->getRouteParameters();
      if (isset($params['taxonomy_vocabulary']) && $params['taxonomy_vocabulary'] === 'airports') {
        // Airports vocabulary: hide the original overview tab, leaving
        // the view's tab.
        unset($data['tabs'][0]['entity.taxonomy_vocabulary.overview_form']);
      }
      else {
        // Any other vocabulary: hide the view's tab instead.
        unset($data['tabs'][0]['views_view:view.airports.page_1']);
      }
    }
    else {
      unset($data['tabs'][0]['views_view:view.airports.page_1']);
    }
  }
}
This now allows the new custom filterable view and all the tabs to show up correctly for my airports vocabulary, and leave all other taxonomies as they were. But to have it working with the dynamically-generated tabs that the Devel module adds, I still needed a little more secret sauce. Potentially other modules could be doing similar (and I wanted to have Devel enabled!), so I had to go further down the rabbit hole...

Injecting a raw variable into a route

The route provided by core's taxonomy module uses this definition:

entity.taxonomy_vocabulary.overview_form:
  path: '/admin/structure/taxonomy/manage/{taxonomy_vocabulary}/overview'

See the {taxonomy_vocabulary} part? That's a wildcard that matches the particular vocab to be listed, and the Devel module used it to generate its tabs. I needed that wildcard 'variable' to be defined on the route for my new views page, and for it to have the value 'airports'. This meant altering the definition of the route with a RouteSubscriber, but also then injecting the variable's value at run time, or more specifically, at the point of reacting to the request. RouteSubscribers are just event subscribers, and event subscribers can react to multiple events, so I added a method to listen to the KernelEvents::REQUEST event, since that's when a route would be matched on visiting any page on a Drupal 8 site. Here's the service definition, which goes in MYMODULE.services.yml, and then the whole code of the class file (which is also listed at the end of this article for download):

services:
  # The service name here is arbitrary.
  MYMODULE.route_subscriber:
    class: Drupal\MYMODULE\EventSubscriber\RouteSubscriber
    tags:
      - { name: 'event_subscriber' }

namespace Drupal\MYMODULE\EventSubscriber;

use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Cmf\Component\Routing\RouteObjectInterface;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;
use Symfony\Component\Routing\RouteCollection;

/**
 * Adds the unused parameters of the regular taxonomy overview to the current
 * route match object when it is the airports view so that other routes
 * (specifically, for the Devel load local task) can still find them.
 */
class RouteSubscriber extends RouteSubscriberBase {

  /**
   * Adds a default to the airports view to match the original vocabulary
   * overview.
   *
   * @param \Symfony\Component\Routing\RouteCollection $collection
   *   The route collection for adding routes.
   */
  protected function alterRoutes(RouteCollection $collection) {
    if ($route = $collection->get('view.airports.page_1')) {
      $route->setDefault('taxonomy_vocabulary', 'airports');
    }
  }

  /**
   * Adds variables to the current route match object if it is the airports
   * view.
   *
   * @param \Symfony\Component\HttpKernel\Event\GetResponseEvent $event
   *   The response event, which contains the current request.
   */
  public function onKernelRequest(GetResponseEvent $event) {
    $request = $event->getRequest();
    // Deliberately avoid calling \Drupal::routeMatch()->getRouteName() because
    // that will instantiate a protected route match object that will not have
    // the raw variable we want to add.
    if ($request->attributes->get(RouteObjectInterface::ROUTE_NAME) === 'view.airports.page_1') {
      if ($raw = $request->attributes->get('_raw_variables', array())) {
        $raw->add(array('taxonomy_vocabulary' => 'airports'));
        $request->attributes->set('_raw_variables', $raw);
      }
    }
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events = parent::getSubscribedEvents();

    // The route object attribute will have been set in
    // RouterListener::onKernelRequest(), which has a priority of 32.
    $events[KernelEvents::REQUEST][] = array('onKernelRequest', 31);

    return $events;
  }

}

The class file should be placed at MYMODULE/src/EventSubscriber/RouteSubscriber.php. The alterRoutes() method is the normal one for a RouteSubscriber. That's where I alter the definition of the route (if it exists, since it won't until the view is actually created!). Then I've added an onKernelRequest() method, with a priority that means it runs after the request has been matched to a route. This sets the 'taxonomy_vocabulary' raw variable on the incoming request.

Admittedly, that last part feels quite wrong! I'd love to know if there's a better way of doing all this. Perhaps I should have gone down the idea of swapping out the controller for the original route after all? But should you ever need to replace the listing for a specific vocabulary, at least you can now follow on from this.

Debugging through the Drupal & Symfony internals where requests are matched to routes is not for the faint-hearted! I've learnt a lot along the way, which hopefully means I'll be ready for my next obscure D8 challenge!


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
