Aug 19 2019

This is a short story about an interesting problem we were having with the Feeds module and the Feeds directory fetcher module in Drupal 7.

Background on the use of Feeds

Feeds on this site is used to ingest XML from a third-party source (Reuters). The feed ingests perhaps a couple of hundred articles per day. There can be updates to existing imported articles as well, but typically they are only updated on the day the article is ingested.

Feeds worked well for a few years, and then all of a sudden the ingests started to fail. The failure was only on production; in the lower environments, ingestion worked as expected.

The bizarre error

On production we were experiencing the error during import:

PDOStatement::execute(): MySQL server has gone away    database.inc:2227 [warning]
PDOStatement::execute(): Error reading result set's header    database.inc:2227 [warning]
PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away [error]

The error is not so much that the database server is not alive; rather, PHP's connection to the database has been severed after exceeding MySQL's wait_timeout value.

On Acquia, this typically occurs only on production when you need to read and write to the shared filesystem a lot. In the lower environments the filesystem is local disk (as those environments are not clustered), so access is much faster. On production, the public filesystem is a network file share, which is slower.
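If you suspect this failure mode, you can ask MySQL directly what timeout you are working against. A minimal sketch using PDO; the connection credentials here are placeholders:

```php
<?php

// Read MySQL's session wait_timeout: the number of seconds a connection
// may sit idle before the server closes it. Credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=drupal', 'user', 'password');
$row = $pdo->query("SHOW SESSION VARIABLES LIKE 'wait_timeout'")
  ->fetch(PDO::FETCH_ASSOC);
print $row['Value'] . "\n";
```

If a single operation (such as scanning a slow filesystem) takes longer than this value, the next query Drupal issues will fail with "MySQL server has gone away".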

Going down the rabbit hole

Working out why Feeds wanted to read and/or write so many files was the next question, and one thing immediately stood out: the sheer size of the config column in the feeds_source table:

mysql> SELECT id,SUM(char_length(config))/1048576 AS size FROM feeds_source GROUP BY id;
+-------------------------------------+---------+
| id                                  | size    |
+-------------------------------------+---------+
| apworldcup_article                  |  0.0001 |
| blogs_photo_import                  |  0.0003 |
| csv_infographics                    |  0.0002 |
| photo_feed                          |  0.0002 |
| po_feeds_prestige_article           |  1.5412 |
| po_feeds_prestige_gallery           |  1.5410 |
| po_feeds_prestige_photo             |  0.2279 |
| po_feeds_reuters_article            | 21.5086 |
| po_feeds_reuters_composite          | 41.9530 |
| po_feeds_reuters_photo              | 52.6076 |
| example_line_feed_article           |  0.0002 |
| example_line_feed_associate_article |  0.0001 |
| example_line_feed_blogs             |  0.0003 |
| example_line_feed_gallery           |  0.0002 |
| example_line_feed_photo             |  0.0001 |
| example_line_feed_video             |  0.0002 |
| example_line_youtube_feed           |  0.0003 |
+-------------------------------------+---------+
What 52 MB of ASCII looks like in a single cell.

Having to deserialize 52 MB of ASCII in PHP is bad enough.

The next step was dumping the value of the config column for a single row:

drush --uri=www.example.com sqlq 'SELECT config FROM feeds_source WHERE id = "po_feeds_reuters_photo"' > /tmp/po_feeds_reuters_photo.txt
Get the 55 MB of ASCII in a file for analysis

Then open the resulting file in vim:

"/tmp/po_feeds_reuters_photo.txt" 1L, 55163105C
Vim struggles to open any file that has 55 million characters on a single line

And sure enough, inside this config column was a reference to every single XML file ever imported, a cool ~450,000 files.

a:2:{s:31:"feeds_fetcher_directory_fetcher";a:3:{s:6:"source";s:23:"private://reuters/pass1";s:5:"reset";i:0;
s:18:"feed_files_fetched";a:457065:{
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WR_RTROPTT_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:96:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_LYNXMPEE630KJ_RTROPTP_0_USA-TRADE-CHINA.XML";i:1530693632;
s:97:"private://reuters/pass1/topnews/2018-07-04T083617Z_147681_KBE99T04E_RTROPTT-LNK_0_OUSBSM-LINK.XML";i:1530693632;
s:102:"private://reuters/pass1/topnews/2018-07-04T083658Z_1_KBN1JU0X2_RTROPTT_0_JAPAN-RETAIL-ZOZOTOWN-INT.XML";i:1530693632
457,065 is the array size in feed_files_fetched
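As a sanity check, you can unserialize the dump and count the array yourself. A quick sketch, assuming the dump file contains only the raw serialized value of the config column:

```php
<?php

// Count the entries the directory fetcher is tracking. Assumes the file
// holds only the raw serialized value of the config column.
$config = unserialize(file_get_contents('/tmp/po_feeds_reuters_photo.txt'));
$fetched = $config['feeds_fetcher_directory_fetcher']['feed_files_fetched'];
print count($fetched) . "\n";
```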

So this was the root cause of the problem: Drupal was attempting to stat() ~450,000 files that no longer exist, and these files are mounted on a network file share. This process took longer than MySQL's wait_timeout, so MySQL closed the connection. When Drupal finally wanted to talk to the database again, the connection was gone.

Interestingly enough, the problem of the config column running out of space came up in 2012, and "the solution" was simply to change the type of the column so that it could store 4 GB of content. In hindsight, perhaps this was not the smartest solution.

Also in 2012, you see the comment from @valderama:

However, as feed_files_fetched saves all items which were already imported, it grows endless if you have a periodic import.

Great to see we are not the only people having this pain.

The solution

The simple solution to limp by is to increase the wait_timeout value of your Database connection. This gives Drupal more time to scan for the previously imported files prior to importing the new ones.

$databases['default']['default']['init_commands'] = [
  'wait_timeout' => "SET SESSION wait_timeout=1500",
];
Increasing MySQL's wait_timeout in Drupal's settings.php.

As you might guess, this is not a good long term solution for sites with a lot of imported content, or content that is continually being imported.

Instead we opted for a fairly quick update hook that loops through all of the items in the feed_files_fetched key and unsets the older items.

<?php

/**
 * @file
 * Install file.
 */

/**
 * Finds the position of the first matching needle in the haystack.
 *
 * @param string $haystack
 *   The string to search in.
 * @param array $needles
 *   The strings to search for.
 * @param int $offset
 *   The offset to start searching from.
 *
 * @return bool|int
 *   The position of the first match, or FALSE if no needle matches.
 *
 * @see https://www.sitepoint.com/community/t/strpos-with-multiple-characters/2004/2
 */
function multi_strpos($haystack, $needles, $offset = 0) {
  foreach ($needles as $n) {
    $position = strpos($haystack, $n, $offset);
    if ($position !== FALSE) {
      return $position;
    }
  }
  return FALSE;
}

/**
 * Implements hook_update_N().
 */
function example_reuters_update_7001() {
  $feedsSource = db_select("feeds_source", "fs")
    ->fields('fs', ['config'])
    ->condition('fs.id', 'po_feeds_reuters_photo')
    ->execute()
    ->fetchObject();

  $config = unserialize($feedsSource->config);

  // We only want to keep the last week's worth of imported articles in the
  // database for content updates.
  $cutoff_date = [];
  for ($i = 0; $i < 7; $i++) {
    $cutoff_date[] = date('Y-m-d', strtotime("-$i days"));
  }

  watchdog('example_reuters', 'FeedSource records before trim: @count', ['@count' => count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'])]);

  // We attempt to match based on the filename of the imported file. This works
  // as the files have a date in their filename.
  // e.g. '2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML'
  foreach ($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'] as $key => $source) {
    if (multi_strpos($key, $cutoff_date) === FALSE) {
      unset($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'][$key]);
    }
  }

  watchdog('example_reuters', 'FeedSource records after trim: @count', ['@count' => count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'])]);

  // Save back to the database.
  db_update('feeds_source')
    ->fields([
      'config' => serialize($config),
    ])
    ->condition('id', 'po_feeds_reuters_photo', '=')
    ->execute();
}

Before the code ran, there were > 450,000 items in the array; after, we were below 100. A massive decrease in database size.

More importantly, the importer now runs a lot quicker (as it is not scanning the shared filesystem for non-existent files).

Aug 01 2019

We’re thrilled to be attending Drupal Camp Pannonia from the 1st to 3rd August!

Ratomir and Dejan from the CTI Serbia office will be attending Drupal Camp to discover the biggest breakthroughs of the Open Source platform in 2019.


The Grand Terrace in Palić, where Drupal Camp Pannonia is held.

 

Held in Palić, Serbia, near the Hungarian border, Drupal Camp Pannonia brings together some of Europe's top Drupal experts for a weekend of knowledge sharing and Open Source collaboration.

CTI Digital has long worked with a global roster of clients serving international customers. In early 2019, we made a permanent investment in Europe and opened a Serbia office. 

We selected Serbia as the Drupal community is growing quickly and filled with world-class talent. Supporting Drupal Camp Pannonia is a crucial part of our investment in the European Drupal community. If you’d like to chat with Dejan or Ratomir about CTI Digital or Drupal, please email [email protected] to set up a meeting.


Dejan (Left) and Ratomir (Right)

We're pleased to announce Dejan (@dekisha007) is also conducting a session on Day 1, Friday 1st August, at 15:45. He'll be delving into a new front-end practice we have developed at CTI Digital using Pattern Lab.

Here's what to expect:

  • Pattern Lab and Atomic Design in Drupal
  • Tailwind CSS
  • Advantages of Vue.js over React in Drupal
  • Wrapping all of the above up with CTI’s custom base theme

Be sure to catch Dejan’s talk on day 1, along with a host of brilliant sessions and workshops by checking out the online schedule.

Or follow @dcpannonia and @CTIDigitalUK for all the action on Twitter.

Jul 28 2019

PHP 7.3.0 was released in December 2018, and brings with it a number of improvements in both performance and the language. As always with Drupal you need to strike a balance between adopting these new improvements early and running into issues that are not yet fixed by the community.

Why upgrade PHP to 7.3 over 7.2?

What hosting providers support PHP 7.3?

All the major players have support; here is how to configure it for each.

Acquia

Somewhere around April 2019 the option to choose PHP 7.3 was released. You can opt into this version by changing a value in Acquia Cloud. This can be done on a per environment basis.

The PHP version configuration screen for Acquia Cloud 

Pantheon

Pantheon has had support since April 2019 as well (see the announcement post). To change the version, you update your pantheon.yml file (see the docs on this).

# Put overrides to your pantheon.upstream.yml file here.
# For more information, see: https://pantheon.io/docs/pantheon-yml/
api_version: 1
php_version: 7.3
Example pantheon.yml file

On a side note, it is interesting that PHP 5.3 is still offered on Pantheon (it has been end of life for nearly 5 years).

Platform.sh

I am unsure when Platform.sh released PHP 7.3, but the process to enable it is very similar to Pantheon's: you update your .platform.app.yaml file (see the docs on this).

# .platform.app.yaml
type: "php:7.3"
Example .platform.app.yaml file

Dreamhost

PHP 7.3 is also available on Dreamhost, and can be chosen in a dropdown in their UI (see the docs on this).

The 'Manage Domains' section of Dreamhost

Dreamhost also wins an award for allowing the oldest version of PHP that I have seen in a while (PHP 5.2).

When can you upgrade to PHP 7.3?

Drupal 8

As of Drupal 8.6.4 (6th December 2018), PHP 7.3 is fully supported in Drupal core (change record). I have been running PHP 7.3 with Drupal 8 for a while now and have seen no issues, and this includes running some complex installation profiles such as Thunder and Lightning.

Any Drupal 8 site that is reasonably up to date should be fine with PHP 7.3 today.

Drupal 7

PHP 7.3 support is slated for the next release of Drupal 7, Drupal 7.68 (see the drupal.org issue); however, there are a number of related tasks that seem like deal breakers. There are also no daily tests running Drupal 7 against PHP 7.3.

It seems that, for the meantime, it is probably best to hold off on the PHP 7.3 upgrade until 7.68 is out the door and contributed modules have had a chance to catch up and cut new stable releases.

A simple search on Drupal.org yields the following modules that look like they may need work (more are certainly possible):

  • composer_manager (issue)
  • scald (issue) [now fixed and released]
  • video (issue)
  • search_api (issue) [now fixed and released]

Most of the issues seem to be related to this deprecation - Deprecate and remove continue targeting switch. If you know of any other modules that have issues, please let me know in the comments.

Drupal 6

For all you die-hard Drupal 6 fans out there (I know a few large websites still running it), you are going to be in for a rough ride. There is a PHP 7 branch of the d6lts Github repo, which is promising; however, the last commit was September 2018, so this does not bode well for PHP 7.3 support. I also doubt contributed modules are going to be up to scratch (drupal.org does not even list D6 versions of modules anymore).

To test this theory, I audited the current 6.x-2.x branch of views:

$ phpcs -p ~/projects/views --standard=PHPCompatibility --runtime-set testVersion 7.3
................................................W.W.WW.W....  60 / 261 (23%)
................................E........................... 120 / 261 (46%)
...................................................EE....... 180 / 261 (69%)
............................................................ 240 / 261 (92%)
.....................                                        261 / 261 (100%)

3 errors in views alone. The errors are show stoppers too

Function split() is deprecated since PHP 5.3 and removed since PHP 7.0; Use preg_split() instead
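The fix for this particular error is at least mechanical: explode() covers fixed delimiters, and preg_split() covers real patterns.

```php
<?php

// split() was removed in PHP 7.0. For a fixed delimiter, explode() is the
// drop-in replacement; preg_split() handles actual regular expressions.
$parts = explode(',', 'a,b,c');
print_r($parts); // ['a', 'b', 'c']

$words = preg_split('/\s+/', "foo  bar\tbaz");
print_r($words); // ['foo', 'bar', 'baz']
```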

If this is the state of one of the most popular modules for Drupal 7, then I doubt the lesser known modules will be any better.

If you are serious about supporting Drupal 6, it would pay to get in contact with My Drop Wizard, as they at least are providing support for people looking to adopt PHP 7.

Jul 15 2019

When sending email from your application, using a queuing process can reduce application response time and increase speed.

By sending the message to a queue instead of sending it directly at the end of the server response, you may achieve a better user experience. Once messages are in the queue, you just need a scheduled cron task to initiate the email sending.

How?

Queuing is simple in Drupal 8
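A hedged sketch of the pattern: at response time you push a payload into a named queue with `\Drupal::queue('example_email_queue')->createItem($item);`, and a QueueWorker plugin with the same ID drains it on cron. All names here (module, queue ID, payload keys) are hypothetical:

```php
<?php

namespace Drupal\example_mailer\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Sends queued emails during cron runs.
 *
 * The queue ID and module name are hypothetical.
 *
 * @QueueWorker(
 *   id = "example_email_queue",
 *   title = @Translation("Example email queue"),
 *   cron = {"time" = 30}
 * )
 */
class EmailQueueWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // Hand the queued payload to Drupal's mail manager.
    \Drupal::service('plugin.manager.mail')
      ->mail('example_mailer', 'notify', $data['to'], 'en', $data);
  }

}
```

Cron then processes up to 30 seconds' worth of queued items per run; if processItem() throws, the item is left in the queue and retried on a later run.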

Jul 08 2019

Part of my day job is to help tune the Cloudflare WAF for several customers. This blog post helps to summarise some of the default rules I will deploy to every Drupal (7 or 8) site as a base line.

The custom WAF rules in this blog post are written in YAML (for humans to read); if you want to create these rules via the API, you will need them in JSON format (see the end of this blog post for a sample API command).
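If you keep the rules as structured data, producing the JSON is trivial. A small sketch building one rule as a PHP array and encoding it; this uses a minimal subset of the fields from the sample API command at the end of the post:

```php
<?php

// Build the request body for the firewall rules API from structured data
// rather than hand-escaping JSON. One rule shown; the endpoint takes a list.
$rules = [
  [
    'description' => 'Drupal 7 Unfriendly URLs (bots)',
    'action' => 'block',
    'filter' => [
      'expression' => '(http.request.uri.query matches "q=user/register") or (http.request.uri.query matches "q=node/add")',
    ],
  ],
];
print json_encode($rules) . "\n";
```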

Default custom WAF rules

Unfriendly Drupal 7 URLs

I often see bots trying to hit URLs like /?q=node/add and /?q=user/register. These are the default unfriendly URLs to hit on Drupal 7 to see if user registration is enabled or someone has messed up the permissions table (and you can create content as an anonymous user). Needless to say, these requests are rubbish and add no value to your site, so let's block them.

description: 'Drupal 7 Unfriendly URLs (bots)'
action: block
filter:
  expression: '(http.request.uri.query matches "q=user/register") or (http.request.uri.query matches "q=node/add")'

Autodiscover

If your organisation has bought Microsoft Exchange, then your site will likely receive loads of requests (GET and POST) to Autodiscover endpoints, which just tie up resources on your application server serving 404s. I am yet to meet anyone that actually serves back real responses from a Drupal site for Autodiscover URLs. Blocking is a win here.

description: Autodiscover
action: block
filter:
  expression: '(http.request.uri.path matches "/autodiscover\.xml$") or (http.request.uri.path matches "/autodiscover\.src/")'

Wordpress

Seeing as Wordpress has a huge market share (34% of all websites), a lot of Drupal sites get caught up in its mindless (and endless) crawling. These rules will effectively remove all of this traffic from your site.

description: 'Wordpress PHP scripts'
action: block
filter:
  expression: '(http.request.uri.path matches "/wp-.*\.php$")'
description: 'Wordpress common folders (excluding content)'
action: block
filter:
  expression: '(http.request.uri.path matches "/wp-(admin|includes|json)/")'

I separate wp-content into its own rule, as you may want to disable this rule if you are migrating from an old Wordpress site (and want to put redirects in place, for instance).

description: 'Wordpress content folder'
action: block
filter:
  expression: '(http.request.uri.path matches "/wp-content/")'

SQLi

I have seen several instances in the past where obvious SQLi was being attempted and the default Cloudflare WAF rules were not intercepting it. This custom WAF rule attempts to fill that gap.

description: 'SQLi in URL'
action: block
filter:
  expression: '(http.request.uri.path contains "select unhex") or (http.request.uri.path contains "select name_const") or (http.request.uri.path contains "unhex(hex(version()))") or (http.request.uri.path contains "union select") or (http.request.uri.path contains "select concat")'

Drupal 8 install script

Drupal 8's default install script will expose the major, minor and patch version of Drupal you are running. This is bad for a lot of reasons.

Drupal 8's default install screen exposes far too much information

It is better to just remove these requests from your Drupal site altogether. Note, this is not a replacement for upgrading Drupal, it is just to make fingerprinting a little harder.

description: 'Install script'
action: block
filter:
  expression: '(http.request.uri.path eq "/core/install.php")'

Microsoft Office and Skype for Business

Microsoft sure is good at making products that attempt to DoS its own customers' websites. These requests are always POST requests, often to your homepage, and you need partial string matching on the user agent, as it changes with the version of Office/Skype being run.

In large organisations, I have seen the number of requests here reach the hundreds of thousands per day.

description: 'Microsoft Office/Skype for Business POST requests'
action: block
filter:
  expression: '(http.request.method eq "POST") and (http.user_agent matches "Microsoft Office" or http.user_agent matches "Skype for Business")'

Microsoft ActiveSync

Yet another Microsoft product that, for reasons unknown, tries to hit a magic endpoint that doesn't exist.

description: 'Microsoft Active Sync'
action: block
filter:
  expression: '(http.request.uri.path eq "/Microsoft-Server-ActiveSync")'

Using the Cloudflare API to import custom WAF rules

It can be a pain to manually point and click a few hundred times per zone to import the above rules. Instead, you would be better off using the API. Here is a sample cURL command you can use to import all of the above rules in one go.

You will need to replace the redacted sections with your details.

curl 'https://api.cloudflare.com/client/v4/zones/XXXXXXXXXXXXXX/firewall/rules' \
  -H 'X-Auth-Email: XXXXXXXXXXXXXX' \
  -H 'X-Auth-Key: XXXXXXXXXXXXXX' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Accept-Encoding: gzip' \
  -X POST \
  -d '[{"ref":"","description":"Autodiscover","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/autodiscover\\.xml$\") or (http.request.uri.path matches \"\/autodiscover\\.src\/\")"}},{"ref":"","description":"Drupal 7 Unfriendly URLs (bots)","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.query matches \"q=user\/register\") or (http.request.uri.query matches \"q=node\/add\")"}},{"ref":"","description":"Install script","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path eq \"\/core\/install.php\")"}},{"ref":"","description":"Microsoft Active Sync","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path eq \"\/Microsoft-Server-ActiveSync\")"}},{"ref":"","description":"Microsoft Office\/Skype for Business POST requests","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.method eq \"POST\") and (http.user_agent matches \"Microsoft Office\" or http.user_agent matches \"Skype for Business\")"}},{"ref":"","description":"SQLi in URL","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path contains \"select unhex\") or (http.request.uri.path contains \"select name_const\") or (http.request.uri.path contains \"unhex(hex(version()))\") or (http.request.uri.path contains \"union select\") or (http.request.uri.path contains \"select concat\")"}},{"ref":"","description":"Wordpress common folders (excluding content)","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/wp-(admin|includes|json)\/\")"}},{"ref":"","description":"Wordpress content folder","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/wp-content\/\")"}},{"ref":"","description":"Wordpress PHP scripts","paused":false,"action":"block","priority":null,"filter":{"expression":"(http.request.uri.path matches \"\/wp-.*\\.php$\")"}}]'

How do you know the above rules are working?

Visit the firewall overview tab in Cloudflare's UI to see how many requests are being intercepted by the above rules.

Cloudflare's firewall overview screen showing the custom WAF rules in action

Final thoughts

The above custom WAF rules are likely not the only custom WAF rules you will need for any given Drupal site, but it should at least be a good start. Let me know in the comments if you have any custom WAF rules that you always deploy. I would be keen to update this blog post with additional rules from the community.

This is likely the first post in a series of blog posts on customising Cloudflare to suit your Drupal site. If you want to stay up to date - subscribe to the RSS feed, sign up for email updates, or follow us on Twitter.

Jun 21 2019

I am working with a customer who is looking to go through a JSON:API upgrade: from version 1.x on Drupal 8.6.x to 2.x, and then ultimately to Drupal 8.7.x (where it is bundled into core).

As this upgrade will involve many moving parts, and it is critical to not break any existing integrations (e.g. mobile applications etc), having basic end-to-end tests over the API endpoints is essential.

In the past I have written a lot about CasperJS, and since then a number of more modern frameworks have emerged for end-to-end testing. For the last year or so, I have been involved with Cypress.

I won't go too much in depth about Cypress in this blog post (I will likely post more in the coming months), instead I want to focus specifically on JSON:API testing using Cypress.

In this basic test, I just wanted to hit some known valid endpoints, and ensure the response was roughly OK.

Rather than rinse and repeat a lot of boilerplate code for every API endpoint, I wrote a custom Cypress command which abstracts all of this away into a convenient function.

Below is what the spec file looks like (the test definition). It is very clean, and is mostly just the JSON:API paths.

describe('JSON:API tests.', () => {

    it('Agents JSON:API tests.', () => {
        cy.expectValidJsonWithMinimumLength('/jsonapi/node/agent?_format=json&include=field_agent_containers,field_agent_containers.field_cont_storage_conditions&page[limit]=18', 6);
        cy.expectValidJsonWithMinimumLength('/jsonapi/node/agent?_format=json&include=field_agent_containers,field_agent_containers.field_cont_storage_conditions&page[limit]=18&page[offset]=72', 0);
    });
    
    it('Episodes JSON:API tests.', () => {
        cy.expectValidJsonWithMinimumLength('/jsonapi/node/episode?fields[file--file]=uri,url&filter[field_episode_podcast.nid][value]=4976&include=field_episode_podcast,field_episode_audio,field_episode_audio.field_media_audio_file,field_episode_audio.thumbnail,field_image,field_image.image', 6);
    });

});
jsonapi.spec.js

As for the custom command implementation, it is fairly straightforward. The basic checks are:

  • Ensure the response is an HTTP 200
  • Ensure the content-type is valid for JSON:API
  • Ensure there is a response body and it is valid JSON
  • Enforce a minimum number of entities you expect to be returned
  • Check for certain properties in those returned entities.
Cypress.Commands.add('expectValidJsonWithMinimumLength', (url, length) => {
    return cy.request({
        method: 'GET',
        url: url,
        followRedirect: false,
        headers: {
            'accept': 'application/json'
        }
    })
    .then((response) => {
        // Parse JSON the body.
        let body = JSON.parse(response.body);
        expect(response.status).to.eq(200);
        expect(response.headers['content-type']).to.eq('application/vnd.api+json');
        cy.log(body);
        expect(response.body).to.not.be.null;
        expect(body.data).to.have.length.of.at.least(length);

        // Ensure certain properties are present.
        body.data.forEach(function (item) {
            expect(item).to.have.all.keys('type', 'id', 'attributes', 'relationships', 'links');
            ['changed', 'created', 'default_langcode', 'langcode', 'moderation_state', 'nid', 'path', 'promote', 'revision_log', 'revision_timestamp', 'status', 'sticky', 'title', 'uuid', 'vid'].forEach((key) => {
                expect(item['attributes']).to.have.property(key);
            });
        });
    });

});
commands.js

One of the neat things in this function is that it logs the parsed JSON response with cy.log(body), which allows you to inspect the response in Chrome. This makes it easy to extend the test function to meet your own needs (as you can see the full entity properties and fields).

Cypress with a GUI can show you detailed log information

Using Cypress is like having an extra pair of eyes on the Drupal upgrade. Over time Cypress will end up saving us a lot of developer time (and therefore money). The tests will be in place forever, and so regressions can be spotted much sooner (ideally in local development) and therefore fixed much faster.

If you do JSON:API testing with Cypress I would be keen to know if you have any tips and tricks.

Apr 30 2019

Drupal is to release its latest version, Drupal 8.7, on the 1st May 2019. Review the entire Drupal product roadmap here.

Drupal 8.7 is a significant update for Marketers, Copywriters, Site Managers, and Creatives. It brings a new Media Library and Layout Builder, Drupal's first drag-and-drop content manager, recently revealed at DrupalCon Seattle 2019.

DrupalCon Seattle 2019

This update sees Drupal 8 retain its relevance in an emerging world of instant site builders, like Squarespace and Wix. Where Drupal has always stood out for its back-end capabilities and flexibility with API integrations, the new Layout Builder empowers Content Managers and Marketers. The user-friendly system allows Content Editors to maintain control over their content, throughout its journey to the end user. It also provides enhanced reactivity to the customer experience, as Site Managers can make edits without the assistance of a developer.

Of course, this isn’t just a standard drag-and-drop editor. Drupal 8.7 comes with a whole host of features you won’t find on low-end editors. These include instant multi-page management and the flexibility of powerful editorial workflows.


What is the Drupal Layout Builder?

The Drupal Layout Builder is a drag-and-drop content management tool. It allows non-technical individuals to design layouts in a structured way that adheres to the standard across the site. The new Layout Builder encourages users to take control, introducing blocks which help them to achieve designs and functionalities previously only editable in the developer's domain.

Key features include:

  • Modular template builder
  • Drag-and-drop paragraph editor
  • Permissions workflows
  • World-Class Accessibility 

Drupal 8.7 Layout Builder Recording

 

Simple, Reactive Page Management

Using the layout builder, Site Managers and Marketers can create new pages from templates. To build on these templates, users can insert new blocks from the pop-out sidebar that do not exist elsewhere on the site. Once the Site Manager has added a block, they can preview it instantly. At this point, if the editor isn’t happy, they can swap out blocks or replace images, text, buttons and more!

Users have recognised that it can be challenging to move huge, pre-built blocks and paragraphs in full preview. Using the preview toggle, users can now switch out of the full ‘Content Preview’ mode to a slicker block view. This simplified interface makes reordering blocks quick and easy. This differs from alternative drag-and-drop editors as the collapsed block editor sits within the same page. Therefore, editors can drag blocks across columns, as well as up and down.


Accessibility

The Drupal project has a long-standing commitment to accessibility and has been a leader in accessibility for many years. In 2018 Drupal Founder, Dries Buytaert, reaffirmed this commitment by stating that the forthcoming Layout Builder must be fully accessible.

The full 8.7 system is navigable by keyboard, in order to conform to Web Content Accessibility Guidelines, as recommended by the World Wide Web Consortium. This means that the drag-and-drop editor was designed with consideration for Site Managers, Marketers and Developers amongst us with specific accessibility requirements. Ultimately, the latest Drupal update will make agency life and in-house marketing that much more accessible to everyone.

Navigating Layout Builder by Keyboard

Workflows

The Layout Builder supports essential activities such as adding, saving and previewing content, or publishing data. The latest update also supports more complex features, such as content staging or approval workflows, using granular controls and notifications for the governance of each layout. This is a great win for marketing teams, as it de-centralises the content management process. Site managers can submit site updates safely, with version controls and final permission escalation.

For more complex updates, Drupal 8.7 supports workflows which push pages through multiple teams. For example, an initial submission which is then passed via testing teams, SEO managers, translation and localisation experts for international audiences, and final senior sign off.


Templates

The Layout Builder also includes a templating feature, useful for sites with large quantities of content. For enterprise sites, Content Managers can edit collections of pages all at once using the Layout Template Editor. Not only can they be edited in unison, they can also be rearranged while retaining their unique content. The ability to re-structure multiple pages at once saves a huge amount of time when one block needs to be changed across multiple locations.

The Layout Builder in template editing mode

The Drupal Layout Builder was built entirely on Open Source contributions made by 123 contributors and 68 worldwide organisations.

Drupal 8.7 Layout Builder Contributors

The 123 contributors (DrupalCon Seattle 2019: Driesnote)

 

Media Library

Drupal 8.7 included the release of a Media Library, to make asset management faster for Site Managers, particularly within the drag-and-drop format.

[embedded content]

 The Media Library in action

 

The Media Library is a pop-out window from which images and video can be inserted into the webpage. The library pulls content from the organisation’s existing bank of images, hosted within Drupal. Or, users can upload media directly from their cloud or hard drive.

The addition of the Media Library allows Site Managers to insert or change images or video more efficiently than ever. The user selects the area where the image will be added and opens the Media Library. They can then select from existing images or upload multiple new assets at once. Once uploaded, users can remove or reorder any images ready for import. Once happy with the images, the user can add metadata and any extra information (like links), or change the format to video.

The Media Library was made possible by Open Source contributions from 310 individuals and 122 organisations.

Drupal 8.7 Media Library Contributors

The 310 contributors (DrupalCon Seattle 2019: Driesnote)

 

Technical updates

There have also been a host of additional updates and tweaks to Drupal, improving its speed and background functionality. Such updates include adding JSON:API as a core module and providing a consistent way of displaying JavaScript Messages. For the full list of technical updates, visit the Drupal website here.

What’s Next?

Administration UI - Drupal 8.8 or 8.9

The administrative side of Drupal, where site managers navigate the back end and manage content, has been close to untouched for the past 10 years.

The User Interface (UI) is currently being redeveloped to align with and reflect the software behind Drupal, giving it a modern look and feel. This includes better use of space (and white space) and more contrasting features. The additional space also makes it more accessible, and the admin interface will receive a WYSIWYG integration.

Drupal 8.8 UI

Proposed UI for Drupal 8.8

 

Sign up to email alerts to find out more or talk to our Drupal Web Development Experts about upgrading your Drupal Platform.

 

Get in touch
Mar 05 2019
Mar 05

Running a business is demanding. To be successful, leadership must be equipped with a broad range of skills, from financial astuteness to empathy for staff. Whilst developers have ample resources from which to draw reference on best practice, for managers and business leaders knowledge gained is often deemed competitive advantage, and so is kept secret or accessed only through expensive training or courses.

Working in open source brings many benefits, including the fostering of knowledge transfer that transcends mere code. It is to the benefit of all that business leaders in Drupal share this openness and are willing to reveal lessons learnt, or formulae for success, that in other industries would remain behind closed doors. A fine example of this mindset is DrupalCamp London CXO, and this year's incarnation was no exception.

Prof. Costas Andriopoulos, Cass Business School, spoke about leadership and innovation in scaling enterprises. He explained that it’s far wiser to sort out your business early, when you are small and well ahead of scaling because what kills businesses is success, age and size.

Prof. Costas Andriopoulos

 

Success: breeds complacency, overstretching, even arrogance. All of these can be the downfall of your business.

Age: of leadership and team leads to them becoming slower and more stuck in their ways. Andriopoulos stated that curiosity drops with age: a child asks over 400 questions per day, but by adulthood and towards later life this drops dramatically.

Size: brings bureaucracy, slowing the pace at which information disseminates. Layers of management become more risk averse. Humans are natural hoarders; it's normal, he says, for people to add things, but we hold on to them too long. This slows businesses down.

To maintain momentum in decision making he recommended all meetings and team sizes be kept manageable: four or five people, with the best team being two. There's nowhere to hide; you have to participate. In large meetings people often repeat one another or say nothing at all.

Andriopoulos recommended doing a pre-mortem when facing challenging projects. Split the team in two: half of them imagine the plan has been put in motion and failed terribly, then write a story of what happened. The other half imagine that it succeeded and write their story of how that happened. Doing so equips you with a variety of scenarios to consider before the work begins.

Rasmus Lerdorf founder of the programming language PHP

 

Rasmus Lerdorf, founder of the programming language PHP, gave a potted history of how the language came to be and prospered. What struck me was how innovation and breaking free of the norm were key drivers. In the early days, when hardware and networks were far slower than we know today, Rasmus questioned the merit of querying databases without the ability to reduce the verbosity of responses. He introduced the "LIMIT" clause, something we all take for granted now, to bring efficiency gains to early internet applications.

Upgrading to PHP 7 across the web would remove 7.5 BN Kg carbon dioxide emissions

Rasmus Lerdorf

 

This ethos remains today. Lerdorf stressed the importance of upgrading to PHP 7 or above, as the dramatic performance improvements significantly reduce the physical hardware required to support PHP applications. Since PHP powers over 70% of internet sites, he estimates our combined upgrade efforts would yield 15 BN KWh in energy savings and 7.5 BN Kg less carbon dioxide emissions.

Michel Van Velde, founder of One Shoe agency

 

Michel Van Velde, founder of One Shoe agency, spoke openly of the challenges his business faced in 2017 and how a combination of reading and hiring a personal coach helped him evolve his approach to leadership, behaviour and in doing so the actions of his staff.

His presentation was a shining example of how business leaders in open source act differently. Whilst on the face of it counterintuitive, by sharing how he overcame adversity in his life with his potential competitors, Michel was actually helping his peers avoid those same pains, meaning we all rise. In doing so he is contributing to a virtuous circle.

Van Velde put his success in 2018 down to a combination of three factors, rooted in knowledge of three leadership models and an appreciation of how to apply them to his circumstances.

The Drama Triangle: defines any conflictual situation as having a victim, a rescuer and a persecutor. To oversimplify: the victim typically takes the "poor me!" stance, the rescuer is the one who might say "Let me help you!", and the persecutor adopts the "It's all your fault!" stance.

Radical Candor: is the ability to Challenge Directly and show you Care Personally at the same time. “Radical Candor really just means saying what you think while also giving a damn about the person you’re saying it to”

Transactional Analysis: considers that we each have internal models of parents, children and also adults, and we play these roles with one another in our relationships. If we grow an appreciation in our daily conversations, meetings and conflicts what state we and others are in (parents, children, adults) we can begin to realise how to avoid or deal with conflict.

Van Velde explained that by rewiring how he dealt with staff not meeting expectations, handling situations in a way that offered his team the opportunity to recognise their shortcomings themselves, and providing opportunities for them to address their behaviour, he created a positive environment in which his staff could grow.

Melissa Van Der Hecht presenting on “Why we need to be more open about diversity in tech”

 

Melissa Van Der Hecht’s presentation on “Why we need to be more open about diversity in tech” was a breath of fresh air. I can never hear enough on this topic, and I found her angle refreshing.

Rather than specifying diversity through gender, race or religion, she saw diversity as that which makes us stand out, what makes us special. She talked about the fact that as a female in tech you have to work harder, a lot harder, to convince men you are worthy of respect and to have your ideas recognised as having merit. Van Der Hecht said this is unrelenting: at best exhausting, at worst leading to burnout, and she reported that those from minority groups suffer double the burnout rate of those in the majority.

Van Der Hecht went on to explain that unconscious bias is really hard to adjust. She spoke of the “Surgeon’s dilemma”, a test for unconscious bias, and admitted she fell for the riddle herself. I urge you to take the test: how did you fare?

Watch this short video, as a further example used in the presentation illustrating the point. For me, rather than despair, it actually gave hope that generations soon entering the workplace could bring a tidal wave of impressive minds.

[embedded content]

 

Van Der Hecht highlighted that diverse teams are more productive, more innovative and creative. There is a strong correlation between diversity and increased innovation.

According to Forbes.com companies with more diverse teams reported 19% higher revenue due to innovation

 

I always remember Erynn Petersen, Executive Director of Outercurve an OSS foundation, speaking at DrupalCon Austin. She cited data showing that diversity leads to better performance in business. It’s hard to ignore these facts, unwise not to act upon the evidence.

I couldn’t help but notice that while Melissa was speaking to an audience of ~100 people, only 3 were female and few were of mixed race. True, they were from across Europe, but the male dominance alone was striking. The Drupal community is committed to diversity, and during the weekend event it was striking to me how much more diverse the attendee mix was. There is clearly a way to go in fostering diversity in management and agency leadership. We should all consider how our businesses create cultures which foster diversity. We all have a lot to gain from that.

I’ve strived in our business to create a culture which embraces all. It takes time and we are constantly learning, listening and evolving. These things don’t happen overnight and take commitment and a willingness to change.

We are fortunate in Drupal to have a vast community with many inspiring contributors from diverse backgrounds. Next time you are on Slack, at a meetup, DrupalCamp or Con why not take time out to open a conversation with someone quite different to you. It’s quite possible you’ll begin to realise being different is what makes them special. Thanks Melissa!

 

Feb 08 2019
Feb 08

Web monetization is a browser API which allows the creation of (micro)payments between the reader (user agent) and the content provider (website).

This is one way of getting paid for valuable content.

Today, Coil provides a web monetization service using the Interledger Protocol (ILP).

We have built a simple module to integrate Coil monetization with a Drupal website.

The simple setting for adding a payment pointer is demonstrated in this video:

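For context, the integration itself is small: the Web Monetization draft spec only requires a meta tag carrying the ILP payment pointer in the page head. A minimal sketch of how a module could attach it in Drupal 8 follows; the module name and payment pointer are hypothetical placeholders, not our module's actual code.

```php
/**
 * Implements hook_page_attachments().
 *
 * Adds the Web Monetization meta tag to every page. The payment
 * pointer below is a placeholder; in practice it would come from
 * the module's settings form.
 */
function mymodule_page_attachments(array &$attachments) {
  $attachments['#attached']['html_head'][] = [
    [
      '#tag' => 'meta',
      '#attributes' => [
        'name' => 'monetization',
        'content' => '$wallet.example.com/alice',
      ],
    ],
    // A unique key for this head element.
    'mymodule_monetization',
  ];
}
```

Monetization-aware browsers (or extensions) read this tag and stream payments to the pointer while the visitor reads the page.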

Feb 05 2019
Feb 05
How do you start contributing to Drupal without code?


Outlets for contributing to Drupal beyond code, whilst abundant, are not always evident to those interested in doing so. I want to help people become better acquainted with ways to get involved and how to start their contribution journey, be they completely new to Drupal or simply yet to find an outlet.

At DrupalCamp London 2019 I will be speaking about non-code contributions, and I invite you to share your experiences and ideas: open my eyes to the many ways individuals can create impact with their time, without knowing code at all.

Your participation will ensure attendees discover a multitude of ways to get involved well beyond my experiences alone. In doing so you are embodying the principle of contributing without code.

Therefore I have set up a Google Form to collect suggestions from the community. Please complete it with as many ideas as you have and I look forward to reading your suggestions!

 

Share your ideas

Jan 30 2019
Jan 30

Drupal Camp London is a 3-day event celebrating the users, designers, developers and advocates of Drupal and its community! Attracting 500 people from across Europe, it’s one of the biggest events in the Drupal calendar after DrupalCon.

CxO day, on Friday 1st March, is dedicated to businesses using Drupal, encouraging them to lead development and innovation of the Drupal platform.

The weekend (2nd & 3rd March) then packs in over 40 sessions covering seminars, Birds of a Feather talks, sprints and much more. Over the weekend there are also 3 keynotes addressing the biggest upcoming changes to the technical platform, its place in the market, and the wider Drupal community.

 

Ryan Szrama delivering a session at Drupal Camp 2018. Photo cred: @pdjohnson

 

Our Weekend Sessions

Our Drupal Director, Paul Johnson, will be focussing on contributions to Drupal, but not as you know them. The session will be diving into a whole host of ways to contribute to Drupal that don’t require code. From content writing to photography, this session includes something for everyone and is guaranteed to spark some inspiration for new Drupal community initiatives.

 

Our UX specialist, James Genchi, will also be speaking at Drupal Camp London about UX and accessibility. With a background in development and a passion for user experience, James’ understanding of accessibility within design and development is vast. Following a substantial redesign for the University of West London, James will be sharing the unique design considerations he applied to achieve global accessibility within the website. In particular, he will be highlighting how accessible design is not just for people with a permanent disability, but also for those with a temporary and situational disability.

Be sure to follow us on Twitter so you don't miss either of these sessions!

Why should you attend Drupal Camp? 

[embedded content]

 

Exchange knowledge

Discover the latest in UX, design, development, business and more. There’s no limit to the types of topics that could come up...as long as they relate to Drupal that is!

Network

From C-Level and Site managers to developers and designers, over 500 people attended last year. Meet the best and brightest in the industry at talks and breakouts.

Recruit and be recruited

A wide variety of business and developers attend Drupal Camp, make the most of it by creating connections to further your own career or grow your agency team.

Reasons for attending the weekend event. Img cred: Drupal Camp

 

 

We hope to see you there! Grab your ticket while you still can on the Drupal Camp website.

Tickets 

And be sure to follow us on your social platform of choice to find out when our sessions are!

 

Jan 23 2019
Jan 23

Just over a year ago I decided to repurpose an internal contrib-focused meeting into an open meeting to support contributing to Drupal, called the Contrib Half Hour. Along the way we moved its time a little later to avoid conflicting with another important community initiative, and then restructured it to add focused meetings for certain topics. To make things even better, almost all meetings have been recorded and uploaded to our YouTube channel. 2019 is going to see some additional changes that I'm excited to start on.

Our Q&A have issues

Throughout the year it became apparent that there was a huge overlap between the Q&A meetings, where we'd focus on general questions and discussions from the community, and the issues lab, where we'd focus on specific drupal.org project issues. While there's definitely a case for both, the overlap arose somewhat naturally as the Q&A days often had us looking at Drupal issues.

As a result of this, we're going to combine the Q&A and Issues Lab days into one that'll be more open ended, letting us focus on general discussions some days and specific issues on others.

The testing shall continue until code improves

We're also continuing our dedication to helping people learn how to write code tests for their modules and themes. While during 2018 we focused on functional testing for Drupal 7 and 8, I'm aiming to expand our coverage (ba-dum-dum) in 2019 to also include unit testing.
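As a flavour of the difference, a unit test extends Drupal's UnitTestCase and runs without a database or an installed site, which makes it much faster than the functional tests we covered in 2018. Here's a minimal sketch; the module, class, and helper function are hypothetical, purely for illustration.

```php
namespace Drupal\Tests\mymodule\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * A minimal unit test sketch. No site install or database is
 * needed, so it runs in milliseconds under PHPUnit.
 *
 * @group mymodule
 */
class SlugifyTest extends UnitTestCase {

  /**
   * Tests a hypothetical pure helper that builds URL-safe slugs.
   */
  public function testSlugify() {
    // Stand-in for a real helper from the module under test.
    $slugify = function ($text) {
      return trim(preg_replace('/[^a-z0-9]+/', '-', strtolower($text)), '-');
    };
    $this->assertSame('hello-drupal-8', $slugify('Hello, Drupal 8!'));
  }

}
```

Because nothing touches Drupal's container or database, tests like this are a good first step before tackling kernel or functional tests.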

I'd also like to extend an invitation to folks who are starting to write tests to join our testing labs, where we'll be able to help.

Upgrade Lab: Time to upgrade

It was announced late last year that Drupal 9 will be released in June 2020 and that Drupal 7 and 8 will reach their end-of-life in November 2021. The main driving factor behind these dates is that the various libraries Drupal 7 and 8 depend upon will all have reached their end-of-life in November 2021 and so will no longer receive security updates from their respective maintainers. To avoid needing to come up with a plan to provide security coverage for a huge number of out-of-date 3rd party libraries, the Drupal core maintainers are dropping support when the other libraries also stop being supported.

It was also revealed that the usual upgrade anxiety for major releases of Drupal (5 to 6, 6 to 7, etc) would not be the case for Drupal 9. The plan is to release Drupal 9.0.0 and the final minor release of Drupal 8 on the same day, with the only difference being that all deprecated D8 APIs are removed from D9. As a result it will be relatively easy to upgrade from Drupal 8 to 9: “just” update all contrib code and custom code along the way so that it no longer uses the deprecated APIs, and in theory everything should just work.
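To make that concrete, a typical deprecation fix looks like the following. The function names here are hypothetical, but the deprecation itself is real: drupal_set_message() was deprecated in Drupal 8.5 and is removed in Drupal 9.

```php
/**
 * Before: drupal_set_message() still works in Drupal 8 but
 * triggers a deprecation notice and is gone in Drupal 9.
 */
function mymodule_notify_user() {
  drupal_set_message(t('Settings saved.'));
}

/**
 * After: the messenger service works in Drupal 8.5+ and
 * Drupal 9 alike, so the module can be made 9-ready well
 * ahead of the 9.0.0 release.
 */
function mymodule_notify_user_updated() {
  \Drupal::messenger()->addStatus(t('Settings saved.'));
}
```

Tools like the drupal-check package can scan a codebase for exactly this kind of deprecated call.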

With this in mind we think it's time for people running Drupal 6 and 7 sites to start looking to upgrade to Drupal 8. Towards that goal we're going to have a regular meeting where we look at the steps to upgrade a site to Drupal 8 using the bundled Migrate system. We'll look at what's involved, how it works, how to plan for it, and how to help contributed modules support Drupal 8's upgrade solution. I'm intending that we'll be able to collaborate on improving both core and contrib's upgrade functionality, and in so doing help all sites looking to upgrade. I'm also hoping that we might be able to provide some assistance to folks attempting custom upgrades using Migrate's APIs, but we'll see how it goes.

Schedule

Our schedule for the next few months looks like this:

  • January 3: Q&A
  • January 10: Turning custom code into OSS
  • January 17: Q & A & Issues
  • January 24: Testing Lab
  • January 31: Upgrade Lab
  • February 7: No meeting
  • February 14: Mirroring git repositories
  • February 21: Q & A & Issues
  • February 28: Testing Lab
  • March 7: Upgrade Lab
  • March 14: Presentation TBD
  • March 21: No meeting
  • March 28: No meeting
  • April 4: Q & A & Issues
  • April 11: Testing Lab
  • April 18: Upgrade Lab
  • April 25: Presentation TBD
  • May 2: Q & A & Issues
  • May 9: Testing Lab
  • May 16: Upgrade Lab
  • May 23: Presentation TBD
  • May 30: Q & A & Issues

Last updated on Feb 14.

Same Bat Time!

We're going to continue meetings at the same time each month, using the same video conferencing provider.

And remember, if you're not able to join us then you can always catch up later as all meetings are recorded, when I don't forget to hit the record button that is.

See you then!

Jan 03 2019
Jan 03

Context

The EK application has a module that stores personal documents for users. When a user account is deleted, those documents may be transferred to another account.

To achieve that, we need to alter the user account cancel form when building, validating and submitting it.

Let's review the 3 steps.

BUILD

The form, before altering, looks like this:

cancel user account before hook

We need to add a field to select another user account to which the document of the canceled account will be moved to.

To achieve that we Implements hook_form_alter() in MyModule.module:

function MyModule_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
    
  if ($form_id == 'user_multiple_cancel_confirm') {
    
    $form['move_uid_documents'] = [
      '#type' => 'textfield',
      '#title' => t('Move user documents'),
      '#autocomplete_route_name' => 'MyModule.user_autocomplete',
      '#description' => t('Select to whom to transfer personal documents'),
    ];
    $form['#validate'][] = 'MyModule_form_user_delete_validate';
    $form['#submit'][] = 'MyModule_form_user_delete_submit';
    
     return $form;
      
  }
}

What we can notice here is:

  • We alter the targeted form, identified by its form ID; in this case "user_multiple_cancel_confirm";
  • We create the required field by returning $form['move_uid_documents'];
  • We add two new callbacks, $form['#validate'][] for validation and $form['#submit'][] for submission, which are handled in the next steps.

After altering, the form will look like this:

cancel user account after hook

We now have a new field to select a user. In our case, we also have an autocomplete function that helps select an existing user. However, we need to ensure that the value entered in the field really is an existing user; that part is handled by the validation.
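The autocomplete route referenced in the form ('MyModule.user_autocomplete') is custom; Drupal core does not define it. A minimal sketch of a controller that could back such a route follows; the class and method names are hypothetical, and the route itself would be declared in MyModule.routing.yml pointing at this method.

```php
namespace Drupal\MyModule\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\user\Entity\User;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

/**
 * Hypothetical controller backing the MyModule.user_autocomplete route.
 */
class UserAutocompleteController extends ControllerBase {

  /**
   * Returns JSON matches for user names starting with the typed text.
   */
  public function handleAutocomplete(Request $request) {
    $matches = [];
    if ($input = $request->query->get('q')) {
      $uids = \Drupal::entityQuery('user')
        ->condition('name', $input, 'STARTS_WITH')
        ->range(0, 10)
        ->execute();
      foreach (User::loadMultiple($uids) as $account) {
        $name = $account->getAccountName();
        $matches[] = ['value' => $name, 'label' => $name];
      }
    }
    return new JsonResponse($matches);
  }

}
```

The '#autocomplete_route_name' property on the textfield wires the widget to this endpoint, which receives the typed text in the "q" query parameter.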

 

VALIDATE

The validation is defined in MyModule_form_alter by adding a validate callback named MyModule_form_user_delete_validate. Therefore, we need to create a function with that particular name in MyModule.module.

function MyModule_form_user_delete_validate(&$form, \Drupal\Core\Form\FormStateInterface $form_state) {
  if ($form['#form_id'] == 'user_multiple_cancel_confirm') {
    if ($form_state->getValue('move_uid_documents') <> '') {
      $query = "SELECT uid FROM {users_field_data} WHERE name = :n";
      $data = \Drupal::database()->query($query, [':n' => $form_state->getValue('move_uid_documents')])
        ->fetchField();
      if ($data) {
        $form_state->setValue('move_uid_documents', $data);
      }
      else {
        $form_state->setErrorByName('move_uid_documents', t('Unknown user to move documents'));
      }
    }
  }
}

Here the function checks against the users_field_data table that the entered name corresponds to a valid user ID.

If not, an error message will be displayed:

cancel user account validation error

If valid, however, we store the user ID value to be used in the next step: the submission.

SUBMISSION

As with validation, the submission is defined in MyModule_form_alter by adding a submit callback named MyModule_form_user_delete_submit.

function MyModule_form_user_delete_submit(&$form, \Drupal\Core\Form\FormStateInterface $form_state) {
  if ($form['#form_id'] == 'user_multiple_cancel_confirm') {
    if ($form_state->getValue('move_uid_documents')) {
      foreach ($form_state->getValue('accounts') as $key => $id) {
        \Drupal::database()->update('MyModule_table')
          ->fields([
            'uid' => $form_state->getValue('move_uid_documents'),
            'folder' => t('Moved from user @u', ['@u' => $id]),
          ])
          ->condition('uid', $id)
          ->execute();
      }
    }
    \Drupal::messenger()->addStatus(t('Documents moved to user @u', ['@u' => $form_state->getValue('move_uid_documents')]));
  }
}

In the function above, we take the ID of each user account being cancelled and change it to the new user ID in the document table.

The function also displays a message to confirm that both the cancellation and the submit hook have been executed.

cancel user submit alert

Please feel free to comment or suggest improvements.

Thank you.

Jan 02 2019
Jan 02

With the year winding down the month was a little quiet, but we still got some good contributions going.

Client sponsored

Thanks to our awesome clients for giving us a chance to help make open source software better for everyone.

Self-directed

Mediacurrent provides some extra time during the week for folks to scratch their own itches, and sometimes people triage issue queues instead of watching football on TV :-)

Blog posts

A little light this month, but there are still two good blog posts from our team.

Contrib Half Hour

We squeezed in four Contrib Half Hour meetings into the month, despite the company being closed for Turkey Day.

Events

Lots of folks were working on their presentation proposals for DrupalCon Seattle 2019; see Tara’s blog post for details. There are also several events coming up soon that we’ll be attending, including DrupalCamp NJ and Florida DrupalCamp in February, and then NERDSummit in March.

Stay warm!

That’s it for this month. Hope everyone in the Northern Hemisphere stays warm, everyone in the Southern Hemisphere enjoys their summer, and the folks in the middle don’t brag too much!

Dec 17 2018
Dec 17

Drupal is an enormously welcoming community with countless online forums and community events to learn about the platform. Its open-source knowledge sharing and peer review are arguably second to none, and thanks to Acquia's Drupal certifications the Drupal learning process is becoming more consolidated.

However, nothing can quite beat the quality, focus, and hard work that goes into publishing a book. We’ve quizzed our Drupal developers and members of the Manchester tech community to find out which books every Drupal developer must read.


Drupal-Specific Books

 

Drupal 8 Module Development

Drupal 8 Module Development: Build and customize Drupal 8 modules and extensions efficiently

By Daniel Sipos

This book is a great welcome to Drupal 8, introducing you to the architecture of D8 and its subsystems, for a thorough foundational understanding. It guides you through creating your first module and continues to build upon your skills with more functionalities that all modern developers need to know.

View on amazon

Drupal Development Cookbook

Drupal 8 Development Cookbook - Second Edition: Harness the power of Drupal 8 with this recipe-based practical guide

By Matt Glaman

Using a fun, easy-to-follow recipe format, this book gives you an expansive look at the basic concepts of Drupal 8, in a step-by-step fashion. While this sometimes misses out on the ‘why’ of an action, it makes building new modules approachable and unintimidating for any level of developer.

View on amazon

Ambitious, intelligent, and a great Drupal developer? Check out our careers page.

CTI Careers

Drupal 8 Explained

Drupal 8 Explained: Your Step-by-Step Guide to Drupal 8

by Stephen Burge

Written in no-nonsense plain English, this book is great for any Drupal beginner. It’s been praised as the day-to-day reference book that any new developer should keep handy on their desk.

View on amazon

Drupal 8 Blueprints

Drupal 8 Blueprints: Step along the creation of 7 professional-grade Drupal sites

By Alex Burrows

This Drupal 8 guide will take you through 7 real Drupal 8 sites to demonstrate the latest practices in action. This all-encompassing view provides a look at the reasoning and methodology behind certain practices, and context for their larger impact on the site.

View on amazon

Definitive Guide to Drupal 7

The Definitive Guide to Drupal 7 (Definitive Guide Apress)

by Benjamin Melancon

While new sites are being built in Drupal 8, it’s important to remember that many of the sites you’ll work on and maintain are in Drupal 7. This comprehensive book provides the nuts and bolts of any Drupal 7 site, to build a powerful and extensible system. Some concepts are slightly dated, so we’d recommend cross-checking online occasionally.

View on amazon

Pro Drupal 7 Development

Pro Drupal 7 Development (Expert's Voice in Open Source)

by Todd Tomlinson

This book is for slightly more ambitious developers as it quickly jumps through the basic modules to the more complex. Breaking down the development of APIs and improvements to Drupal 7, this book will have any Drupal Developer producing complex modules in no time.

View on amazon

Essential Development Books

Many coding principles span development languages and frameworks. Here are our essentials for any developer seeking the ability to produce high quality, clean code.

The Clean Coder

The Clean Coder: A Code of Conduct for Professional Programmers

By Robert C. Martin

This book delves into the difference between good and bad code. Split into 3 parts, the book first discusses principles, patterns, and practices of writing clean code. Real world case studies follow, before the book finishes with the signs of bad code and problem solving skills needed to de-bug and refresh any code base.

View on amazon

CSS

CSS Secrets: Better Solutions to Everyday Web Design Problems

by Lea Verou

CSS Secrets explains how to code common 'real world' solutions with CSS. Condensing the most useful and practical examples, this book is a more exciting read than the often extensive paperbacks which try to cover absolutely everything.

View on amazon

PHP and Javascript

Learning PHP, MySQL & JavaScript 5e (Learning PHP, MYSQL, Javascript, CSS & HTML5)

By Robin Nixon

Begin expanding your language stack with this multifaceted book. PHP is essential for any Drupal developer as it forms the core language of the Drupal framework. Meanwhile, learning Javascript, CSS and HTML5 will empower you to deliver more complex solutions.

View on amazon

And lastly, an open source book about… open source

The Cathedral & the Bazaar

By Eric S. Raymond

While this piece is slightly dated, its underlying concepts are still highly relevant and give great insight into the origins and essence of open source. It’s also free, so definitely still worth having a skim through… if you can handle the formatting!

Read it for free

I hope you've found some great reads here; if you've got a personal favourite, please let us know below. Also, if you're interested in advancing your Drupal career, please check out our careers page to see if there's a position perfect for you.

Careers at CTI Digital

Oct 02 2018
Oct 02

Whilst at Drupal Europe last month, I was privileged to be invited by Drupal’s founder, Dries Buytaert, to a round table discussion, aimed at further marketing the Drupal project.

Bringing together a number of leaders from the Drupal community, we all shared the same desire to boost the marketable assets of the open source platform. One of the ways we hope to achieve this publicity is by creating a comprehensive, customer-facing "Pitch Deck".

The session began as a workshop, facilitated by Adam Goodman. He primed the group to start identifying and exploring opportunities to better convey the benefits of Drupal to the uninitiated. The ultimate objective is to encourage adoption of the Drupal platform. Consensus was reached that we focus upon three separate initiatives.

  

We're not competing with one another, yet we’re not helping each other either. Our role as leaders is to activate the assets that already exist in the community. Bert Boerland

The initial plan is to create a marketing resource that will present Drupal’s credentials in a persuasive manner. This slide deck will also contain impressive exemplar case studies, to ease the process of convincing an organisation or client to choose Drupal.

The Team

I volunteered to take overall responsibility for the creation of the end result. Joining forces with Suzanne Dergacheva and Ricardo Amaro, who bring rich, varied perspectives and skill sets, I feel confident providing the basis for this universal toolkit. But we can only be truly successful if many others contribute to our initiative. We need salespeople, marketers and copywriters to join our cause.

Get Involved Today

Providing a single, persuasive resource, available to all Drupal promoters, to sell the powerful advantages of Drupal will benefit everyone who uses it. With strong, consistent messaging, bolstered by the many Drupal success stories, the deck will better position all advocates to expand Drupal’s market share across many scenarios.

With a core team of fellow Drupal professionals, we plan to cover as many topics as we can identify, from security, accessibility and performance functionality through to specific industry verticals, like Higher Education or Media. The key intention is to show how Drupal can adapt to fit projects of all shapes and sizes, across all industries.

 

The Benefits

Many of Drupal’s competitors (think WordPress, Squarespace, etc.) are widely publicised and, consequently, innately popular. In many cases, Drupal may well be the ideal platform for a project, but it risks losing out to competing CMS providers because the success and potential of Drupal is not easily demonstrated.

Our intended users are sophisticated purchasers. As they ask more and more questions, our responsibility grows to equip agencies with comprehensive information. By using this collaborative resource, agencies will be able to accurately sell the Drupal platform, whilst spending more of their energy and resources on the services they deliver. Freed from writing and re-writing duplicated Drupal sales material, organisations can focus on promoting their unique strengths.


The Plan

We plan to kick off the project by identifying the high-level requirements and the mechanism to create the slide deck. From there, we hope to crowdsource for support, and seek volunteers from the wider business community. By recruiting sales people, marketers, copywriters and subject matter experts, we hope to create a well-rounded resource, targeted at the varied stakeholders of a new Drupal development project.

Brainstorm Notes from Drupal Europe Roundtable - Photo by Meike Jung

By working together, embracing open source ideals, we hope to rapidly achieve the first incarnation of the slide deck, ready for it to be built upon in the future. The sooner we create a draft, the sooner we can share the potential of Drupal with a wider audience. Projects like this prove that you needn’t be a web developer to be part of the welcoming Drupal community.

Get Involved!

If you’re interested in getting involved with this innovative project, please get in touch via our web form. Any contributions, big or small, will be gratefully received, as we strive to convert this idea into a reality.

Join the cause, let’s make Drupal better together!

Get Involved Today

Drupal.org Issue: Drupal "Pitch Deck" for Presenting to (Potential) Customers 

Sep 26 2018

The Drupal Unconference is coming up in November and we can’t wait! Following the huge success of last year's event, we are once again proud to be Silver Sponsors of this alternative annual conference.

As active members of the Drupal community, several of our team are already preparing lightning talks to pitch on the day. To secure attendance for the majority of our large Drupal team, we have just bought a batch of tickets. To avoid disappointment, we encourage you to do the same! 

Unconference Tickets

Co-organiser, Eli, on what to expect

This year’s Unconference will be held on 3rd November at The Federation, Manchester. The annual unconference breaks the mould, with an informal, accessible programme. All talks are planned on the day by the attendees rather than organisers. Representing open source ideals, Unconference recognises that the best ideas can come from anyone, no matter their experience. First-time speakers and long-term contributors have equal opportunity to share their insights into the Drupal Content Management System.

For the second year in a row, we are proudly sponsoring the event and attending en masse. Our developers are preparing talks on a wide range of topics: from front-end design using Pattern Lab, to a bold career change, swapping auto body repairs for Drupal development. The unplanned structure of the Unconference enables speeches that are reactive to recent topics and events. As such, we expect some competition for the most innovative talk this year!

Not sure what to talk about?

You can reach beyond Drupal core and open source code. Unconference presentations address a wide range of digital topics, with talks and insights expected to cover UX, databases, frameworks, security and front-end design. Web developers, devops, project managers, designers and marketers can all expect relevant and actionable takeaways from the event. Website owners and end users, no matter their technical experience, are welcomed into the inclusive conversation.

Unlike Drupal sprints, which focus on delivering working software and contributed modules, the Unconference is designed to be a rich learning environment. Offering real-world case studies and ideas, NWDUG invite anyone to share their digital experiences.

Hosted at The Federation, the 2018 event will be bigger and better than ever. With more space comes more opportunities for different speakers and discussion groups.

Explore the Venue

Last year’s event was a huge success, so we are optimistic for Unconference 2018 to be the best yet. We are excited to see new faces and new innovations from the open source community.

Join the welcoming Drupal community this November 3rd for a day that celebrates inclusivity, accessibility and open source software.

 

Find out more and order your tickets on the Unconference website. We'll see you there!

Unconference Tickets

Sep 21 2018

In this post we will share our experience installing a Drupal 8 application on an Amazon EC2 server running the latest Ubuntu 18.04 LTS.

Installing Drupal with Composer greatly simplifies system maintenance and future updates.

AWS image

First you will need to create an EC2 instance with the proper AMI from Amazon Web Services. You can find the AMI with the locator.

We will not cover in detail this part, as we assume this is already covered by many other blogs and tutorials.

We picked the latest 18.04 LTS version of Ubuntu to meet the requirements of Drupal 8 with PHP 7.2.

Composer

Once your server is running, the next step is to install composer.

Once again, we will not go too much into details as composer installation is also widely covered.

In our case we followed similar procedure as the one described here.

Drupal

For the actual installation of Drupal with Composer, there is a guide at drupal.org with three options. We picked option A.

The repository is a Composer template for Drupal projects with a pretty good usage guide. The latest version will install Drupal 8.6.1.

We run the command:

git clone https://github.com/drupal-composer/drupal-project.git <MyAppName>

(note: the code will be copied into the folder "MyAppName" within your current location. For instance, if you are in /var/www, the application will be in /var/www/MyAppName)

At this point we edited the composer.json file to match our desired folder configuration. You need to manually edit the installer paths here before installing the application, or keep the default paths if you prefer.

"installer-paths": {
            "web/core": ["type:drupal-core"],
            "web/libraries/{$name}": ["type:drupal-library"],
            "web/modules/contrib/{$name}": ["type:drupal-module"],
            "web/profiles/contrib/{$name}": ["type:drupal-profile"],
            "web/themes/contrib/{$name}": ["type:drupal-theme"],
            "drush/Commands/{$name}": ["type:drupal-drush"]
        },

To edit, run command:

sudo nano MyAppName/composer.json

and edit "installer-paths". In our case we changed to:

Once you have the desired folder configuration, you can run the actual installation command:

composer -vvv install

(note: -vvv option is optional)

This will install the Drupal site.

Custom application

One of the purposes of a Composer-based installation is the ability to merge other composer files and install custom plugins and applications.

To be able to merge composer files, you need to install the composer-merge-plugin first with command:

composer require wikimedia/composer-merge-plugin

then run:

composer update --lock

You can now add additional plugins with their specific composer installer. In our case, we install our own application EK Management tools suite with the following command:

composer require arreasystem/ek:"dev-8.x-dev"

This will install the custom application.

You can merge composer.json paths via the "merge-plugin" entry under "extra" in the main composer.json:

sudo nano MyAppName/composer.json

For instance, add the custom plugin's composer.json path:

"extra": {
    "merge-plugin": {
        "include": [
            "modules/contrib/ek/ek_admin/composer.json"
        ],
        "recurse": true,
        "replace": false,
        "merge-extra": false
    }
},

And run:

composer update --lock

For instance, this will install the following libraries:

You may encounter an out-of-memory error with Composer when updating. This will happen with low-specification EC2 instances. To solve this problem, add swap memory on the Ubuntu server.

Create the swap file: sudo dd if=/dev/zero of=/swapfile bs=2M count=2048 (this creates a 4 GB swap file: 2048 blocks of 2 MB);

Set permissions: sudo chmod 600 /swapfile;

Format it as swap: sudo mkswap /swapfile;

Enable it: sudo swapon /swapfile.

With this installation, you will just have to run composer update to update your installation. This also comes with Drush and Drupal Console installed. Don't forget to run update.php after a core update if necessary.

We hope this short post has been useful. Feel free to add comment or questions.

Sep 12 2018

This week, thousands of members of the Drupal community have come together to share insights and to celebrate the power of open source. Embracing knowledge transfer, the Digital Transformation and Enterprise track stands out as accessible for developers, marketers and business owners alike. 

From ambition, through innovation and implementation, the Digital Transformation track is not strictly technology-focused; rather it looks to the real-world impact of Drupal and how its adoption can transform the nature of a business.

These presentations are accessible for ‘Beginners’ yet relevant to the top level of international organisations across industries; here's a detailed overview of our top talks: 

  1. The Digital Revolution at Chatham House
  2. BASF: Fostering a Contribution Culture Against a Backdrop of Secrecy
  3. Future of the Open Web and Open Source

Chatham House is a not-for-profit, non-governmental organisation with a mission to help governments and societies build a sustainably secure, prosperous and just world.

Founded in 1920, as a discussion group to prevent future wars, Chatham House has vastly transformed to its current world-leading position as a global independent policy institute. But its digital presence has struggled to evolve quite so prosperously...


Building Evidence for Change:

An audience survey in 2005 revealed that the Chatham House website was inaccessible and uninspiring. People were unable to access key reports and information in the “dull and academic” website.

Between 2004-2009, Josie Tree, Head of Digital Strategy and Development, built a case for the digital transformation at Chatham House. Providing evidence of success, building positive relationships and harnessing the power of healthy competition all proved vital. Working towards a clearer strategy, the plan was to ‘stop fire-fighting and refocus on priorities’.

Fear of change, cultural barriers and the ongoing battle for budget all hold back innovation; but Josie recognises that crisis can be an opportunity.

The Chatham House Website in 2004

 

Why Drupal?

The Chatham House website is content-heavy, with a set of complex requirements.

Drupal offers the editorial flexibility they need, along with an open source ethos that reflects the Chatham House commitment to knowledge transfer.

With a flexible platform and determined digital champions, the possibilities of Drupal are infinite. 
-Paul Johnson, Drupal Director

What’s more, as the Drupal Europe conference demonstrates, Drupal is well-supported, widely used, and comes with an innumerable choice of development partners.

Drupal acts as an ideal springboard for success, with endless possibilities.

Just the Beginning:

Moving to Drupal was just the beginning for the long-term digital transformation of Chatham House. The question remained:

How could digital better support the Chatham House mission?

A new strategy focused on improving the reputation of Chatham House, prioritising outputs, investing in marketing efforts and utilising insights from feedback. As with any strategy, this was all to be underpinned by measuring success KPIs and reporting.

The next steps for Chatham House involve a full website redevelopment project, with a user-centric design. 

With plans to upgrade to Drupal 8 and implement a new CRM system, the digital transformation of Chatham House has been ongoing for many years and is still only just beginning.

Collaboration is Key:

The clear takeaway from Josie’s speech was the vital importance of working together. Combining strategic partnerships with strong internal relationships has seen positive results for Chatham House (with website growth climbing from 40k to 260k monthly visits).

Digital champions throughout the organisation are placed to provide training in necessary skills and to break down the barriers of communication.

Finding the right external support is important, but the core digital team remain at the heart of the project. This team has grown from just a single person to a group of 12 over the last seven years. 

With growing collaboration between the research and digital outputs, Chatham House hope to enhance their international reach for a wider and more diverse range of audiences.

BASF are the world’s leading chemical company, combining economic success with social responsibility and environmental protection.

With over 115,000 employees and sales upwards of €64,000 million, BASF have always been sure to maintain their position at the cutting edge by carefully protecting their intellectual property.

 

The Translation Dilemma

As the global leader in their industry, it is vital that the entire BASF digital platform is accessible in over 50 local languages.

Unfortunately, to allow for this level of functionality, it became clear that each string of code needed to be crawled and translated individually.

Operating across multiple sites, in multiple languages, for multiple brands, the obstacle grew exponentially. With a collection of 146,880 strings, translation presented an unsustainable issue, in terms of time and budget.


The Solution:

As with most seemingly impossible scenarios, the solution is beautifully simple: BASF required a mechanism to flag and filter relevant strings. By focusing on those strings most urgently requiring translation, the overwhelming workload becomes manageable.

With the right connections we were led to the right solution, utilising the collective power of the Drupal community. 
- Paul Johnson, Drupal Director

 

From there, untranslated strings can be arranged by order of the most viewed, to ensure a priority system for the inevitable translation of any multilingual site. 

As any translations need to be maintained continually, this new interface streamlines the system for all future development. 

Drupal 8 String Translation Solution

 

Lessons learnt:

Despite concerns during the project, Drupal 8 was presented as the right choice once again. The flexible extensibility of the platform has enabled BASF to maintain competitive advantage.

Most importantly, BASF learnt that open source is not about saving money. The principles of open source enabled the shared problem to result in a mutual solution. The contribution made as part of the BASF project has dramatically increased Drupal’s core capabilities, for all.

The digital transformation here resulted in a tangible, code-based output, but also in a noticeable shift for the company’s mindset. Sometimes intellectual property can increase in value once it is shared.  

On Thursday morning, the Digital Transformation track will look to the future. The founder of Drupal, Dries Buytaert, alongside key players from Google, Mautic and the Drupal Association, will discuss life at the forefront of the digital industry.

This keynote session will address the opportunities, as well as the responsibility, that come with leading one of the largest open source communities in the world.

 

You can catch the Drupal Europe speeches live streamed on YouTube.

Or, to find out how you can undertake your own digital transformation with Drupal, speak to one of our Digital Strategy and Consultancy experts. 

Speak to an expert

 

Sep 03 2018

...And the story of how two unlikely organisations drove innovation in the Drupal Platform.

While delivering ongoing support for an energy client's Drupal website, we wanted to implement a Postcode Lookup feature with auto-complete functionality. Existing solutions were either too basic or needlessly expensive, but we couldn’t justify building a new solution from scratch. As luck would have it, I discovered at our weekly Drupal 8 meeting that my colleagues working on a not-for-profit site required autocomplete capabilities for the address lookup on the donations page.

These two very different organisations shared a mutual requirement. The value of a Postcode Lookup across different sectors and services warranted the investment in developing the module and contributing it to the open source community.

For the not-for-profit, the team had used the PostCodeAnywhere PHP API (now Loqate), writing their own JavaScript to implement the autocomplete functionality. But this was proving more and more time consuming; initially, it wasn’t even clear that the service offered a JavaScript widget.

The solution that the team built for the client was quite custom and specific, utilising the Address module’s more rigid form fields. So the end result wasn’t something we could quickly or easily contribute back to the community.

Clearly this was a feature that would benefit the Drupal experience if there were a permanent, publicly available fix. When discussing how different organisations might use this feature, I came to the conclusion that it needed to be available as part of the widely adopted Webform module, which provides the popular drag-and-drop interface.


The Solution

The Postcode Lookup is fully responsive, has autocomplete capabilities, functions smoothly on an enterprise level, and has a comprehensive database to pull from for global addresses. All of this is available as a stand-alone widget, or as part of the existing webform module.

The Drupal 8 Loqate module in action


How it works

A user starts to type a postcode and the field will suggest addresses based on the text entered. Upon choosing an address from the drop-down list, a whole set of address fields are populated. This was achieved using Loqate’s (then PostCodeAnywhere) paid “Address Capture” JavaScript API, which was really quick and easy to implement.

To add Postcode Lookup to the Webform, we created the Loqate module. All you need is the Webform module and a Loqate API key and you’re up and running with a Postcode Lookup autocomplete field that you can put in any form on your site.

But that's not the end of it…

Going forward we are planning to add:

  • Integration with the Address module: This could have extensive use-cases, as this module is used heavily by the Commerce module and would make setting addresses for buying things significantly easier.
  • The ability to automatically change required fields based on a user's location, to improve international experience.
  • Toggles for manual entry vs postcode entry.
  • General code improvements, for better maintainability and more use-cases.

Co-Author: Rakesh James, Drupal Developer at CTI Digital

Cover Photo Cred: @thenomadbrodie 

Interested in working with us? Check out our vacancies here.

Careers at CTI Digital

Jul 26 2018

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.
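To make the comparison step concrete, here is a toy sketch of the core idea. This is an illustration only: real tools (like the visual regression service used later in this post) decode actual image files and handle anti-aliasing and tolerances, whereas here a "screenshot" is just a flat array of pixel values, and the function names are made up for the example.

```javascript
// Toy sketch of visual regression comparison (illustrative names, not a real API).
// A "screenshot" here is a flat array of pixel values.
function misMatchPercentage(baseline, latest) {
  if (baseline.length !== latest.length) return 100; // different dimensions: total mismatch
  let diffs = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== latest[i]) diffs++;
  }
  return (diffs / baseline.length) * 100;
}

// A test passes when the mismatch is within a small tolerance (here 1%).
function isWithinMisMatchTolerance(baseline, latest, tolerance = 0.01) {
  return misMatchPercentage(baseline, latest) <= tolerance * 100;
}

const base = [0, 0, 0, 255, 255, 255];
const same = [0, 0, 0, 255, 255, 255];
const changed = [51, 51, 51, 255, 255, 255]; // e.g. a button colour shifting #000 -> #333

console.log(isWithinMisMatchTolerance(base, same));    // true
console.log(isWithinMisMatchTolerance(base, changed)); // false
```

The real tools do exactly this in spirit: produce a mismatch score between baseline and latest screenshots, and fail the test when it exceeds a configured tolerance.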

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I’m also going to run my tests on Browserstack so that I can test IE/Edge without having to install a VM or anything like that on my Mac.

Process

Let’s get everything setup. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, and a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{
  "name": "visreg",
  "version": "1.0.0",
  "description": "Website with visual regression testing",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The Browserstack wdio package: so that we can run our tests against Browserstack, instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: this is what provides the screenshot capturing and comparison functionality
      • Node notifier: this is totally optional but supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check.

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I’ve specified. While they’re running, we can head over to https://automate.browserstack.com/ to see the tests being run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. I’ll make a small change to the button background color which might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let’s pretend this is actually a desired change. The original component was created before the colour was finalised, black was used as a temporary colour, and now we want to capture the update as the official baseline. Simply move the “latest” image into the “baselines” folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you’re creating a new component, or you run the suite and find a regression in just one image, it is useful to be able to run a single test instead of the entire suite. This is especially true once you have a large collection of test files covering dozens of aspects of your site. Let’s take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. This really wasn’t necessary on the page either, but whatever.

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because it’s globbing, and running every file that ends in .test.js anywhere in the _patterns directory. The problem is this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to just run the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.
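For reference, here is a minimal sketch of what those custom scripts might look like in package.json (the file names and paths here are assumptions based on the description above, not the project’s actual config):

```json
{
  "scripts": {
    "test": "./node_modules/webdriverio/bin/wdio tests/config/wdio.conf.js",
    "test:quick": "./node_modules/webdriverio/bin/wdio tests/config/wdio.conf.quick.js"
  }
}
```

With scripts shaped like this, npm test -- --spec some/file.test.js expands to ./node_modules/webdriverio/bin/wdio tests/config/wdio.conf.js --spec some/file.test.js.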

If I scroll up a bit, you’ll see that when I ran npm test there were six passing tests. That is one test for each browser for each test file. We have two tests, and we’re checking against three browsers, so that’s a total of six tests that were run.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Tests Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.
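As a rough sketch of how such a file can be structured (this is an assumption about the setup, not the project’s actual file), it can require the main config and override only the capabilities list:

```javascript
// wdio.conf.quick.js (hypothetical sketch): reuse the main config,
// overriding the browser matrix so tests run against Chrome only.
const mainConf = require('./wdio.conf.js');

const config = Object.assign({}, mainConf.config, {
  capabilities: [{ browserName: 'chrome' }]
});

exports.config = config;
```

Swapping the capabilities entry for, say, a specific IE version is all it would take to target a different browser.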

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick and you (as well as the rest of your team) have to remember the file name. Capturing the file name in a second custom script simplifies things.

When I run npm run test:quick, you’ll see that two tests were run. We have two tests, and they’re run against one browser, so that simplifies things quite a bit. And you can see it ran in only 31 seconds. That’s definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser and took 28 seconds. There’s actually not a huge difference between this and the last run because we can run three tests in parallel. And since we only have two tests, we’re not hitting the queue which would add significantly to the entire test suite run time. If we had two dozen tests, and each ran against three browsers, that’s a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser, as well as when you’re refactoring code to make it more performant and want to make sure your changes don’t break anything significant (or if they do, alert you sooner rather than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Web Chef Brian Lewis
Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

Web Chef Dev Experts
Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
Jul 11 2018
Jul 11
July 11th, 2018

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:

footer-component.twig:

...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...

This makes the image load for Drupal, but fails for Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:

footer-component.yml:

footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page where we included the footer component, we had to add that line to the yml file for the page. e.g:

basic-page.twig:

...
{% include '/whatever/footer-component.twig' %}
...

basic-page.yml:

...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab’s global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{
  "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png"
}

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.

We did this with our icons in Emulsify:

_icon.twig

paths.json

End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don’t run into namespacing issues. We accounted for this in the header.json file by namespacing everything under the “header” array. That way the footer nav doesn’t pull our header menu items, or vice versa.
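For illustration, the namespaced data might look something like this (the keys and menu items here are hypothetical, not the actual Emulsify files):

```json
{
  "header": {
    "menu_items": [
      { "text": "About", "url": "/about" },
      { "text": "Contact", "url": "/contact" }
    ]
  }
}
```

A component then reads header.menu_items, so a footer data file can define its own menu_items under a footer key without collisions.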

An example homepage, home.twig, that pulls menu items for the header and the footer from data.json files:

header.json

footer.json

Web Chef Brian Lewis
Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

Web Chef Dev Experts
Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
Jun 06 2018
Jun 06
June 6th, 2018

I recently had the privilege of helping PRI.org launch a new React Frontend for their Drupal 7 project. Although I was fairly new to using React, I was able to lean on Four Kitchens’ senior JavaScript engineering team for guidance. I thought I might take the opportunity to share some things I learned along the way in terms of organization, code structuring and packages.

Organization

As a lead maintainer of Emulsify, I’m no stranger to component-driven development and building a user interface from minimal, modular components. However, building a library of React components provided me with some new insights worth mentioning.

Component Variations

If a component’s purpose starts to diverge, it may be a good time to split the variations in your component into separate components. A perfect example of this can be found in a button component. On any project of scale, you will likely have a multitude of buttons ranging from actual <button> elements to links or inputs. While these will likely share a number of qualities (e.g., styling), they may also vary not only in the markup they use but interactions as well. For instance, here is a simple button component with a couple of variations:

const Button = props => {
  const { url, onClick } = props;
  if (url) {
    return (
      <a href={url}>
        ...
      </a>
    );
  }
  return (
    <button type="button" onClick={onClick}>
      ...
    </button>
  );
};

Even with the simplicity of this example, why not separate this into two separate components? You could even change this component to handle that fork:

function Button(props) {
  ...
  return url ? <LinkBtn {...props} /> : <ButtonBtn {...props} />;
}

React makes this separation so easy, it really is worth a few minutes to define components that are distinct in purpose. Also, testing against each one will become a lot easier as well.

Reuse Components

While the above might help with encapsulation, one of the main goals of component-driven development is reusability. When you have built and tested something well once, building something nearly identical is not only a waste of time and resources, it also opens you up to new and unnecessary points of failure. A good example from our project is a couple of different types of toggles. For accessible, standardized dropdowns, we introduced the well-supported external library Downshift:

In a separate part of the UI, we needed to build an accordion menu:

Initially, this struck me as two different UI elements, and so we built it as such. But in reality, this was an opportunity I missed to reuse the well-built and tested Downshift library (and in fact, we have a ticket in the backlog to do that very thing). This is a simple example, but as the complexity of a component (or a project) increases, you can see where reuse becomes critical.

Flexibility

And speaking of dropdowns, React components lend themselves to a great deal of flexibility. We knew the “drawer” part of the dropdown would need to contain anything from an individual item to a list of items to a form element. Because of this, it made sense to make the drawer contents as flexible as possible. By using the open-ended children prop, the dropdown container could simply concern itself with container-level styling and the toggling of the drawer. See below for a simplified version of the container code (using Downshift):

export default class Dropdown extends Component {
  static propTypes = {
    children: PropTypes.node
  };

  static defaultProps = {
    children: []
  };

  render() {
    const { children } = this.props;
    return (
      <Downshift>
        {({ isOpen }) => (
          <div className="dropdown">
            <Button className="btn" aria-label="Open Dropdown" />
            {isOpen && <div className="drawer">{children}</div>}
          </div>
        )}
      </Downshift>
    );
  }
}

This means we can put anything we want inside of the container:

<Dropdown>
  <ComponentOne />
  <ComponentTwo />
  <span>Whatever</span>
</Dropdown>

This kind of maximum flexibility with minimal code is definitely a win in situations like this.

Code

The Right Component for the Job

Even though the React documentation spells it out, it is still easy to forget that sometimes you don’t need the whole React toolbox for a component. In fact, there’s more than simplicity at stake: writing stateless components may in some instances be more performant than stateful ones. Here’s an example of a hero component that doesn’t need state, following Airbnb’s React/JSX style guide:

const Hero = ({ title, imgSrc, imgAlt }) => (
  <div className="hero">
    <img data-src={imgSrc} alt={imgAlt} />
    <h2>{title}</h2>
  </div>
);

export default Hero;

When you actually do need a class, there are some optimizations you can make to at least write cleaner (and less) code. Take this Header component example:

import React from 'react';

class Header extends React.Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

In this snippet, we can start by simplifying the React.Component extension:

import React, { Component } from 'react';

class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

Next, we can export the component in the same line so we don’t have to at the end:

import React, { Component } from 'react';

export default class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

Finally, if we make the toggleOpen() function into an arrow function, we don’t need the binding in the constructor. And because our constructor was really only necessary for the binding, we can now get rid of it completely!

export default class Header extends Component {
  state = { isMenuOpen: false };

  toggleOpen = () => {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  };

  render() {
    // JSX
  }
}

PropTypes

React has some quick wins for catching bugs with built-in typechecking abilities using PropTypes. When using a class component, you can also move your propTypes inside the component as static propTypes. So, instead of:

export default class DropdownItem extends Component { ... }

DropdownItem.propTypes = { .. propTypes };
DropdownItem.defaultProps = { .. default propTypes };

You can instead have:

export default class DropdownItem extends Component {
  static propTypes = { .. propTypes };
  static defaultProps = { .. default propTypes };

  render() { ... }
}

Also, if you want to limit the value or objects returned in a prop, you can use PropTypes.oneOf and PropTypes.oneOfType respectively (documentation).

And finally, another place to simplify code: you can destructure the props object directly in the function parameter definition. Here’s a component before this has been done:

const SvgLogo = props => {
  const { title, inline, height, width, version, viewBox } = props;
  return (
    // JSX
  );
};

And here’s the same component after:

const SvgLogo = ({ title, inline, height, width, version, viewBox }) => (
  // JSX
);

Packages

Finally, a word on packages. React’s popularity lends itself to a plethora of packages available. One of our senior JavaScript engineers passed on some sage advice to me that is worth mentioning here: every package you add to your project is another dependency to support. This doesn’t mean that you should never use packages, merely that it should be done judiciously, ideally with awareness of the package’s support, weight and dependencies. That said, here are a couple of packages (besides Downshift) that we found useful enough to include on this project:

Classnames

If you find yourself doing a lot of classname manipulation in your components, the classnames utility is a package that helps with readability. Here’s an example before we applied the classnames utility:

<div className={`element ${this.state.revealed === true ? 'revealed' : ''}`}>

With classnames you can make this much more readable by separating the logic:

import classNames from 'classnames/bind';

const elementClasses = classNames({
  element: true,
  revealed: this.state.revealed === true
});

<div className={elementClasses}>

React Intersection Observer (Lazy Loading)

IntersectionObserver is an API that provides a way for browsers to asynchronously detect changes of an element intersecting with the browser window. Support is gaining traction and a polyfill is available for fallback. This API could serve a number of purposes, not the least of which is the popular technique of lazy loading to defer loading of assets not visible to the user.  While we could have in theory written our own component using this API, we chose to use the React Intersection Observer package because it takes care of the bookkeeping and standardizes a React component that makes it simple to pass in options and detect events.
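To make the idea concrete outside of React, here’s a small sketch of the decision lazy loading hinges on (a hypothetical helper, not the package’s API): given IntersectionObserver-style entries, pick out the deferred image URLs that should now be loaded.

```javascript
// Sketch of the lazy-loading decision IntersectionObserver enables.
// Each entry mimics an IntersectionObserverEntry: a target element
// plus whether it currently intersects the viewport.
function srcsToLoad(entries) {
  return entries
    .filter((entry) => entry.isIntersecting)  // only elements now in view
    .map((entry) => entry.target.dataset.src) // the deferred image URL
    .filter((src) => Boolean(src));           // skip images with no data-src
}

// Simulated observer callback: one image scrolled into view, one not.
const entries = [
  { isIntersecting: true, target: { dataset: { src: '/img/hero.jpg' } } },
  { isIntersecting: false, target: { dataset: { src: '/img/footer.jpg' } } }
];

console.log(srcsToLoad(entries)); // → [ '/img/hero.jpg' ]
```

In the real component, the package hands you the equivalent of isIntersecting as an inView flag, at which point you swap data-src into src.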

Conclusions

I hope passing on some of the knowledge I gained along the way is helpful for someone else. If nothing else, I learned that there are some great starting points out there in the community worth studying. The first is the excellent React documentation. Up to date and extensive, this documentation was my lifeline throughout the project. The next is Create React App, which is actually a great starting point for any size application and is also extremely well documented with best practices for a beginner to start writing code.

Web Chef Evan Willhite
Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

Web Chef Dev Experts
Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
May 23 2018
May 23

On 19th May 2018, the day of the Royal Wedding and the FA Cup final, there was a lot going on in the UK. As a Chelsea fan, I wonder if there could be a better feeling than watching the FA Cup final at Wembley against Manchester United?! But instead, a small group of Drupal users gathered together in Edinburgh for DrupalCamp Scotland. I was invited to attend and chosen to speak about Drupal migration, rather than going to Wembley... I can confirm that it was an excellent decision and I had a great time there!

Most of the attendees at DrupalCamp Scotland

DrupalCamp Scotland kick-started with a welcome speech from Jennifer Milne, Deputy CIO of the University of Edinburgh. She spoke a bit about the history of the university, and I was amazed to learn that it is the sixth oldest university in the UK! It was very nice to see the Deputy CIO and Director of the University of Edinburgh so excited and happy to support the Open source Technology Community, including Drupal.

After that, Billy Wardrop gave a quick presentation about Drupal Europe. He explained the event details and how you can get involved. More importantly, he noted how vital the event is for Drupal users in Europe. Why? Because it is more than a technical conference; Europe's largest Drupal event in 2018 will be a festival that promotes real connection between people, who have the opportunity to meet each other and have a great time together. Drupal Europe will introduce additional dimensions, including summits for industry verticals, workshops, and a broad set of session tracks: back-end development, front-end, design/UX, site building, and project management.

The first talk at DrupalCamp Scotland was given by Jeffrey A. "JAM" McGuire, from Open Strategy Partners. He spoke about how to achieve ‘authentic communication’ in business, and how this maps to being more purposeful and generating greater impact in our Drupal contributions.

The essential components of authentic communication

He was also telling us how “Empathy” and “Context” have to work together in business communications:

The importance of Empathy and Context

Finally “JAM” talked about how to plan your own Contribution narrative. He explained that your “Contribution Narrative” must encapsulate the following:

  • Goals: what do you want or need from contribution
    • more features, quality, increase adoption
    • Help others, educate people
  • Who: could help you get there
  • Vibrancy signals: how a free and open source project appears worthy of engagement for someone: regular releases, state of issue queue, quality of docs and release notes, responsiveness in channels.
  • Why: the logical/emotional reasons someone should contribute
  • How to contribute: another vibrancy signal: you care enough to be explicit about how people should engage. Telling them your expectations is also setting their expectations.
  • Where to connect: how you prefer to communicate, where your people hang out.

Planning your Contribution Narrative

“Once you know and have mapped out all of this, you can weave it into all your comms and hopefully attract new contributors.” (Jeffrey A. ‘Jam’ McGuire)

The second session was from Julia Pradel and Christian Fritsch of the Thunder CMS team. They talked about the features provided by Thunder CMS as a Drupal distribution, explaining who is using it globally and how useful it is for media and publishing products. Then they talked about SharpEye, the theme testing tool they have also introduced to Drupal. SharpEye is a visual regression tool that takes screenshots and compares them in automated tests; they have open sourced the tool and you can download it here.

Julia and Christian's talk: Photos from Thunder CMS

The third talk was given by me! I talked about the Migrate API in Drupal 8, specifically how to migrate data from a CSV file and what is gained by migrating everything into Drupal. I have begun writing a blog series on the same theme, if you would like to find out more.

After my talk, we had a great lunch and enjoyed the surprising beauty of the Scottish summer.

Networking over lunch in the Scottish Sunshine

After lunch, the fourth session of the day was given by Audra Martin Merrick. As the former VP of Strategy & Operations for The Economist, Audra talked about Content and Community, or content and its audience. She explained how to use social media efficiently, to connect the right content with the right audience.


How do we convene an audience?

The fifth talk of the day came from Alice G Herbison. She talked about challenges identified from her own experience, as well as edge-case scenarios to be mindful of when conducting user research. She also explained how to use tree testing for user experience and why it is important.

The importance of User Research

We had a quick coffee break and got back for the sixth session of the day, from Martin Fraser. He talked all about CSS: examples of people writing bad CSS, advice on how they can write good CSS, and how CSS can provide user interactivity instead of JavaScript.

Poor Old CSS

The seventh and final session of the day came from Paul Linney, who talked about his ‘Voyage to Drupal 8’. As he is currently working on some sizeable government projects, he discussed the problems he faced with Drupal 7 features and custom modules, and how performance improved when they rewrote the custom modules using object-oriented programming. To support these important projects, he is looking to Drupal 8.

Looking to Drupal 8

The closing note was delivered by Duncan Davidson. He was one of the main organisers who made DrupalCamp Scotland possible.

I really enjoyed my time at DrupalCamp Scotland and would like to give special thanks to Billy, Duncan and all of the other organisers. Huge thanks to University of Edinburgh and Makkaru for sponsoring the event and great thanks to the attendees also. On a personal note, I thank Paul Johnson and CTI for supporting me to go there and speak.

If you would like to learn more about my insights into Drupal 8 migrations, follow my blog series.

May 14 2018
May 14

Last weekend I attended my first ever Drupal Sprint organised by NWDUG.

My background with Drupal is slightly unconventional: as a newbie to Drupal, who only became aware of its existence just under a year ago, I’ve had a fast-track journey with the software whilst working at CTI Digital. I don’t have an in-depth knowledge of the technical side of Drupal, but as I work in Marketing, exploring the history and culture around Drupal, I’m quickly becoming a bit of an expert. Therefore, attending my first sprint was a great insight into the world of Drupal.


The May Sprint Team

 

The sprint was organised by NWDUG and we were happy to offer up our office space, as a large space with comfy seats and plentiful coffee is something always appreciated at a sprint. The space is actually open for anyone to use, so if you’re interested in holding a Drupal event, please get in touch any time.

 

Here’s what went on at the sprint:

9:30am

Everyone arrived, chipper and excited for the day. The first thing I noticed was how friendly and welcoming everyone was, even though I obviously wasn’t your standard Sprint attendee. Coffee and some light bites were shared out before we headed to our event space, named ‘The Garden’, for the welcome brief, given by Phil Norton.

Phil talked us through what would happen throughout the sprint and took special care to ensure first-time sprinters were set up and knew how to get involved. There were also a few non-technical activities, for newbies like me to get involved with.


Phil's welcome address in 'The Garden'


10am

The Sprint begins! Those interested in doing some development discussed which issues they’d like to get involved with, then broke into teams and got to work. Again, I was delighted to see just how engaged all the first-time sprinters were; no-one was left confused or overwhelmed by the Sprint.


11am

A few of us broke off into a Case Study Workshop. Working in Marketing, I’m a big fan of a beautifully written case study, so we created a task force to review how we can encourage more members of the Drupal community to celebrate their work within the Drupal case study section. We used the AIDA model to break down the journey of writing a case study for developers and marketers. Then, we discussed the blockers and opportunities at each stage.


The case study workshop in full swing

 

Lunch!

Pizza and more pizza! After a busy morning, everyone had a break to eat pizza, play pool, and socialise. Thank you to Access for co-providing the pizza with us. There was also time for a quick group photo and an impromptu dance break, where a mini sprinter taught the developers how to do The Floss. Unfortunately, no future ‘Britain's Got Talent’ winners were discovered, but everyone definitely enjoyed themselves!

The Drupal Floss

 

1pm

Back to sprinting: the developers resumed their issue teams and the second non-technical activity took place. Paul Johnson took video interviews, detailing the origin stories of how the attendees got involved with Drupal in the first place. Members of the sprint group discussed how Drupal has changed their lives, something that Rakesh recently delved into on our blog. It was inspiring to hear the developments of personal stories and journeys with Drupal.


Post lunch sprinting

 

3pm

Before we knew it, the sprint was over! In summary: it was a brilliant day for technical and non-technical individuals alike. Afterwards a few of the group went for some celebratory drinks to soak up the success of the day.

What did we achieve?

There were a total of:

  • 16 patches applied
  • 5 patches successfully reviewed and tested by the community (RTBC)
  • 2 issues completely fixed.

Along with the open source contributions, we achieved:

  • A significant development into the accessibility of case study writing
  • The capture and documentation of the origin stories of multiple Drupal advocates

Special Thanks

Finally, I’d like to take some time to give special thanks to a few individuals on the day:

Our MVPs - Gemma Butler and Craig Perks

Gemma and Craig came down to keep the day running smoothly and it couldn’t have happened without them. From first aid to countless other essential roles, Gemma and Craig really made the day what it was and we couldn’t say thank you enough!

Rakesh James

Rakesh got the ball rolling for this sprint in the first place and was the driving force in helping it happen. Thank you Rakesh and hopefully this isn’t the last time you’ll be making something like this happen.

Phil Norton

Phil heads up the North West Drupal User Group and brought together the welcoming, successful group of multi-talented individuals who formed the sprint. So thank you Phil for such a great day!

And thank you to everyone else who attended:

  • Peter Jones
  • Des Fildes
  • Nigel White
  • James Davidson
  • Lesley Moreira
  • Tony Barket
  • Richard Sheppard
  • Phil Wolstenholme
  • Steve Hilton
  • Syed Huda
  • John Cook
  • Daniel Harper
  • Andrew Macpherson
  • Rachel Lawson (From The Drupal Association)
  • Craig Perks
  • Michael Trestianu
  • Paul Johnson
  • Andrew J

Interested in attending the next Drupal Sprint? Follow  and  on twitter to hear about the next one.


Apr 25 2018
Apr 25

Here are some random useful snippets for dealing with caches in Drupal 8, just because I keep having to dig them up from the API.

I’ll try to add more here as I go.

<?php

// Set an expiring cache item
\Drupal::cache()->set('cache_key', 'cache_data', $expiration_timestamp);

// Set a permanent cache item
\Drupal::cache()->set('cache_key', 'cache_data', CacheBackendInterface::CACHE_PERMANENT);

// Set a permanent cache item with tags.
\Drupal::cache()->set('cache_key', 'cache_data', CacheBackendInterface::CACHE_PERMANENT, array('tag_one', 'second_tag'));

// Fetch an item from the cache
$cache = \Drupal::cache()->get('cache_key');
if (!empty($cache->data)) {
  // Do something with $cache->data here.
}

// Invalidate a cache item
\Drupal::cache()->invalidate('cache_key');

// Invalidate multiple cache items
\Drupal::cache()->invalidateMultiple($array_of_cache_ids);

// Invalidate specific cache tags
use Drupal\Core\Cache\Cache;

Cache::invalidateTags(['config:block.block.YOURBLOCKID', 
  'config:YOURMODULE.YOURCONFIG', 'node:YOURNID']);

// Note that the invalidation functions also exist for deleting caches,
// by just replacing invalidate with delete.

// Flush the entire site cache.
drupal_flush_all_caches();

The end!

Apr 06 2018
Apr 06

In my humble opinion, as a Drupal developer, contributing back to the Drupal Community is something we should love to do.

Have you ever considered a Drupal with no Views module?

Or thought about a world where there is no Drupal at all? Just think of how much extra time you would be spending writing and fixing code for each individual project. Or how much more difficult it would be for a developer (or site builder) to finish a job on time!

Lucky for us, I hope that these days we have solved the issue of time-consuming development: the answer is open-source, the answer is Drupal. Thanks to collaborative contributions, Drupal is a quality, world-leading resource. I feel excited by the opportunity to get involved and contribute back to Drupal open-source projects, don’t you? The quantity of your contribution doesn’t matter; even your digital experience or expertise isn’t important. Big or small, all that matters is whether you are able to give something back or not.

Once willing to contribute, we all face the same question: how can I start my Drupal contribution?

The simple answer: check for the next Drupal sprint happening near you, add it to your calendar and get to the sprint! Once there, you can find mentors and, most importantly, ask questions! Some people might say: 'I am not writing code any more' or 'I am not a developer'. Yet they also ask:

But I am using Drupal, so is there a way I can contribute?

Well, there is plenty of room for you to get involved. Here are just some of the ways I am aware of:

  • Register on Drupal.org as a user
  • Confirm as a user on Drupal.org
  • Tell someone about Drupal - spread the word!
  • Join the Drupal Association
  • Attend a Drupal Association meeting
  • Improve documentation - even if that’s just correcting a spelling mistake
  • Marketing - write blog posts, articles, organise events
  • Write Case Studies - explain what Drupal can achieve
  • Follow and share Drupal's social media
  • Mentoring 
  • List someone as a mentor on your Drupal.org profile
  • Speak at Drupal events
  • Test module patches (bug fixes) and quality assurance
  • Report an issue
  • Report spam users on Drupal.org
  • Take and share Drupal-related photographs
  • Organise Drupal events, like Meetups, Sprints and Camps
  • Sponsor a venue for Drupal events
  • Host the reception desk at Drupal events
  • Help out in the session rooms
  • Fund or Sponsor Drupal events

Again, it’s not a matter of how we contribute to Drupal; what’s important is to ask yourself: 'Are we / Am I giving back to Drupal?' Over the past fifteen years, Drupal has celebrated 8 major releases, and the latest version is incomparable to the first. All of this is made possible by our many contributions. So whatever your contribution may be, it’s very important to Drupal.

Title image by pdjohnson on Flickr

Apr 05 2018
Apr 05

Drupal 8.5 was released on the 7th of March 2018 with a host of new features, bug fixes, and improvements. There are plenty of exciting updates for developers in this blog. Or if you're a business owner, click here to find out what this means for you.

Any projects using Drupal 8.4.x can and should update to Drupal 8.5 to continue receiving bug and security fixes. We recommend using composer to manage the codebase of your Drupal 8 projects.

For anyone still on Drupal 8.3.x or earlier, I recommend reading the Drupal 8.4.0 release notes, as Drupal 8.4 included major version updates for Symfony, jQuery, and jQuery UI, meaning it is no longer compatible with older versions of Drush.

One of the great things we noticed from the update was the sheer number of commits in the release notes.

Seeing all the different issues and contributors in the release notes is a good reminder that many small contributions add up to big results.

Dries Buytaert, Founder of Drupal

So what are the highlights of the Drupal 8.5 release?

Stable Releases Of Content Moderation And The Settings Tray Modules

One of the changes to the way Drupal is maintained is the new and improved release cycle and the adoption of semantic versioning. Major releases used to happen only once every couple of years; Drupal now uses a much shorter release cycle of 6 months for adding new features to core. New features are added as “experimental core modules” and can be tested, bug fixed, and eventually become part of Drupal core.

One example of the shorter release cycle is the BigPipe module. The module provides an implementation of Facebook’s BigPipe page rendering strategy, shortening the perceived page load speed of dynamic websites with non-cacheable content. This was an experimental module when Drupal 8.1 was released and became a part of Drupal core as a stable module in 8.2.

In Drupal 8.5 the BigPipe module is now enabled by default as a part of Drupal’s standard installation profile. BigPipe is actually the first new feature of Drupal 8 to progress from experimental to stable to being a part of a standard installation profile.

There are two exciting modules now stable in the update, they are:

  • Settings Tray
  • Content Moderation

Settings Tray is part of the “outside-in” initiative, which allows more content management tasks to be done in context without leaving the front end of the website, such as editing the order of the menu items in a menu block.

The Content Moderation module allows the site builder to define states in which content can be placed such as “draft”, “needs review” and to define user permissions necessary to move content between those states. This way you can have a large team of authors who can place documents into draft or needs review states, allowing only website editors with specific permissions to publish.

New Experimental Layout Builder

Sticking with experimental modules, Drupal 8.5 sees the introduction of a new experimental layout builder. This module provides the ability to edit the layouts of basic pages, articles and other entity types using the same “outside-in” user interface provided by the settings tray.

This allows site builders to edit the layout of fields on the actual page rather than having to use a separate form in the backend. Another feature is the ability to have a different layout on a per-page / item basis if you so wish, with the ability to revert back to the default if it doesn’t work for you. It is currently only a basic implementation with a long way to go, but it should improve significantly over the coming months and will hopefully see a stable release in Drupal 8.6.


The experimental layout builder in action 

PHP 7.2 Is Now Supported

This is the first version of Drupal to fully support the latest version of PHP. Support is not the only aspect of this release though: site owners are now warned if they try to install Drupal on a version of PHP less than 7.0, as those versions will no longer be supported by Drupal as of March 7, 2019.

Drupal 8.5 now also uses Symfony components 3.4.5, since Symfony 3.2 no longer receives security coverage. I expect Drupal 8 to remain on 3.4 releases until late 2021 or the end of Drupal 8's support lifetime (whichever comes first). Finally, PHPUnit now raises test failures on deprecated code.

Media Module In Core Improved And Now Visible To All Site Builders

Drupal 8.4 added a Media API into core which was based on all the hard work done on the contributed Media Entity Module. The media module provides “media types” (file, audio, video, and image) and allows content creators to upload and play audio and video files and list and re-use media. The core media module can be expanded by the installation of key contributed modules which add the ability to add externally hosted media types such as YouTube and Vimeo videos.

The module has been present in the codebase but was hidden from the module management interface due to user experience issues. These issues have now been taken care of and anyone who has access to the module management page can now enable the module.


New “Out of the Box” Demo Site

One of the key initiatives is the “out of the box experience”. The aim is to showcase what Drupal can do by providing a simple to install demo website (called Umami presently) with example content, configuration, and theme.

According to Drupal, the main goal of the demo site is:

To add sample content presented in a well-designed theme, presented as a food magazine. Using recipes and feature articles this example site will make Drupal look much better right from the start and help evaluators explore core Drupal concepts like content types, fields, blocks, views, taxonomy, etc.

The good news is that Drupal 8.5 now comes with the demo website available as an installation profile. The profile is “hidden” at the moment from the installation GUI but can be installed using the command line / drush.

The demo website still needs a lot of work but the groundwork is firmly in place and may become selectable as an installation profile for demonstration and evaluation purposes in a future release of Drupal 8.5.x. I recommend users not to use the Umami demo as the basis of a commercial project yet since no backward compatibility or upgrade paths are provided.

Migrate Architecture, Migrate Drupal and Migrate UI Modules are now Stable

This item almost deserves its own blog post, as it’s such a major milestone for Drupal: over 570 contributors worked on closing over 1,300 issues over a 4-year period. The Migrate system architecture is now considered fully stable, and developers can write migration paths without worrying about the stability of the underlying system.

The Migrate Drupal and Migrate UI modules (which are used for Drupal 6 and 7 migrations to Drupal 8) are also considered stable for upgrading sites which are not multilingual, with multilingual support still being heavily worked on.

There is also support for incremental migrations meaning that the website can be worked on while the content is still being added on the site being upgraded/migrated from.

More information can be found in the official migrations in Drupal 8.5.0 post.

Links to Drupal 8 User Guide

Now on a standard installation you are greeted with a welcome page and a link to the new and improved Drupal 8 User Guide. While only a small addition, we can see this as a major win as it will improve the evaluation experience for new users.

Future Drupal Development

There is currently a proposed change to the 6-month release cycle to reduce it to a 4-month cycle because, according to Drupal, "currently, given time for alpha, beta and rc releases, issues that narrowly miss the beta window have to wait eight months to get into a tagged release."

This will require 2 core branches to be supported at once and additional work for core committers. However, new features and bug fixes will be available sooner so it will be interesting to see what the outcome of the proposal is.

What Does This Mean For Business Owners?

You’ll need to ensure you’ve updated your site from Drupal 8.4.5 to 8.5.0 to continue receiving bug and security fixes, the next of which is scheduled for release on April 4th, 2018. If, however, you are on Drupal 8.3.x or below, we urge you to read the release notes for Drupal 8.4.0 as there were some major updates to consider. These include a jump from jQuery 2 to 3, which may have some backward compatibility issues affecting slideshows, carousels, lightboxes, accordions and other animated components.

Drupal 8.4 also dropped support for Internet Explorer 9 and 10, meaning any bugs that affect these browsers will no longer be fixed, and any workarounds for them have been removed in Drupal 8.5.

If your website is still on Drupal 7 then this is a good time to consider migrating to Drupal 8 as the hard work carried out on the migrate modules mentioned above will streamline the process of adopting the new platform.

If you have any questions about migrating your Drupal 7 website to Drupal 8 please let us know and we'll ensure one of our experts are on hand to help.

Get in touch

Important Dates

See https://www.drupal.org/core/release-cycle-overview for more:

  • 8.6.0 Feature Freeze: Week of July 18, 2018
  • 8.6.0 Release: September 5, 2018
  • 8.7.0 Feature Freeze: January 2019
  • 8.7.0 Release: March 19, 2019
Apr 03 2018
Apr 03

While everyone has a busy week attending DrupalCon sessions and events (be sure to check out Mediacurrent’s afterparty), if you find some extra time, Nashville has an eclectic mix of activities and places to go. Whether you're looking for great music in none other than "Music City" or a nice place to relax and grab a bite to eat, take advice from a Nashville native. When you're ready for a break from drupalin', check out these suggestions and immerse yourself in the Nashville culture.

Music

Downtown Nashville

Image source: Wikipedia 

Whether you enjoy country music or prefer other genres, Nashville offers something for every taste.  Some nights you might need to venture outside downtown for more rock and roll. If music is at the top of your Nashville bucket list, here are nine spots you won’t want to miss:  

Food 

Nashville Chicken

Source: Monell’s

There has been a huge number of new restaurants opening, but here are a couple of classics and a newish one:

  • Rotier’s Restaurant, the original Cheeseburger in Paradise? A Nashville classic and award winner, just be sure to get the burger on French bread.
  • Family style southern food at Monell's.  Dinner and breakfast are served to the table and passed around like a family holiday.
  • Hip Pinewood Social attracts visitors any time of day: breakfast and Crema coffee, a co-working spot during the day, and bowling on antique lanes in the evening.
  • Need Barbecue? Martin’s, Peg Leg Porker,  Edleys, or G’z BBQ are all good choices.
  • Restaurants of award winning chefs include Sean Brock's Husk from Charleston, Tandy Wilson's City House, and the Catbird Seat. This year's James Beard semifinalists include Henrietta Red, Bastion, Josephine, and longtime East Nashville restaurant Margot Café & Bar.
  • Nashville Hot Chicken is very popular with heat level choices for anyone. But pay heed if they warn you when ordering.

Don't forget about the famous Nashville Hot Chicken. A few favorites among many great spots:

  • Prince's Hot Chicken Shack. The original.
  • The Tenders Royale from Pepperfire is a nice introduction along with a couple of local drafts on tap, and blues music in the background.
  • Tenn Sixteen - a great East Nashville Five Points restaurant. The hot chicken comes in one heat level, kind of a "Nashville medium". That is, it's usually pretty hot, unlike at other restaurants that don’t specialize in hot chicken.
  • Fannie Mae's, which conveniently just opened up a new restaurant location near the convention center.
  • Another list of hot chicken spots can be found here


Museums

George Jones Museum

Source: George Jones Museum (Also known as the home of the Mediacurrent Afterparty!)

Nashville is rich with history and musical history is at no shortage. Most of these museums are an easy walk or bus ride downtown:

  • The Frist Center -  This art deco building was originally the post office. The current exhibition is the exclusive North American venue of Rome: City and Empire from the British Museum.
  • Country Music Hall of Fame and Museum - Across the street from the convention center, you can also check out Hatch Show Prints or tacos from Bajo Sexto.
  • Musicians Hall of Fame and Museum - This museum “honors the talented musicians who actually played on the greatest recordings of all time.” Additionally, The Rolling Stones' first ever major exhibition, Exhibitionism, is making its last U.S. stop, taking on Music City at the Musicians Hall of Fame and Museum.
  • Lane Motor Museum - An amazing variety of the largest European collection of cars in the U.S. located a few miles from the convention center.

Exercise

Warner Park

Source: Expedia

Jogging/Walking

  • The downtown Cumberland River Greenway connects to Bicentennial mall - This route can be varied for any distance. 
  • Another popular area for walking and jogging is to cross the Shelby Street Pedestrian Bridge to Cumberland Park and Nissan stadium. 

Hiking/Trail Running 

  • Warner Parks - Large wooded parks on the western boundary of Nashville with hills and views of the city.
  • B Cycle has bikes for rent by the hour with many locations to pick up or leave a bicycle.   


Family and Kids Activities


Miscellaneous

  • There is a free bus downtown to the Gulch or Farmer's Market and Germantown that has stops around the convention center.  Look for the Green Circuit.  This would be a good way to get to the AAA Nashville Sounds Baseball game in the evening.  
  • A couple of hints on street pronunciations beyond just a southern accent might help too:

       Demonbreun Street - Pronounced da-mun’-bree-un.
       Lafayette Street - Pronounced luh-fay’-ett. ( I know, I know) 


Hopefully everyone has a great experience in Nashville and comes back for a more leisurely visit. 

Mar 21 2018
Mar 21

Drupal Europe promises to be the most significant community-led conference in the history of Drupal on the continent. Redefining and reinvigorating what a major Drupal conference means to the community, business users, and agency leaders, it will be an exciting and rewarding experience.


Darmstadtium, the venue for DrupalEurope. Image cred: DrupalEurope

Right now it is vital that as many people as possible show support for the event by buying a conference ticket early. Doing so will help the volunteer team focus on providing a vibrant and rich programme rather than worry about finances.

Please join me in buying your ticket without delay, and help the organising team by encouraging your peers to do the same at meetups, via email, and on social media.

Early supporter tickets are available until March 26th (6 pm CEST). Buy your ticket now, everyone else is!

UPDATE - Early Supporter Tickets are no longer available, but you can still purchase tickets through the link below.

Get Tickets

 

 

Drupal Europe Tweet Dries

Thank you for supporting the Drupal community and see you in Darmstadt! 

Paul Johnson, Drupal Director at CTI Digital.

Mar 16 2018
Mar 16

In our first post announcing the new Mediacurrent redesign, we looked at the evolution of Mediacurrent.com over the years and talked through the overall goals of the relaunch. Now let’s take a look under the hood to see some of the cool stuff we did and discuss what our development team learned along the way.

Let’s talk architecture

Now for the fun part, the technical architecture of the new website. First, the backend was upgraded from Drupal 7 to Drupal 8 - that will probably not be a huge shock to anyone. The more interesting aspect of this build is that we have now implemented a fully decoupled frontend. We accomplished this using a static generator called Jekyll which has been integrated with the Drupal backend. More on that in a bit. First let’s answer the question, “Why decoupled?”

Why decoupled?

A decoupled architecture provides flexibility for constant evolution, opening the door to a variety of potential programming languages and design philosophies to accomplish your website goals. There are any number of articles that discuss the benefits of moving to a decoupled approach. For this post, I want to focus specifically on the points that were deciding factors for our team.

Security

While we do have full confidence in the security features that Drupal offers, we have to acknowledge that a static public site does offer some advantages that make securing the application easier. First of all, we have the option to make the backend CMS completely walled off from the public site. It’s not a hard requirement that the Drupal admin is made publicly accessible. Second, there are simply fewer vulnerabilities that a static frontend will be susceptible to in comparison to a full PHP application. For example, it’s harder to DDOS a site serving only HTML/CSS/JS and there is no server side code running that could be hijacked by an SQL injection attack.

Performance

Decoupled sites often have a performance boost over a fully Drupal-rendered site because the frontend is more custom and lightweight. This is certainly true in our case. The static frontend requires no processing at the time of request so the page is served up immediately with no server-side execution required.

Hosting

One of the things we liked about this particular solution was that it made the hosting architecture pretty simple and inexpensive. With only editors logging into the CMS and the static site being served by Gitlab, we were able to have a fast, reliable stack up and running relatively easily. Up-time is great in that you aren’t as vulnerable to a production error or traffic spike bringing the site down. That being said, all platforms are subject to downtime each year.

Eating our own dog food

As many other agencies will attest to, when you work on your own website it’s a good chance to try something different! We looked at what some competitors had done and we wanted to try an approach that would be a good fit for our needs without overcomplicating the end solution. This endeavor was a way to take some risks and learn along the way.

Dividing the work

The great thing about decoupling is that you break apart the work that needs to get done. The frontend team can focus on the frontend stuff without being tied too much to the backend work (although there will always be some overlap). Our agency spends a lot of our day delivering solutions to our clients so being able to break apart some of the work streams was an advantage. We like that in the future we don’t necessarily need to do a big redesign and Drupal upgrade at the same time. With a decoupled approach, we have the flexibility to tackle each separately.

Technical Overview

Now that you have seen the “Why” behind this approach, let’s look at the “How”. We have kept our Drupal CMS in Bitbucket, which gets deployed to a Pantheon server. That piece is still the same as it’s been for many years. The new wrinkle is that the public frontend is served on GitLab Pages. If you haven’t heard of GitHub Pages (which run on Jekyll), GitHub, GitLab and many other services allow you to host Jekyll source files which they auto-compile into HTML pages and host for you for free or cheap. Pretty neat, huh? We ended up going with GitLab Pages because GitLab allows more build customizations than GitHub. We have also looked at potentially using Netlify in the future as the host for our Jekyll files.

The question you might be asking is: how does Drupal content make its way to GitLab? Put simply, we translate node content to markdown and push it to the GitLab API on every node save. For user files, we actually still use Drupal uploads and reference the path within the markdown files. If you are familiar with markdown files, these are the “content” files that Jekyll compiles into pages. The diagram below illustrates the basic flow.

Illustration of how Drupal content makes its way to GitLab

The concept is pretty simple: have Drupal manage your content, write to Jekyll markdown files and deploy those files to a static host.
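To make that flow concrete, here is a minimal, self-contained sketch of the translation step. The field names (`title`, `created`, `body`) and front matter keys are illustrative assumptions rather than our actual content model; on the real site this logic would run from a hook on node save before pushing to the GitLab API.

```php
<?php

// Sketch of the Drupal-to-Jekyll translation step: turn a node's
// fields into a markdown file with YAML front matter that Jekyll
// can compile into a page. Field names here are hypothetical.
function node_to_jekyll_markdown(array $node): string {
  // Jekyll reads page metadata from a YAML front matter block.
  $front_matter = [
    'title' => $node['title'],
    'date' => gmdate('Y-m-d', $node['created']),
    'layout' => 'post',
  ];
  $yaml = '';
  foreach ($front_matter as $key => $value) {
    $yaml .= $key . ': "' . addslashes($value) . '"' . "\n";
  }
  // The body is already markdown; Jekyll compiles it to HTML at build time.
  return "---\n" . $yaml . "---\n\n" . $node['body'];
}

echo node_to_jekyll_markdown([
  'title' => 'Hello world',
  'created' => 1521072000,
  'body' => 'Some **markdown** body text.',
]);
```

Running this against the sample node prints a front matter block followed by the markdown body, which is the general shape of file Jekyll expects for a post.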

Why not [Insert favorite Node framework here]?

You might be saying, that's all well and good but why go with a static generator over a server-rendered JavaScript framework like Next.js or Nuxt.js?

The short answer is that we reviewed several options and concluded there wasn’t a framework we felt was a fit at the time we were planning development (around mid-to-late 2016). Going with a JavaScript-only framework was ruled out for SEO reasons, and the options for isomorphic (server + client-side JS) frameworks weren’t as stable as we would have liked. Jekyll was the most popular static framework (and still is) with a variety of plugins we could utilize. After doing some POCs we opted for Jekyll in order to keep the page rendering lean, simple and speedy. The overall complexity of the integration was also a deciding factor in choosing Jekyll over other options.

Trade-offs

One of the fun challenges with a static-only site is that you need to learn how to get around the lack of server-side rendering. The files are of course static, so if you want anything dynamic-looking you are limited to mostly JavaScript-based solutions. A couple of quick examples are forms and the site search. For forms, we were already using Pardot-hosted forms for marketing automation, so that wasn’t a big tradeoff. For site search, we went with a pretty cool solution that leverages https://lunrjs.com/ to handle searching the site. Essentially we have Drupal push an index file that Lunr.js can search against using only JavaScript in the browser. For RSS, we still generate the RSS feed from Drupal and push it to GitLab.
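As an illustration of the Drupal side of that search, here is a minimal sketch that builds the kind of JSON index file Lunr.js can consume in the browser. The field names and document shape are assumptions for the example, not our production index format.

```php
<?php

// Sketch of the server side of a Lunr.js search: Drupal exports a JSON
// index of its content that the static frontend fetches and feeds to
// Lunr entirely client-side. Field names here are hypothetical.
function build_search_index(array $nodes): string {
  $documents = [];
  foreach ($nodes as $node) {
    $documents[] = [
      // Lunr uses this as the document reference returned by a search.
      'id' => $node['path'],
      'title' => $node['title'],
      // Index plain text only, not the rendered HTML.
      'body' => strip_tags($node['body']),
    ];
  }
  return json_encode($documents, JSON_PRETTY_PRINT);
}

$index = build_search_index([
  ['path' => '/blog/hello', 'title' => 'Hello', 'body' => '<p>Hi there</p>'],
]);
echo $index;
```

The frontend would fetch this JSON, add each document to a Lunr index, and run searches in the browser with no server-side code at all.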

Lessons learned and looking forward

Now that we have the shiny new website up and running, it’s time to look ahead. Would we recommend taking this exact approach in 2018? I would say not exactly. While we are still intrigued by the power of static generators serving as a frontend, we think something like Gatsby.js (Node/React-based) might have more upside than Jekyll. Further, we aren’t sold on this type of static-only setup being able to scale in comparison to Node-hosted solutions. The options for server-rendered JavaScript frameworks increase by the day and many have matured over the last few years. Mediacurrent.com will continue to be our place to try new approaches and share with you everything we’ve learned along the way. Thanks for joining us in this journey and enjoy the new site!

Additional Resources
The 3 C’s and 1 D of Drupal: Why Decoupled Matters | Mediacurrent Blog
Relearning Accessibility for a Decoupled Front End | Mediacurrent Blog
4 Benefits of Decoupled Architecture for Enterprise Marketers | Mediacurrent Blog

Mar 15 2018
Mar 15

Since joining Mediacurrent in 2009, I've seen firsthand how our company has grown and evolved, and how our website has mirrored those changes. Today, I am thrilled to announce the launch of our newly redesigned website, mediacurrent.com.

We’ve come a long way since 2007!

screenshots of the Mediacurrent.com homepage from 2007 to present

2017 marked the 10 year anniversary of Mediacurrent. It’s been an amazing journey from Drupal firm to a full-service digital agency. Along with this journey, we’ve expanded and redefined our services to meet the changing needs of our clients.

Explore the site to learn more about Mediacurrent’s development, design, and strategy services.

Built on Drupal 8 with a decoupled approach, the new site reflects a clear vision of our core focus:

Open source development, design, and strategy that grows your digital ROI

Celebrating Our Success

I am incredibly proud of the Mediacurrent team for the persistence and hard work that went into building our new website. It’s this teamwork that has fueled award-winning sites for weather.com, travelport.com, careaction.org, and many others. See how we do it.

group shot of the Mediacurrent team

Some of the MC team at a recent code sprint retreat in Los Angeles

Sharing Our Resources

In our very first company blog post circa 2009, we shared an ambitious goal — we hope the Mediacurrent staff can share pertinent content, and become a trusted advisor when it comes to your web related issues —and we got there! From blog posts and tutorials to videos and podcasts, our new site makes it easier to navigate a great depth of thought leadership content by the Mediacurrent team.

Expressing Our Culture

Mediacurrent is in an exciting period of growth, and we're evolving our team to keep pace with all of the work that lies ahead.

Our culture is at the core of everything we do and one of the biggest drivers behind our redesign was to tell that story. There are lots of ways to do this on the new Mediacurrent.com: meet our team, see how we give back to the Drupal community, and explore career opportunities.

Enjoy the new Mediacurrent.com site, and we welcome your feedback!

Mar 12 2018
Mar 12

Last week I was able to attend Drupalcamp London and present a session called “Drupal 101”. The session was about how everyone is welcome in the Drupal community, irrespective of who you are. At Drupalcamp London I met people from all walks of life whose lives had been changed by Drupal. I caught up with a friend called Ryan Szrama, who is a perfect example of my message; he gave a brilliant speech at Drupalcamp about “doing well by doing good”, so I’d like to share his story with you.

Ryan Szrama


Ryan giving his talk at Drupalcamp London. Photo Cred: pdjohnson

Ryan’s talk kick-started Drupalcamp London on a great note. He told the story of his amazing journey with Drupal. When he began his career with Drupal 12 years ago, Ryan was short-haired and beardless. He was fresh out of Bible college where he had studied theology, a far cry from computer science. However, before and during college, Ryan maintained a hobby of hacking on MUDs and making computer games with his brother. When he wasn’t playing with computers, Ryan dreamt of helping others. With that motivation, he packed up all of his belongings and moved to a neighborhood known for crime and harsh living conditions. He lived there for 8 years, but for all of his hard work, he felt like he hadn’t made much difference. After leaving the neighborhood he only knew of 2 people who had been able to move away for a better future.

In 2006, having recently discovered Drupal, Ryan faced an error whilst trying to download an e-commerce module. Straight away he went to drupal.org and unaware of the CVS system, posted his first support request. Just 34 minutes later, a stranger resolved Ryan’s problem. This fast exchange of knowledge amazed him.

A day later, another user asked the same question on Drupal.org. Ryan knew the answer and helped the stranger, just as he had been helped the day before. Suddenly Ryan realised that he could use the internet to help other people. By teaching them how to use this accessible software, he could give someone the tools to develop their career and support their family. By contributing to the Drupal open source project, he finally found he was impacting other people’s lives. That is how Ryan started his Drupal life, which eventually led him to work on Ubercart on Drupal 5 whilst working as a developer at osCommerce. Now, on a daily basis, he helps people use Drupal Commerce and still answers Ubercart questions.

Ryan continued by talking about Drupal Commerce. He told us how contributing to Drupal has impacted his family life, daily routines, and dedications, and even how he takes care of his employees, colleagues, and clients. His final summary really touched me, particularly this sentiment:

People before computers - relationships are worth more than dollars

Ryan Szrama, CEO Commerce Guys

Find Ryan on Twitter.

My own story


Me giving my session on Drupal 101 at Drupalcamp London. Photo Cred: pdjohnson

That fantastic keynote set the vibe for the rest of the camp. Right after the keynote, I had to run to my session room. For me, it was a dream-come-true moment. Like Ryan, Drupal has changed my life for the better, so I’d like to share my story as well.

Giving this talk was a moment I had been waiting for since my childhood in Kerala, India. The variety of culture there means Kerala is often known as “God's own country”. Since primary school, we had studied Indian and British history. I heard stories that “in the British Empire, the sun never sets”, which amazed me in my childhood dreams, so I had always wanted to move to England, where I could become an expert in my field.

In 2009, still in India, I started my Drupal life. I began installing Drupal for the first time but was hit with a big error. Fortunately, I knew there was a community around Drupal so I went directly to Drupal.org asking for help.


My first ever question on Drupal.org

A few minutes later, I got a reply from the other side of the world from a developer in the United States called Steve Ringwood. This showed me how amazing Drupal could be. So since then, I’ve worked with Drupal and never looked back. I worked in India for 8 years and trained over 600 other Drupal developers. Then finally, on the 27th January 2018, one of my childhood dreams came true. I flew over to England to join CTI Digital as a Drupal developer.

Over the past few years, I have been fortunate enough to work with a lot of people in the Drupal community. Each has somehow directly or indirectly helped, guided, and inspired me to grow in my career.


Some of the amazing people in Drupal who have helped to change my life

So personally, I thank God for Drupal and the people who made it possible, like Dries, every day of my life. If Drupal didn’t exist, I may have ended up in an unfulfilling career, not doing what I love. Drupal as a technology impacts human lives every day, through the websites it makes possible, like War Child UK, and through the people in its community. After hearing from Ryan Szrama at Drupalcamp London, it’s evident Drupal is impacting more people’s lives than ever before.

If Drupal has impacted your life too, tweet me your own stories.

 

Resources

If you'd like to know more about Drupal, here are some resources.

My Slides - Drupal 101

Image Credits

Mar 06 2018
Mar 06

During the CXO day at Drupalcamp London, Dave O’Carroll the Head of Digital at War Child delivered a compelling speech on how Drupal has aided their mission in supporting the future and well-being of children living in some of the world’s most dangerous war zones.

When War Child UK began to feel their website could no longer facilitate their day-to-day needs, they began to consider a Drupal rebuild or even an alternative technology. The existing platform handled images poorly and so couldn’t reflect their work on the ground in its true light. The site’s lack of responsiveness was also a major issue.

After conducting research and consulting with peers, War Child UK came to the conclusion that Drupal still remained far above the rest in aiding the charity to continue their work and simply needed an update to meet their evolving needs.


When the time came for us to replace our website we were open to using different systems. But it soon became obvious that Drupal would remain the right choice

Dave O'Carroll

When making the decision to stay with Drupal, 4 key areas were turning points in confirming their decision.

1. Compatibility

War Child UK are acutely aware of the world of software solutions out there. Despite the natural desire to focus on an aesthetically pleasing website, the website’s ability to seamlessly take on integrations like MailChimp, Stripe, and Salesforce was deemed essential. As most of these software APIs and plugins are Drupal friendly, sticking with Drupal in this regard was a no-brainer.

The team at War Child UK dedicate themselves to changing the lives of children and spending as much time and money out on the field as possible. Being a charity, they also have to provide a great deal of accountability on where their money comes from and where it goes, so investment in digital can be incredibly difficult to justify. But by using Drupal, its compatible nature means the charity can spend more resources on helping children, not conducting systems integrations.

Having done this many times before, I knew the best websites are the ones that play nice with the other children - they integrate well.

Dave O'Carroll

2. Ease of use

War Child needed to give content creators the independence to upload their own stories so their messages could be told from the heart, not diluted by multiple teams. If they could train staff to upload content directly, War Child’s work could be projected in near real time.

Dave explained that, with previous experience of Wordpress and Squarespace at other charities, he had found that staff would receive training but come back repeatedly to clarify how to perform daily tasks. The simple, intuitive administration screens we configured for War Child meant that, with Drupal, staff needed to be shown just once. This saves War Child time, and time is money.

Our HR team, who don’t spring to mind as digital experts, are able to manage their own site section. It’s great they are able to have a degree of freedom. 

Dave O'Carroll

 

3. Support

The flexibility of Drupal provides support for all of War Child’s goals. War Child needs to be more flexible and creative to stand alongside larger charities with far bigger communications teams and marketing resources. The vast community surrounding Drupal means that no matter how improbable an idea appears to be, the community always manages to surface gems that make the idea a reality.


With a big fat creative idea, there always seems to be a way to do it with Drupal

Dave O'Carroll


4. Future Proofing

What if I get hit by a bus? A concerning idea, but something that applies to War Child UK immensely. With thousands of children relying on the charity, they can't afford to not plan for the ‘what ifs’. Drupal's intuitive CMS already makes it easy to pick up where the last person left off. We crafted a solution to take this capability further and built a system to the best possible standards. This stronger governance means if War Child ever need to move agencies, replace key team members or work with freelancers the continuity will still be there and save them time and money, allowing War Child to focus on their mission.

Conclusion

All too often children are portrayed as the collateral damage of war. War Child wanted their site to portray a different story and so we implemented designs that placed children at the heart of the new website, you can read the full website case study here. The new platform allows War Child to overcome past restraints and think outside the box for future campaigns. We look forward to continuing to help those at War Child to support children in new innovative ways for years to come. One of their recent campaigns ‘Robot’ has been particularly moving, please watch the video below.

 

[embedded content]

 

Visit the war child website

 

Feb 01 2018
Feb 01

Paragraphs is a powerful Drupal module that gives editors more flexibility in how they design and lay out the content of their pages. However, Paragraphs are special in that they make no sense without a host entity: if we talk about Paragraphs, it goes without saying that they are attached to other entities.
In Drupal 8, individual migrations are built around an entity type. That means we implement a single migration for each entity type. Sometimes we draw relationships between the element being imported and an already imported one of a different type, but we never handle the migration of both simultaneously.
Migrating Paragraphs needs to be done in at least two steps: 1) migrating entities of type Paragraph, and 2) migrating entities referencing imported Paragraph entities.

Migration of Paragraph entities

You can migrate Paragraph entities in a way very similar to the way of migrating every other entity type into Drupal 8. However, a very important caveat is making sure to use the right destination plugin, provided by the Entity Reference Revisions module:

destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type

This is critical because you might be tempted to use something more common like entity:paragraph, which would seem to make sense given that Paragraphs are entities. However, you didn’t configure your Paragraph reference field as a conventional Entity Reference field, but as an Entity reference revisions field, so you need the matching destination plugin.

An example of the core of a migration of Paragraph entities:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'feed.url/endpoint'
  ids:
    id:
      type: integer
  item_selector: '/elements'
  fields:
    - name: id
      label: Id
      selector: /element_id
    - name: content
      label: Content
      selector: /element_content
process:
  field_paragraph_type_content/value: content
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type
migration_dependencies: { }

To give some context, this assumes the feed being consumed has a root level with an elements array filled with content arrays with properties like element_id and element_content, and we want to convert those content arrays into Paragraphs of type paragraph_type in Drupal, with the field_paragraph_type_content field storing the text that came from the element_content property.
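A sketch of what such a feed payload might look like (hypothetical values; the property names match the selectors above):

```json
{
  "elements": [
    {
      "element_id": 1,
      "element_content": "<p>First paragraph body text.</p>"
    },
    {
      "element_id": 2,
      "element_content": "<p>Second paragraph body text.</p>"
    }
  ]
}
```

The item_selector '/elements' points the parser at the array, and each item’s element_id and element_content are extracted by the field selectors.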

Migration of the host entity type

Having imported the Paragraph entities already, we then need to import the host entities, attaching the appropriate Paragraphs to each one’s field_paragraph_type_content field. Typically this is accomplished by using the migration_lookup process plugin (formerly migration).

Every time an entity is imported, a row is created in the mapping table for that migration, with both the ID the entity has in the external source and the internal one it got after being imported. This way the migration keeps a correlation between both states of the data, for updating and other purposes.
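As an illustration, for a migration with machine name schools this correlation is kept in a database table named migrate_map_schools (standard Migrate API column naming; the table name depends on your migration ID):

```sql
-- The sourceidN columns hold the external IDs, the destidN columns the
-- IDs assigned by Drupal on import. For a simple node migration:
SELECT sourceid1, destid1 FROM migrate_map_schools;
```

For revisioned destinations such as Paragraphs, the map row also carries a destid2 column for the revision ID, which is why the lookup result is an array rather than a single integer.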

The migration_lookup plugin takes an ID from an external source and tries to find an internal entity whose ID is linked to the external one in the mapping table, returning its internal ID if found. The entity reference field is then populated with that ID, effectively establishing a link between the entities on the Drupal side.

In the example below, the migration_lookup returns entity IDs and creates references to other Drupal entities through the field_event_schools field:

field_event_schools:
  plugin: iterator
  source: event_school
  process:
    target_id:
      plugin: migration_lookup
      migration: schools
      source: school_id

However, while references to nodes or terms basically consist of the ID of the referenced entity, when using the entity_reference_revisions destination plugin (as we did to import the Paragraph entities), two IDs are stored per entity: the entity ID and the entity revision ID. That means the migration_lookup plugin returns not a single integer, but an array of two.

process:
  field_paragraph_type_content:
    plugin: iterator
    source: elements
    process:
      temporary_ids:
        plugin: migration_lookup
        migration: paragraphs_migration
        source: element_id
      target_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 0
      target_revision_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 1

So instead of returning the array directly (which obviously wouldn’t work), we pass it through the extract process plugin to get the integer IDs needed to create an effective reference.

Summary

In summary, it’s important to remember that migrating Paragraphs is a two-step process at minimum. First, you must migrate entities of type Paragraph. Then you must migrate entities referencing those imported Paragraph entities.

More on Drupal 8

Top 5 Reasons to Migrate Your Site to Drupal 8

Creating your Emulsify 2.0 Starter Kit with Drush

Web Chef Joel Travieso
Joel Travieso

Joel focuses on the backend and architecture of web projects seeking to constantly improve by considering the latest developments of the art.

Web Chef Dev Experts
Development

Blog posts about backend engineering, frontend code work, programming tricks and tips, systems architecture, apps, APIs, microservices, and the technical side of Four Kitchens.

Read more Development
Jan 18 2018
Jan 18

During the redesign process of a website, there are many small changes that can ultimately affect the traffic of the new site. The key is to identify any changes that might break SEO, or changes that might affect the way the site looks to search engine spiders ahead of time to avoid traffic drops. In the end, we want the site to look fresh and new while still getting the same traffic, or more, as the old design. 

At Zivtech, we look at many factors in the planning phase of a website redesign project and try to identify those that could cause drops in traffic after the new design is launched. Once these have been identified, we ensure all of these tasks have been completed before launch. Let’s take a look at some of these factors and how to avoid traffic drops on your next website redesign project.

Meta Tags

We typically build sites with Drupal, so the Metatag module handles much of the meta tag configuration and display on the site. If you aren’t using Drupal though, there could be some changes to your front-end design that could affect your meta tags and confuse search engine spiders. You’ll need to make sure that all of your pages have meta tags and that there aren’t any duplicates. 
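As a hedged illustration (the URLs and copy are invented), each page should carry a unique set of tags along these lines, with no two pages sharing the same title or description:

```html
<head>
  <title>Example Page Title | Example Site</title>
  <meta name="description" content="A unique, page-specific summary of this content.">
  <link rel="canonical" href="https://example.com/example-page">
</head>
```

Crawling the new design before launch and diffing these values against the old site is a quick way to catch missing or duplicated tags.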

Broken Links

Broken links are a huge problem during website redesigns. This could be a result of changes in the menu structure or in path structures for content types. Broken links mean that users and search engines can’t find the pages they’re looking for, which can really wreak havoc on your site traffic statistics. 

To avoid broken links in Drupal, we can use the Link checker module, but there are also third party tools that can be used for non-Drupal sites. Google Search Console provides some additional tools to identify broken links and 404 pages too.


URL Redirects

Broken redirects or missing redirects to new URLs are also a big problem on site redesigns. These typically happen due to changes in URL patterns or menu structures. The Redirect module in Drupal provides an interface to add redirects for your pages without any coding experience. Non-Drupal sites can use .htaccess files or redirect statements in their web server configuration to ensure that all URLs that are changing have proper 301 redirects.
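For the .htaccess approach on an Apache server, minimal 301 rules might look like this (the paths are purely illustrative):

```apache
# Permanently redirect old paths to their new equivalents.
Redirect 301 /old-about-page /about
Redirect 301 /news/old-article-path /blog/new-article-path
```

A 301 (permanent) status tells search engines to transfer ranking signals to the new URL, which a temporary 302 would not.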

XML Sitemap

As URLs are changed on your site during a redesign, you’ll want to ensure that the XML sitemap has updated URLs that match the new ones. The XML sitemap module handles this for us on a Drupal site. 

If you aren’t running Drupal, a plugin for your CMS may handle this, or you’ll need to generate a new sitemap using third party tools. Once this has been completed, you can log in to Google Search Console and resubmit your sitemap for indexing.
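Whichever tool generates it, the result should be a file following the sitemaps.org protocol, roughly like this (URLs and dates are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2018-01-18</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/new-article-path</loc>
  </url>
</urlset>
```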

Google Analytics

If you forget to place your Google Analytics tracking code in your new site’s markup before launch, you can end up in the dark when it comes to traffic fluctuations. The Google Analytics module handles the placement of this tracking code on a Drupal website, and even provides a status warning on the Drupal status page if the tracking ID has not been configured yet. 

Those who aren’t using Drupal should follow the instructions provided by Google Analytics to place the code snippet in their site’s markup, or use a plugin provided by their CMS of choice. With the Google Analytics tracking code in place, your organization can get a much better overview of how your site performs after the redesign launches. It’s much easier to track your successes or failures in your redesign if you were already running Google Analytics, but a relaunch is a great time to start using it too. 

While the factors in this post are some of the most important that we look at during a site redesign project at Zivtech, each project is unique and could require additional changes to your site to ensure you avoid traffic drops after launching your new design. 

Overall, you want to identify any changes that could affect URLs, meta data, and even content structure that search engine spiders or your visitors might be confused about. Even small changes or a missing meta tag can affect your search engine rankings, which can lead to traffic drops. Do your future self a favor and make a list of the individual factors that could affect your site. Then ensure that list is completed before calling your next website redesign a success.
 

Jan 12 2018
Jan 12

Drupalcamp 2018

Drupalcamp London returns in March for its 6th year. As the largest Drupal camp in Europe, it provides a scale of high-quality knowledge I rarely see elsewhere. If you’re new to Drupalcons and camps, Drupalcamp London is a three-day knowledge-sharing conference that attracts over 600 Drupal stakeholders, including developers, agencies, business owners, and end users.

As a Drupal development agency that has contributed to the software for the past 15 years, we’re always looking to support the growth of the community. One major investment has been sponsoring Drupalcamp London for the past six years. I’m a big supporter of the London event, and if anything sums up the weekend, I believe this quote from Paul Johnson does the job.


I genuinely think Drupalcamp London is amongst the best Drupal events in the world. The breadth of topics covered is unparalleled and the high quality of speakers is a real draw.

Paul Johnson, Drupal Director at CTI Digital.

  

CXO Day

This year I’m set to attend the CXO day on Friday 2nd March, a day that focuses on what Drupal means to its end users. It’s a rare opportunity to learn from the experiences of others and to hear how business leaders have been utilising Drupal and Open Source technology over the past year. The event is attended by a variety of end users, both new to and familiar with Drupal, leaders from digital agencies, and the wider Drupal business community. The networking spaces will also be a valuable addition to the day, and an area I will be frequenting.

At the CXO day our client, Dave O’Carroll, Head of Digital at War Child UK will be discussing what CTI's rebuild of the charity’s website has done for their end user experience and ability to increase their impact across the digital sphere.

Who’s attending?

Our Drupal Director and Evangelist, Paul Johnson and I will be attending. It will be an extremely busy day so if you would like to meet please do get in contact. We'll be glad to share our knowledge of Drupal and discuss our experiences working with London.gov and RCOT.

Drupal newbies and veterans will also be attending over the weekend, along with agencies and businesses invested in the world of Drupal. The organisers conducted an interesting survey of last year’s attendees; as you can see below, the majority attend to learn and share their knowledge of Drupal.

CXO Drupalcamp attendees

Drupalcamp study into reason for attending Drupalcamp 2017

 

The CXO day always sells out quickly; visit the website now to find out more and register to attend. See you there.

Register Now 

 

Dec 20 2017
Dec 20

One of the most common requests we get regarding Emulsify is to show concrete examples of components. There is a lot of conceptual material out there on the benefits of component-driven development in Drupal 8: storing markup, CSS, and JavaScript together using some organizational pattern (à la Atomic Design), automating the creation of style guides (e.g., using Pattern Lab), and using Twig’s include, extends, and embed functions to work those patterns into Drupal seamlessly. If you’re reading this article you’re likely already sold on the concept. It’s time for a concrete example!

In this tutorial, we’ll build a full site header containing a logo, a search form, and a menu – here’s the code if you’d like to follow along. We will use Emulsify, so pieces of this may be specific to Emulsify and we will try and note those where necessary. Otherwise, this example could, in theory, be extended to any Drupal 8 project using component-driven development.

Planning Your Component

The first step in component-driven development is planning. In fact, this may be the definitive phase in component-driven development. In order to build reusable systems, you have to break down the design into logical, reusable building blocks. In our case, we have 3 distinct components—what we would call in Atomic Design “molecules”—a logo, a search form, and a menu. In most component-driven development systems you would have a more granular level as well (“atoms” in Atomic Design). Emulsify ships with pre-built and highly flexible atoms for links, images, forms, and lists (and much more). This allows us to jump directly into project-specific molecules.

So, what is our plan? We are going to first create a molecule for each component, making use of the atoms listed above wherever possible. Then, we will build an organism for the larger site header component. On the Drupal side, we will map our logo component to the Site Branding block, the search form to the default Drupal search form block, the menu to the Main Navigation block and the site header to the header region template. Now that we have a plan, let’s get started on our first component—the logo.

The Logo Molecule

Emulsify automatically provides us with everything we need to print a logo – see components/_patterns/01-atoms/04-images/00-image/image.twig. Although it is an image atom, it has an optional img_url variable that will wrap the image in a link if present. So, in this case, we don’t even have to create the logo component. We merely need a variant of the image component, which is easy to do in Pattern Lab by duplicating components/_patterns/01-atoms/04-images/00-image/image.yml and renaming it as components/_patterns/01-atoms/04-images/00-image/image~logo.yml (see Pattern Lab documentation).

Next, we change the variables in the image~logo.yml as needed and add a new image_link_base_class variable, naming it whatever we like for styling purposes. For those who are working in a new installation of Emulsify alongside this tutorial, you will notice this file already exists! Emulsify ships with a ready-made logo component. This means we can immediately jump into mapping our new logo component in Drupal.

Connecting the Logo Component to Drupal

Although you could just write static markup for the logo, let’s use the branding block in Drupal (the block that supplies the theme logo or one uploaded via the Appearance Settings page). These instructions assume you have a local Drupal development environment complete with Twig debugging enabled. Add the Site Branding block to your header region in the Drupal administrative UI to see your branding block on your page. Inspect the element to find the template file in play.

In our case there are two templates—the outer site branding block file and the inner image file. It is best to use the file that contains the most relevant information for your component. Seeing as we need variables like image alt and image src to map to our component, the most relevant file would be the image file itself. Since Emulsify uses Stable as a base theme, let’s check there first for a template file to use. Stable uses core/themes/stable/templates/field/image.html.twig to print images, so we copy that file down to its matching directory in Emulsify creating templates/fields/image.html.twig (this is the template for all image fields, so you may have to be more specific with this filename). Any time you add a new template file, clear the cache registry to make sure that Drupal recognizes the new file. Now the goal in component-driven development is to have markup in components that simply maps to Drupal templates, so let’s replace the default contents of the image.html.twig file above ( <img{{ attributes }}> ) with the following:

{% include "@atoms/04-images/00-image/image.twig" with {
  img_url: "/",
  img_src: attributes.src,
  img_alt: attributes.alt,
  image_blockname: "logo",
  image_link_base_class: "logo",
} %}

We’re using the Twig include statement to use our markup from our original component and pass a mixture of static (url, BEM classes) and dynamic (img alt and src) content to the component. To figure out what Drupal variables to use for dynamic content, see first the “Available variables” section at the top of the Drupal Twig file you’re using and then use the Devel module and the kint function to debug the variables themselves. Also, if you’re new to seeing the BEM class variables (Emulsify-specific), see our recent post on why/how we use these variables (and the BEM function) to pass in BEM classes to Pattern Lab and the Drupal Attributes object. Basically, this include statement above will print out:

<a class="logo" href="/">
  <img class="logo__img" src="/themes/emulsify/logo.svg" alt="Home">
</a>

We should now see our branding block using our custom component markup! Let’s move on to the next molecule—the search form.

The Search Form Molecule

Component-driven development, particularly the division of components into controlled, separate atomic units, is not always perfect. But the beauty of Pattern Lab (and Emulsify) is that there is a lot of flexibility in how you markup a component. If the ideal approach of using a Twig function to include other smaller elements isn’t possible (or is too time consuming), simply write custom HTML for the component as needed for the situation! One area where we lean into this flexibility is in dealing with Drupal’s form markup. Let’s take a look at how you could handle the search block. First, let’s create a form molecule in Pattern Lab.

Form Wrapper

Create a directory in components/_patterns/02-molecules entitled “search-form” with a search-form.twig file with the following contents (markup tweaked from core/themes/stable/templates/form/form.html.twig):

<form {{ bem('search') }}>
  {% if children %}
    {{ children }}
  {% else %}
    <div class="search__item">
      <input title="Enter the terms you wish to search for." size="15" maxlength="128" class="form-search">
    </div>
    <div class="form-actions">
      <input type="submit" value="Search" class="form-item__textfield button js-form-submit form-submit">
    </div>
  {% endif %}
</form>

In this file (code here) we’re doing a check for the Drupal-specific variable “children” in order to pass one thing to Drupal and another to Pattern Lab. We want to make the markup as similar as possible between the two, so I’ve copied the relevant parts of the markup by inspecting the default Drupal search form in the browser. As you can see there are two classes we need on the Drupal side. The first is on the outer <form>  wrapper, so we will need a matching Drupal template to inherit that. Many templates in Drupal will have suggestions by default, but the form template is a great example of one that doesn’t. However, adding a new template suggestion is a minor task, so let’s add the following code to emulsify.theme:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form templates.
 */
function emulsify_theme_suggestions_form_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#form_id'] == 'search_block_form') {
    $suggestions[] = 'form__search_block_form';
  }
}

After clearing the cache registry, you should see the new suggestion, so we can now add the file templates/form/form--search-block-form.html.twig. In that file, let’s write:
{% include "@molecules/search-form/search-form.twig" %}

The Form Element

We have only the “search__item” class left, for which we follow a similar process. Let’s create the file components/_patterns/02-molecules/search-form/_search-form-element.twig, copying the contents from core/themes/stable/templates/form/form-element.html.twig and making small tweaks like so:

{% set classes = [
  'js-form-item',
  'search__item',
  'js-form-type-' ~ type|clean_class,
  'search__item--' ~ name|clean_class,
  'js-form-item-' ~ name|clean_class,
  title_display not in ['after', 'before'] ? 'form-no-label',
  disabled == 'disabled' ? 'form-disabled',
  errors ? 'form-item--error',
] %}
{% set description_classes = [
  'description',
  description_display == 'invisible' ? 'visually-hidden',
] %}
<div {{ attributes.addClass(classes) }}>
  {% if label_display in ['before', 'invisible'] %}
    {{ label }}
  {% endif %}
  {% if prefix is not empty %}
    <span class="field-prefix">{{ prefix }}</span>
  {% endif %}
  {% if description_display == 'before' and description.content %}
    <div{{ description.attributes }}>
      {{ description.content }}
    </div>
  {% endif %}
  {{ children }}
  {% if suffix is not empty %}
    <span class="field-suffix">{{ suffix }}</span>
  {% endif %}
  {% if label_display == 'after' %}
    {{ label }}
  {% endif %}
  {% if errors %}
    <div class="form-item--error-message">
      {{ errors }}
    </div>
  {% endif %}
  {% if description_display in ['after', 'invisible'] and description.content %}
    <div{{ description.attributes.addClass(description_classes) }}>
      {{ description.content }}
    </div>
  {% endif %}
</div>

This file will not be needed in Pattern Lab, which is why we’ve used the underscore at the beginning of the name. This tells Pattern Lab to not display the file in the style guide. Now we need this markup in Drupal, so let’s add a new template suggestion in emulsify.theme like so:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form element templates.
 */
function emulsify_theme_suggestions_form_element_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#type'] == 'search') {
    $suggestions[] = 'form_element__search_block_form';
  }
}

And now let’s add the file templates/form/form-element--search-block-form.html.twig with the following code:

{% include "@molecules/search-form/_search-form-element.twig" %}

We now have the basic pieces for styling our search form in Pattern Lab and Drupal. This was not the fastest element to theme in a component-driven way, but it is a good example of complex concepts that will help when necessary. We hope to make creating form components a little easier in future releases of Emulsify, similar to what we’ve done in v2 with menus. And speaking of menus…

The Main Menu

In Emulsify 2, we have made it a bit easier to work with another complex piece of Twig in Drupal 8, which is the menu system. The files that do the heavy lifting here are components/_patterns/02-molecules/menus/_menu.twig and components/_patterns/02-molecules/menus/_menu-item.twig (included in the first file). We also already have an example of a main menu component in the directory

themes/emulsify/components/_patterns/02-molecules/menus/main-menu

which is already connected in the Drupal template

templates/navigation/menu--main.html.twig

Obviously, you can use this as-is or tweak the code to fit your situation, but let’s break down the key pieces which could help you define your own menu.

Menu Markup

Ignoring the code for the menu toggle inside the file, the key piece from themes/emulsify/components/_patterns/02-molecules/menus/main-menu/main-menu.twig is the include statement:

<nav id="main-nav" class="main-nav">
  {% include "@molecules/menus/_menu.twig" with {
    menu_class: 'main-menu'
  } %}
</nav>

This will use all the code from the original heavy-lifting files while passing in the class we need for styling. For an example of how to stub out component data for Pattern Lab, see components/_patterns/02-molecules/menus/main-menu/main-menu.yml. This component also shows you how your styling and JavaScript can live alongside your component markup in the same directory. Finally, you can see a simpler example of using a menu like this in the components/_patterns/02-molecules/menus/inline-menu component. For now, let’s move on to placing our components into a header organism.
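As a rough illustration, a Pattern Lab data stub for a menu might look something like the following. This is a sketch only: the exact keys depend on the variables _menu.twig iterates over, so treat these field names as assumptions and verify them against the main-menu.yml that ships with Emulsify.

```yaml
# Hypothetical main-menu.yml stub for Pattern Lab.
# Key names mirror Drupal's menu template variables (title, url, below),
# but confirm them against what _menu.twig actually expects.
items:
  - title: 'Home'
    url: '/'
  - title: 'About'
    url: '/about'
    below:
      - title: 'Our Team'
        url: '/about/team'
```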

The Header Organism

Now that we have our three molecule components built, let’s create a wrapper component for our site header. Emulsify ships with an empty component for this at components/_patterns/03-organisms/site/site-header. In our usage we want to change the markup in components/_patterns/03-organisms/site/site-header/site-header.twig to:

<header class="header">
  <div class="header__logo">
    {% block logo %}
      {% include "@atoms/04-images/00-image/image.twig" %}
    {% endblock %}
  </div>
  <div class="header__search">
    {% block search %}
      {% include "@molecules/search-form/search-form.twig" %}
    {% endblock %}
  </div>
  <div class="header__menu">
    {% block menu %}
      {% include "@molecules/menus/main-menu/main-menu.twig" %}
    {% endblock %}
  </div>
</header>

Notice the use of Twig blocks. These will help us provide default data for Pattern Lab while giving us the flexibility to replace those with our component templates on the Drupal side. To populate the default data for Pattern Lab, simply create components/_patterns/03-organisms/site/site-header/site-header.yml and copy over the data from components/_patterns/01-atoms/04-images/00-image/image~logo.yml and components/_patterns/02-molecules/menus/main-menu/main-menu.yml. You should now see your component printed in Pattern Lab.
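Combined, the site-header stub might look roughly like this. It is a sketch under the assumption that the keys match those in image~logo.yml and main-menu.yml; copy the real values from those files rather than these placeholders.

```yaml
# Hypothetical site-header.yml for Pattern Lab, merging the logo stub
# (from image~logo.yml) with the menu stub (from main-menu.yml).
image:
  src: '../../assets/images/logo.png'
  alt: 'Site logo'
items:
  - title: 'Home'
    url: '/'
  - title: 'About'
    url: '/about'
```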

Header in Drupal

To print the header organism in Drupal, let’s work with the templates/layout/region--header.html.twig file, replacing the default contents with:

{% extends "@organisms/site/site-header/site-header.twig" %}

{% block logo %}
  {{ elements.emulsify_branding }}
{% endblock %}

{% block search %}
  {{ elements.emulsify_search }}
{% endblock %}

{% block menu %}
  {{ elements.emulsify_main_menu }}
{% endblock %}

Here, we’re using the Twig extends statement to be able to use the Twig blocks we created in the component. You can also use the more robust embed statement when you need to pass variables like so:

{% embed "@organisms/site/site-header/site-header.twig" with {
  variable: "something",
} %}
  {% block logo %}
    {{ elements.emulsify_branding }}
  {% endblock %}
  {% block search %}
    {{ elements.emulsify_search }}
  {% endblock %}
  {% block menu %}
    {{ elements.emulsify_main_menu }}
  {% endblock %}
{% endembed %}

For our purposes, we can simply use the extends statement. You’ll notice that we are using the elements variable. This variable is not currently listed at the top of the Stable region template, but it is extremely useful for printing the blocks placed in that region. Finally, if you’ve just added the file, be sure to clear the cache registry. You should then see your full header in Drupal.

Final Thoughts

Component-driven development is not without trials, but I hope we have touched on some of the more difficult ones in this article to speed you on your journey. If you would like to view the branch of Emulsify where we built this site header component, you can see that here. Feel free to sift through and reverse-engineer the code to figure out how to build your own component-driven Drupal project!

This fifth episode concludes our five-part video-blog series for Emulsify 2.x. Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach

Just need the videos? Watch them all on our channel.

Download Emulsify

Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

December 14th, 2017

When working with Pantheon, you’re presented with the three typical environments: Dev, Test and Live. This scheme is very common among major hosting providers, and not without reason: it allows you to plan and execute an effective and efficient development process that takes every client need into consideration. We can use CircleCI to manage that process.

CircleCI works based on the circle.yml file located at the root of your project. It is a script of everything the virtual machine will do for you in the cloud, including testing and delivery. The script is triggered by a commit into the repository, unless you have it configured to react only to commits on branches with open pull requests. It is divided into sections, each representing a phase of the Build-Test-Deploy process.
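As a rough sketch of that structure (the section names are CircleCI 1.0’s, but the individual steps here are illustrative, not a drop-in config), a circle.yml might be organized like:

```yaml
# Illustrative CircleCI 1.0 circle.yml skeleton; steps are placeholders.
machine:
  php:
    version: 7.1
dependencies:
  cache_directories:
    - ~/.composer/cache
  override:
    - composer install --no-interaction
test:
  override:
    - ./vendor/bin/phpunit
deployment:
  dev:
    branch: master
    commands:
      - ./deploy.sh
```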

In the deployment section, you can put instructions in order to deploy your code to your web servers. A common deployment would look like the following:

deployment:
  dev:
    branch: master
    commands:
      - ./merge_to_master.sh

This literally means: perform the operations listed under commands every time a commit is merged into the master branch. You may think a block like this can only be used for an actual deployment, but in truth you can do whatever you want there. It’s ideal for deployments, but in essence the deployment section lets you implement conditional post-build subscripts that react differently depending on the nature of the action that triggered the whole build.

The Drops 8 Pantheon upstream for Drupal 8 comes with a very handy circle.yml that can help you set up a basic CircleCI workflow in a matter of minutes. It heavily relies on the use of Pantheon’s Terminus CLI and a couple of plugins like the excellent Terminus Build Tools Plugin, that provides probably the most important call of the whole script:

terminus build:env:create -n "$TERMINUS_SITE.dev" "$TERMINUS_ENV" --yes --clone-content --db-only --notify="$NOTIFY"

The line above creates a multidev environment in Pantheon, merging the code and generated assets in Circle’s VM into the code coming from the dev environment, and also clones the database from there. You can then use drush to update that database with the changes in configuration that you just merged in.
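That drush step might look something like the following inside circle.yml. This is hypothetical: command spellings vary between Terminus versions, so check terminus list before relying on these exact invocations.

```yaml
# Hypothetical follow-up steps after build:env:create,
# running drush remotely on the new multidev via Terminus.
test:
  override:
    - terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- updatedb -y
    - terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- config-import -y
    - terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- cache-rebuild
```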

Once you get to the deployment section, you already have a functional multidev environment. The deployment happens by merging the artifact back into dev:

deployment:
  build-assets:
    branch: master
    commands:
      - terminus build:env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes

This workflow assumes a simple git workflow where you just create feature branches and merge them into master when they’re ready. It also takes the deployment process only to the point where code reaches dev. This is sometimes not enough.

Integrating release branches

When using a gitflow that involves a central release branch, the perfect environment to host a site that completely reflects the state of the release branch is the dev environment. After all, active development only happens in that branch. Assuming your release branch is statically called develop:

deployment:
  dev:
    branch: develop
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
  test:
    branch: master
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
      # Deploy to TEST environment.
      - terminus env:deploy $TERMINUS_SITE.test --sync-content
      - ./rebuild_test.sh
      - terminus env:clone-content "$TERMINUS_SITE.test" "dev" --yes

This way, when you merge into the release branch, the multidev environment associated with it will get merged into dev and deleted, and dev will be rebuilt.

The feature is available in Dev right after merging into the release branch.

The same happens on release day when the release branch is merged into master, but after dev is rebuilt, it is also deployed to the test environment:

The release branch goes all the way to the test environment.

A few things to notice about this process:

  • The --sync-content option brings the database and files from the live environment into test at the same time code arrives there from dev. By rebuilding test, we’re now able to test the latest changes in code against the latest changes in content, assuming live is your primary content entry point.
  • The last Terminus command takes the database from test and sends it back to dev. So, to recap: the database originally came from live, was rebuilt in test using dev’s fresh code, and now goes to dev. At this moment, test and dev are identical, at least until the next commit lands in the release branch.
  • This process facilitates testing. While the next release is already in progress and transforming dev, the client can take all the time to give the final approval for what’s in test. Once that happens, the deployment to live should occur in a semi-automatic way at most. But nothing really prevents you from using this same approach to automate also the deployment to live. Well, nothing but good judgment.
  • By using circle.yml to handle the deployment process, you help keep workflow configuration centralized and accessible. With the appropriate system in place, you can trigger a complete and fully automated deployment just by pushing a commit to GitHub, and everything you’ll ever need to know about the process lives in that single file.
Joel Travieso

Joel focuses on the backend and architecture of web projects, constantly seeking to improve by following the latest developments in the field.

November 13th, 2017

Welcome to the fourth episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to best use a DRY Twig approach when working in Emulsify. This blog post accompanies a tutorial video, embedded at the end of this post.

DRYing Out Your Twigs

Although we’ve been using a DRY Twig approach in Emulsify since before the 2.x release, it’s a topic worth addressing because it is unique to Emulsify and provides great benefit to your workflow. After all, what drew you to component-driven development in the first place? Making things DRY, of course!

In component-driven development, we build components once and reuse them together in different combinations, like playing with Lego. In Emulsify, we use Sass mixins and BEM-style CSS to make our CSS as reusable and isolated as possible. DRY Twig simply extends these same benefits to the HTML itself. Let’s look at an example:

Non-DRY Twig:

<h2 class="title">
  <a class="title__link" href="/">Link Text</a>
</h2>

DRY Twig:

<h2 class="title">
  {% include "@atoms/01-links/link/link.twig" with {
    "link_content": "Link Text",
    "link_url": "/",
    "link_class": "title__link",
  } %}
</h2>

The code with DRY Twig is more verbose, but by switching to this method, we’ve now removed a point of failure in our HTML. We’re not repeating the same HTML everywhere! We write that HTML once and reuse it everywhere it is needed.

The concept is simple, and it is found everywhere in the components directory that ships with Emulsify. HTML gets written mostly as atoms and is simply reused in larger components using the include, extends, and embed tags built into Twig. We challenge you to try this in a project and see what you think.
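To make the idea concrete, here is an illustrative molecule that reuses the link atom inside a menu list. The atom path matches Emulsify’s examples above, but the items variable and class names are hypothetical, for demonstration only.

```twig
{# Hypothetical menu molecule: the link atom is included once per item,
   so the anchor markup is defined in exactly one place. #}
<ul class="menu">
  {% for item in items %}
    <li class="menu__item">
      {% include "@atoms/01-links/link/link.twig" with {
        "link_content": item.title,
        "link_url": item.url,
        "link_class": "menu__link",
      } %}
    </li>
  {% endfor %}
</ul>
```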


Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web