Nov 18 2020

From the consumer perspective, there’s never been a better time to build a website. User-friendly website platforms like Squarespace allow amateur developers to bypass complex code and apply well-designed user interfaces to their digital projects. Modern site-building tools aren’t just easy to use—they’re actually fun.

Anyone who has managed a Drupal website knows the same can’t be said for their platform of choice. While rich with possibilities, the default editorial interface for Drupal feels technical, confusing, and even restrictive to users without a developer background. Consequently, designers and developers too often build a beautiful website while overlooking its backend CMS.

Drupal’s open-ended capabilities constitute a competitive advantage when it comes to developing an elegant, customer-facing website. But a lack of attention to the needs of those who maintain your website content contributes to a perception that Drupal is a developer-focused platform. By building a backend interface just as focused on your site editors as the frontend, you create a more empowering environment for internal teams. In the process, your website performs that much better as a whole.

UX principles matter for backend design as much as the frontend

Given Drupal’s inherent flexibilities, there are as many variations of CMS interfaces as there are websites on the platform. That uniqueness is part of what makes Drupal such a powerful tool, but it also constitutes a weakness.

The editorial workflow for every website is different, which opens an inevitable training gap in translating your site’s capabilities to your editorial team. Plus, despite Drupal’s open-source strengths, you’ll likely need to reinvent the wheel when designing CMS improvements specific to your organization.

For IT managers, this is a daunting situation because the broad possibilities of Drupal are often overwhelming. If you try to make changes to your interface, you can be frustrated when a seemingly easy fix requires 50 hours of development work. Too often, Drupal users will wind up working with an inefficient and confusing CMS because they’re afraid of the complexity that comes with building out a new interface.

Fortunately, redesigning your CMS doesn’t have to be a demanding undertaking. With the right expertise, you can develop custom user interfaces with little to no coding required. Personalized content dashboards and defined roles and permissions for each user go a long way toward creating a more intuitive experience.

Improving your backend design is often seen as an additional effort, but think of it as a baseline requirement. And, by sharing our user stories within the Drupal community, we also build a path toward improving the platform for the future.

Use Drupal’s Views module to customize user dashboards

One of the biggest issues with Drupal’s out-of-the-box editorial tools is that they don’t reflect the way any organization actually uses the CMS. Just as UX designers look to provide a positive experience for first-time visitors to your site, your team should aim for delivering a similarly strong first impression for those managing its content.

By default, Drupal takes users to their profile pages upon login, which is useful to . . . almost no one. Plus, the platform’s existing terminology uses cryptic terms such as “node,” “taxonomy,” and “paragraphs” to describe various content items. From the beginning, you should remove these abstract references from your CMS. Your editorial users shouldn’t have to understand how the site is built to own its content.

Powering Our Communities homepage

In the backend, every Drupal site has a content overview page, which shows the building blocks of your site. Offering a full list that includes cryptic timestamps and author details, this page constitutes a floodgate of information. Designing an effective CMS is as much an exercise in subtraction as addition. Whether your user’s role involves reviewing site metrics or new content, their first interaction with your CMS should display what they use most often.

Manage News interface

If one population of users is most interested in the last item they modified, you can transform their login screen to a custom dashboard to display those items. If another group of users works exclusively with SEO, you can create an interface that displays reports and other common tasks. Using Drupal’s Views module, dashboards like these are possible with a few clicks and minimal coding.
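
As a rough sketch of the login-redirect piece, a small custom module in Drupal 8 or 9 can send users to a Views-based dashboard instead of their profile page after login. The module name (mymodule) and the dashboard route (view.editor_dashboard.page_1, a Views page display) are placeholders, not a prescribed setup:

<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_FORM_ID_alter() for user_login_form.
 */
function mymodule_form_user_login_form_alter(array &$form, FormStateInterface $form_state) {
  // Add our own submit handler after core's so our redirect wins.
  $form['#submit'][] = 'mymodule_user_login_submit';
}

/**
 * Submit handler: send editors to their dashboard after login.
 */
function mymodule_user_login_submit(array $form, FormStateInterface $form_state) {
  // 'view.editor_dashboard.page_1' is a hypothetical Views page route.
  $form_state->setRedirect('view.editor_dashboard.page_1');
}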

By tailoring your CMS to specific user habits, you allow your website teams to find what they need and get to work faster. The most dangerous approach to backend design is to try and build one interface to rule them all.

Listen to your users and ease frustrations with a CMS that works

Through Drupal Views, you can modify lists of content and various actions to control how they display in your CMS. While Views provides many options to create custom interfaces, your users themselves are your organization’s most vital resource. By watching how people work on your site, you can recognize areas where your CMS is falling short.

Drupal content dashboard

Even if you’ve developed tools aimed at satisfying specific use cases, you might be surprised by the way those tools are actually used. Through user experience testing, you’ll often find the workarounds your site editors have developed to manage the site.

In one recent example, site editors needed to link to another page of their site from within the CMS. Without that functionality built in, they had to open the target page in another tab and dig its node ID number out of the source code. Anyone watching these users would find the process cumbersome, time-consuming, and frustrating. Fortunately, implementing the Drupal module Linkit eliminated this needless effort.

There are many useful modules in the Drupal ecosystem that can enhance the out-of-the-box editorial experience. Entity Clone expedites the content creation process. Views Bulk Operations and Bulk Edit simplify routine content update tasks. Computed Field and Automatic Entity Label take the guesswork out of derived or dependent content values. Custom form modes and Field Groups can help bring order to and streamline content creation forms.

Most of the time, your developers don’t know what workarounds teams have devised to cope with an ineffective editorial interface. And, for fear of the complexity required to build a proper solution, these makeshift shortcuts too often go unaddressed. Your backend users may not even be aware their efforts could be automated or otherwise streamlined. As a result, even the most beautiful, user-friendly website is bogged down by a poorly designed CMS.

Once these solutions are implemented, however, you and your users enjoy a shared win. And, through sharing your efforts with the Drupal community, you and your team build a more user-friendly future for the platform as well.

Sep 23 2020

Working in digital design and development, you grow accustomed to the rapid pace of technology. For example: After much anticipation, the latest version of Drupal was released this summer. Just months later, the next major version is in progress.

At July’s all-virtual DrupalCon Global, the open-source digital experience conference, platform founder Dries Buytaert announced Drupal 10 is aiming for a June 2022 release. Assuming those plans hold, Drupal 9 would have the shortest release lifetime of any recent major version.

For IT managers, platform changes generate stress and uncertainty. Considering the time-intensive migration process from Drupal 7 to 8, updating your organization’s website can be costly and complicated. Consequently, despite a longtime absence of new features, Drupal 7 still powers more websites than Drupal 8 and 9 combined. And, as technology marches on, the end of its life as a supported platform is approaching.

Fortunately, whatever version your website is running, Drupal is not running away from you. Drupal’s users and site builders may be accustomed to expending significant resources to update their website platform, but the plan for more frequent major releases alleviates the stress of the typical upgrade. And, for those whose websites are still on Drupal 7, Drupal 10 will continue offering a way forward.

The news that Drupal 10 is coming sooner rather than later might have been unexpected, but you still have no reason to panic just yet. However, your organization shouldn’t stand still, either.

Image via Dri.es

The End for Drupal 7 Is Still Coming, but Future Upgrades Will Be Easier

Considering that upgrading to Drupal 8 involves the investment of building a new site and migrating its content, it’s no wonder so many organizations have been slow to update their platform. Drupal 7 is solid and has existed for nearly 10 years. And, fortunately, it’s not reaching its end of life just yet.

At the time of Drupal 9’s release, Drupal 7’s planned end of life was set to arrive late next year. This meant the community would no longer release security advisories or bug fixes for that version of the platform. Affected organizations would need to contact third-party vendors for their support needs. With the COVID-19 pandemic upending businesses and their budgets, the platform’s lifespan has been extended to November 28, 2022.

Drupal’s development team has retained its internal migration system through versions 8 and 9, and it remains part of the plan for the upcoming Drupal 10 as well. And the community continues to maintain and improve the system in an effort to make the transition easier. If your organization is still on Drupal 7 now, you can use the migration system to jump directly to version 9, or version 10 upon its release. Drupal has no plans to eliminate that system until Drupal 7 usage numbers drop significantly.

Once Drupal 10 is ready for release, Drupal 7 will finally reach its end of life. However, paid vendors will still offer support options that will allow your organization to maintain a secure website until you’re ready for an upgrade. But make a plan for that migration sooner rather than later. The longer you wait for this migration, the more new platform features you’ll have to integrate into your rebuilt website.

Initiatives for Drupal 10 Focus on Faster Updates, Third-Party Software

In delivering his opening keynote for DrupalCon Global, Dries Buytaert outlined five strategic goals for the next iteration of the platform. Like the work for Drupal 9 that began within the Drupal 8 platform, development of Drupal 10 has begun under the hood of version 9.

A Drupal 10 Readiness initiative focuses on upgrading third-party components that count as technological dependencies. One crucial component is Symfony, the PHP framework Drupal is built upon. Symfony puts out a major release every two years, which requires that Drupal also be updated to stay current. The transition from Symfony 2 to Symfony 3 created challenges for core developers in creating the 8.4 release, which introduced changes that impacted many parts of Drupal’s software.

To avoid a repeat of those difficulties, it was determined that the breaking changes involved in a new Symfony major release warranted a new Drupal major release as well. While Drupal 9 is on Symfony 4, the Drupal team hopes to launch 10 on Symfony 6, which is a considerable technical challenge for the platform’s team of contributors. However, once complete, this initiative will extend the lifespan of Drupal 10 to as long as three or four years.

Other announced initiatives include greater ease of use through more out-of-the-box features, a new front-end theme, a decoupled menu component written in JavaScript, and, answering the platform’s most requested feature, automated security updates that will make it as easy as possible to upgrade from 9 to 10 when the time comes. For those already on Drupal 9, these are some of the new features to anticipate in versions 9.1 through 9.4.

Less Time Between Drupal Versions Means an Easier Upgrade Path

The shift from Drupal 8 to this summer’s release of Drupal 9 was close to five years in the making. Fortunately for website managers, that update was a far cry from the full migration required from version 7. While there are challenges such as ensuring your custom code is updated to use the most recent APIs, the transition was doable with a good tech team at your side.

Still, the work that update required could generate a little anxiety given how comparatively fast another upgrade will arrive. But the shorter time frame will make the move to Drupal 10 easier for everybody. Less time between updates also translates to less deprecated code, especially if you’re already using version 9. But if you’re not there yet, the time to make a plan is now.

Apr 23 2020

We’ve been making big websites for 14 years, and almost all of them have been built on Drupal. It’s no exaggeration to say that Four Kitchens owes its success to the incredible opportunities Drupal has provided us. There has never been anything like Drupal and the community it has fostered—and there may never be anything like it ever again.

That’s why it’s crucial we do everything we can to support the Drupal Association. Especially now.

The impacts of COVID-19 have been felt everywhere, especially at the Association. With the cancellation of DrupalCon Minneapolis, the Drupal Association lost a major source of annual fundraising. Without the revenue from DrupalCon, the Association would not be able to continue its mission to support the Drupal project, the community, and its growth.

The Drupal community’s response to this crisis was tremendous. For our part, we proudly joined 27 other organizations in pledging our sponsorship fees to the Association regardless of whether, or how, DrupalCon happened. I ensured my Individual Membership was still active, and I made a personal contribution.

But we need to do more.

You can help by joining us in the #DrupalCares campaign.

The #DrupalCares campaign is a fundraiser to protect the Drupal Association from the financial impact of COVID-19. Your support will help keep the Drupal Association strong and able to continue accelerating the Drupal project.

The Drupal Association

The outpouring of support has been… inspiring. First, project founder Dries Buytaert and his partner Vanessa Buytaert pledged their generous support of $100,000. Then, a coalition of Drupal businesses pledged even more matching contributions. We are proud to count ourselves among the dozens of participating Drupal businesses.

Any individual donations, increased memberships, or new memberships through the end of April will be tripled by these matching pledges, up to $100,000, for a total of $300,000.

Please join us in supporting the Drupal Association. Your contribution will help ensure the continued success of the Association and the Drupal community for years to come.

Give to #DrupalCares through April to help the Association receive a 3:1 matching contribution. 

Jan 23 2020

In the Drupal support world, working on Drupal 7 sites is a necessity. But switching between Drupal 7 and Drupal 8 development can be jarring, if only for the coding style.

Fortunately, I’ve got a solution that makes working in Drupal 7 more like working in Drupal 8. Use this three-part approach to have fun with Drupal 7 development:

  • Apply Xautoload to keep your PHP skills fresh, modern, and compatible with other frameworks, and to make your code more reusable and maintainable between projects. 
  • Use the Drupal Libraries API to use third-party libraries. 
  • Use the Composer template to push the boundaries of your programming design patterns. 

Applying Xautoload

Xautoload is simply a module that enables PSR-0/4 autoloading. Using Xautoload is as simple as downloading and enabling it. You can then start using use and namespace statements to write object-oriented programming (OOP) code.

For example:

xautoload_example.info

name = Xautoload Example
description = Example of using Xautoload to build a page
core = 7.x
package = Midcamp Fun

dependencies[] = xautoload:xautoload

xautoload_example.module

<?php

use Drupal\xautoload_example\SimpleObject;

function xautoload_example_menu() {
  $items['xautoload_example'] = array(
    'page callback' => 'xautoload_example_page_render',
    'access callback' => TRUE,
  );
  return $items;
}

function xautoload_example_page_render() {
  $obj = new SimpleObject();
  return $obj->render();
}

src/SimpleObject.php

<?php

namespace Drupal\xautoload_example;

class SimpleObject {
  public function render() {
    return array(
      '#markup' => "<p>Hello World</p>",
    );
  }
}

Enabling and running this code causes the URL /xautoload_example to spit out “Hello World”. 

You’re now ready to add in your own OOP!

Using Third-Party Libraries

Natively, Drupal 7 has a hard time autoloading third-party library files. But there are contributed modules (like Guzzle) out there that wrap third-party libraries. These modules wrap object-oriented libraries to provide a functional interface. Now that you have Xautoload in your repertoire, you can use its functionality to autoload libraries as well.

I’m going to show you how to use the Drupal Libraries API module with Xautoload to load a third-party library. You can find examples of all the different ways you can add a library in xautoload.api.php. I’ll demonstrate an easy example by using the php-loremipsum library:

1. Download your library and store it in sites/all/libraries. I named the folder php-loremipsum. 

2. Add a function implementing hook_libraries_info to your module by pulling in the namespace from Composer. This way, you don’t need to set up all the namespace rules that the library might contain.

function xautoload_example_libraries_info() {
  return array(
    'php-loremipsum' => array(
      'name' => 'PHP Lorem Ipsum',
      'xautoload' => function ($adapter) {
        $adapter->composerJson('composer.json');
      },
    ),
  );
}

3. Change the page render function to use the php-loremipsum library to build content.

use joshtronic\LoremIpsum;

function xautoload_example_page_render() {
  $library = libraries_load('php-loremipsum');
  if ($library['loaded'] === FALSE) {
    throw new \Exception("php-loremipsum didn't load!");
  }
  $lipsum = new LoremIpsum();
  return array(
    '#markup' => $lipsum->paragraph('p'),
  );
}

Note that I needed to tell the Libraries API to load the library, but I then had access to all the namespaces within it. Keep in mind that the dependencies of some libraries are immense. You’ll very likely need to run Composer from within the library and commit the result when you first start out. In such cases, you might need to make sure to include the Composer autoload.php file.
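
That include is a one-liner. Here’s a minimal sketch, assuming the library was committed to sites/all/libraries with its vendor directory intact (the exact path is illustrative):

// Hypothetical path; point at wherever the library's vendor directory lives.
require_once DRUPAL_ROOT . '/sites/all/libraries/php-loremipsum/vendor/autoload.php';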

Another tip:  Abstract your libraries_load() functionality out in such a way that if the class you want already exists, you don’t call libraries_load() again. Doing so removes libraries as a hard dependency from your module and enables you to use Composer to load the library later on with no more work on your part. For example:

function xautoload_example_load_library() {
  if (!class_exists('\joshtronic\LoremIpsum', TRUE)) {
    if (!module_exists('libraries')) {
      throw new \Exception('Include php-loremipsum via composer or enable libraries.');
    }
    $library = libraries_load('php-loremipsum');
    if ($library['loaded'] === FALSE) {
      throw new \Exception("php-loremipsum didn't load!");
    }
  }
}

And with that, you’ve conquered the challenge of using third-party libraries!

Setting up a New Site with Composer

Speaking of Composer, you can use it to simplify the setup of a new Drupal 7 site. Just follow the instructions in the Readme for the Composer Template for Drupal Project. From the command line, run the following:

composer create-project drupal-composer/drupal-project:7.x-dev --no-interaction

This code gives you a basic site with a source repository (a repo that doesn’t commit contributed modules and libraries) to push up to your Git provider. (Note that migrating an existing site to Composer involves a few additional considerations and steps, so I won’t get into that now.)

If you’re generating a Pantheon site, check out the Pantheon-specific Drupal 7 Composer project. But wait: The instructions there advise you to use Terminus to create your site, and that approach attempts to do everything for you—including setting up the actual site. Instead, you can simply use composer create-project  to test your site in something like Lando. Make sure to run composer install if you copy down a repo.

From there, you need to enable the Composer Autoload module, which is automatically required in the composer.json you pulled in earlier. Then, add all your modules to the require portion of the file or use composer require drupal/module_name just as you would in Drupal 8.
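
For reference, the require section of that composer.json might end up looking something like this; the module names and version constraints below are illustrative, not a required set:

{
  "require": {
    "composer/installers": "^1.2",
    "drupal/libraries": "^2.3",
    "drupal/views_bulk_operations": "^3.3"
  }
}

Running composer require drupal/module_name updates this file and downloads the module into the directory defined by the template’s installer-paths configuration.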

You now have full access to all the Packagist libraries and can use them in your modules. To use the previous example, you could remove php-loremipsum from sites/all/libraries, and instead run composer require joshtronic/php-loremipsum. The code would then run the same as before.

Have fun!

From here on out, it’s up to your imagination. Code and implement with ease, using OOP design patterns and reusable code. You just might find that this new world of possibilities for integrating new technologies with your existing Drupal 7 sites increases your productivity as well.

Nov 07 2019
Image courtesy of Acquia

Crafting a user experience that feels customized — but not forced or invasive — is key to successfully creating targeted website content. If you work with Drupal, Acquia Lift is one obvious solution to this challenge. Few other complete solutions for personalizing the user experience integrate with Drupal so well. 

Personalize Drupal content

Acquia Lift is a personalization tool that can help serve up custom content by tracking a variety of user activities on your site:

  • Pages visited
  • Content types or taxonomies viewed
  • Data that relates to location and browser information 

Lift works with Drupal versions 7 and 8, so the tool is available for use by many active sites on the Acquia platform. As an Acquia tool, Lift also integrates with the rest of the Acquia Cloud hosting platform. 

The tool uses criteria that you enter into its Profile Manager to place website visitors into segments. You then use Profile Manager to set up campaigns that target, track, and provide personalized experiences to specific segments. Segment criteria can be as simple as device type or as complex as specific combinations of interactions with the website. This level of granular control improves the quality of your content customization — and in turn, your engagement metrics.

After setting up campaigns and segments, you can select site sections and content to personalize. For example, you might replace a generic hero banner image with a location-specific graphic or swap out promotional content to reflect a visitor’s account status. When used correctly, this type of personalization can elevate the user experience and improve conversion rates. And because Lift integrates with Drupal so closely as a module, the tool has direct access to site content. This approach helps to ensure that the content visitors see doesn’t feel jarring or out of place. 

Focus on personalization, not code

From a technical perspective, installing and using Acquia Lift is not too difficult. Acquia provides a contributed module and library that work well with Drupal. In my opinion, the documentation could be improved, but it provides enough information for most developers to install Lift without trouble. The personalization and swapping of content all happen client-side (outside of Drupal), so after Lift is installed there isn’t much to configure within Drupal. Some handling with CSS is necessary to reduce visual indications of changing content, but the process is included in the documentation and the Acquia team is willing to assist if necessary.

Aside from these specifics, the magic that happens with personalization and setup occurs within the Acquia Lift interfaces. So after Lift is set up, you don’t need a high level of technical expertise for its regular use. The analytics, user data, segments, campaigns, and targeting all occur in the user-friendly Profile Manager. 

Before you decide

Lift does have a few limitations that you need to be aware of. The most significant: Dynamic or “live” content can be tricky to replace or personalize.  Because Lift bases replacements on static versions of content (i.e., rendered HTML), Lift might not replicate or replace carousels, slideshows, and other views with custom displays. Content that relies on data that might change or on JavaScript for rendering might need to be recreated in a different way to generate the same effect. 

Second, differences in documentation between the two available versions of Lift (versions 3 and 4) can cause a bit of confusion when you’re confirming your setup. For example, version 4 is an integrated tool inside Profile Manager, whereas version 3 works as a sort of pop-up within Drupal.

Also note that despite the ease of installation and deep documentation, the Acquia team will probably need to be involved to finish the installation. The team is fairly responsive, but this necessity could cause a slowdown in implementation, so you need to be aware of it if you’re deploying Lift close to launch. The extra assistance is even more likely to be necessary when using version 4 because of its more limited documentation and active development. We hope to see improvements in this area in the near future as development on version 4 continues.

Lastly, setting up test campaigns after you’ve installed and configured Lift can be difficult if you only have a few content items and some lorem ipsum. To be tested properly, the service needs real content and real slots to swap that content into. You won’t have a problem when implementing Lift on an existing site, but if you plan to use the tool on a new site, be aware that you’ll need to delay implementation until just before or after launch.

A complete personalization experience

All in all, few options out there work directly with a Drupal site and provide such a complete experience for personalization. Targeted and personalized content doesn’t need to be difficult to set up or irritating for the user. Tools like Acquia Lift can help make websites feel just a bit more personal. Check out Lift’s informational pages for more details.

Jul 03 2019

When someone thinks of Drupal, they think of scalability, customization, and the oh-so-steep learning curve. What isn’t thought of as much is a smooth and simple editing experience. Though there are many efforts out there to improve the editing experience in Drupal, they tend to require a bit of tinkering before they’re ready for the end user. That isn’t to say the stock experience is unusable, but for those who aren’t familiar with Drupal, it can be complicated and confusing. We aren’t here to complain about all of that, though, because another solution is gaining traction, and we should most definitely talk about it. 

Introducing Gutenberg

Those who are at all familiar with WordPress will already know about the Gutenberg editing experience. The classic editor for WordPress is basically just a rich text editor that is great for writing content, and more specifically blog posts, but requires some more understanding of markup if you want to build an actual page with it. That is where the idea for Gutenberg comes in. With this editor, you build your content with blocks. These blocks handle most things you could want, such as basic paragraphs, headings, layout, embedded videos, and more. Since Gutenberg is included and set as the default editor in WordPress 5.0+, everyone is going to be making use of this soon.

This really changes the way the typical WordPress page is going to look without a lot of custom work. Something that has been missing from the WordPress experience has been a way to make things look more consistent between pages. Gutenberg blocks provide this consistency while also allowing editors to build pages without much restriction.

Gutenberg in WordPress still makes use of the same shortcodes that could be entered manually to save the layout configuration and data. It just does this automatically for the user while they configure the page visually. This reduces the barrier for making interesting and varied pages that used to require a lot more intimate knowledge of the platform. 

Since Gutenberg is largely built with React, it is also very flexible and extendable. The community has put together a large collection of ready-to-use blocks in the Gutenberg Cloud plugin. This means that very little work should be required to create a wide array of pages with this editor and anything that is missing should be fairly easy to develop.

Why bring Gutenberg to Drupal?

There has been a concerted effort in Drupal 8 to bring more focus to the editing experience and overall UI of the platform. As mentioned before, both are still lacking despite major strides from version 7 to 8. This leads to a great deal of curiosity about what other CMSes are doing to solve this problem. It wasn’t too long before whispers of a port of the Gutenberg plugin came around, and an alpha version of Gutenberg for Drupal came to be. 

On the surface, it can seem a bit duplicative to have something like Gutenberg in Drupal. While block-based layout and reusability may be a game changer for something like WordPress, it is old hat for the Drupal ecosystem. Creating pages using different sorts of blocks and widgets is practically one of Drupal’s claims to fame as a CMS. Not to mention, other existing solutions and modules, like Paragraphs, already address something like this in a much more Drupal-y sort of way. Why bother bringing another solution built around blocks and tokens to a system that all but invented them? 

Though it may seem redundant, it actually makes a lot of sense. For one, familiarity is a luxury that Drupal can rarely offer to those new to the platform. Nearly everything in the UI originated within the Drupal ecosystem and can be a bit unintuitive. Adopting an editor made popular by a CMS with a market share of somewhere around a quarter of the internet is a good way to provide that familiarity. Another reason for it to be on Drupal is that the idea behind the tool isn’t a wholly WordPress-specific need. Editors want to be able to build pages without expert-level platform knowledge. They want to create content and not worry about tokens, shortcodes, blocks, fields, etc. WordPress solves that deceptively simple problem, while Drupal retains a for-engineers-by-engineers feeling that isn’t always welcoming to editors.

That isn’t to say that Gutenberg for Drupal is the answer to our UI prayers, however. It is a nice solution, but one that comes with caveats, as these things usually do. At the moment, the module takes an all-or-nothing approach to the editing experience. Content types that have the Gutenberg experience enabled need a long text field for it to work. The module uses that field for storage, and all other fields are stashed in the Gutenberg sidebar on the edit page. (Currently, the Gutenberg module will look for whatever field matches the description and use the first one it finds, which is something to watch out for if you add this to an existing content type.) It fully takes over the display of the edit form, which is replaced with the React-based experience that Gutenberg brings.

Another thing to note on this is that the Gutenberg module does support some Drupal core blocks out of the box and should mostly support custom Drupal blocks as well. The cool part about this is that in theory you can set up things like ads, content embeds, and views ahead of time for use in many content pieces. The less cool part is that not all blocks will work, and it isn’t clear which ones do and why others do not at this time. Keep an eye on the Supported Blocks page on Drupal.org for more details as they are available.

Is Gutenberg ready for use?

Gutenberg for Drupal 8 is a very promising module that solves a lot of problems in the traditional Drupal editing experience, and it is making great strides. At the time of this writing, 8.x-1.2 has been released, which solves many of the issues that previously would have made this a harder module to recommend. The better question right now is whether it is worth checking out, and I would say that it absolutely is. Whether you enjoy building content in Drupal or not, the Gutenberg experience is worth trying and considering for your editors.

Jan 28 2019

SDSU Extension is a South Dakota State University organization that provides educational outreach programs for the citizens of South Dakota. They provide farmers, ranchers, agri-business people, communities, families, and youth with the research-based information they need to succeed.

Solutions Four Kitchens Provided

SDSU Extension wanted to improve the content creation and editing workflow on their website and provide a better navigation experience for users. They wanted to ensure that content was presented consistently and that all information was up to date.

SDSU Extension took full advantage of Four Kitchens’ UX and strategy expertise to get on the right track for a new site. This necessary step proved to be a critical component of delivering a solid end product.

SDSU recently rebuilt their main site (sdstate.edu) with Drupal 8, and SDSU Extension was ready to follow suit. SDSU Extension was on a very old ExpressionEngine install on which managing content and users had become difficult. Four Kitchens built a Drupal 8 solution that addressed their primary concern: more versatile content management while keeping a consistent look and feel for their content. We took the time to define a content architecture plan to match the needs discovered through our UX and strategy work.

The Drupal 8 build was based on the Thunder distribution, which provided a great starting point and gave us an improved administrative experience out of the box. We used CircleCI to build, test, and deploy the application on Acquia Cloud, where the site is hosted. We took advantage of Paragraphs and custom entities to give SDSU Extension the tools they need to create consistent, rich page layouts for all content. We built a custom site search solution based on Solr, with facets to allow users to search by text and filter results.

SDSU Extension content is generated and created by SDSU Extension faculty and staff. Most of these users already have profile information on sdstate.edu. We worked with their IT staff to help them expose this profile information with JSON API so that we could consume the data and display it on their profile on the new SDSU Extension site. This allows them to have a canonical point of data entry for the profile information while displaying it on both sites.
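
As a rough sketch of what that consumption could look like from the Extension site, assuming a hypothetical staff_profile endpoint and field names (not the actual sdstate.edu implementation):

<?php

// Fetch matching profiles from the main site's JSON API endpoint.
$client = \Drupal::httpClient();
$response = $client->get('https://www.sdstate.edu/jsonapi/node/staff_profile', [
  'query' => ['filter[field_email]' => 'jane.doe@sdstate.edu'],
]);
$payload = json_decode((string) $response->getBody(), TRUE);

foreach ($payload['data'] as $profile) {
  // Map the remote attributes onto the local profile display.
  $name = $profile['attributes']['title'];
}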

Kudos to team members

The SDSU Extension team was led by Alex Hicks as the project manager and Adam Erickson as the tech lead, who also molded the architecture plan for the project. Donna Habbersat led the UX strategy and shared the project owner responsibilities with Adam. Randy Oest crafted a beautiful new design and assisted Evan Willhite, who led the charge on implementing the design and provided some frontend magic. Joel Travieso handled much of the backend heavy lifting while contributing valuable input to architecture improvements. This group was like a “well-oiled machine,” as quoted by the client. A huge shout-out to Lindsey Gerard and the SDSU Extension staff for all the hard work and effort on their side to make this project a success.

Jan 28 2019

At Four Kitchens, we take accessibility very seriously. It is why we choose tools like Drupal that take it seriously as well. We are always looking for ways to improve our efforts on this front, which is why on a recent project for SDSU Extension, we implemented automated testing in our prototyping tool and Drupal 8 theme, Emulsify. This means that every custom component we now build in Emulsify gets automatically tested and any errors/warnings are flagged to the developer immediately. Awesome! Let’s dig into some of the details.

SDSU Extension

Education, among other fields, has strict requirements around accessibility. We work hard with these clients to ensure their projects meet the expected standard, but we also expect a certain baseline of accessibility on any project we deliver, regardless of whether accessibility is a stated goal. For the SDSU Extension project, we needed to meet or exceed WCAG 2.0 AA standards. Drupal core and many of the contributed modules put such a heavy focus on meeting that standard that an enormous amount of foundational work is done for us. However, when it comes to designing and coding custom components, it is up to us to deliver to that standard. This is where automated testing comes in.

Automated Testing Using Pa11y

Even with the best intentions, vendors and clients often find themselves putting off best practices (accessibility, performance, documentation) in favor of deadlines. This is why automated testing is so important. With automated tests, accessibility issues are surfaced immediately during the build process and ideally then fixed within the scope of that same deliverable. This ensures the component is not only accessible upon delivery but also avoids the stress of last-minute fixes in the final phase of the project.

There are many tools out there for running automated tests, but one of the most popular is pa11y. Having had good experiences with its testing suite on a recent React project for PRI.org, we decided to use it in Emulsify as well. The results were fantastic! We set a baseline of WCAG 2.0 AA, but expose that and many more options in configuration to make it easy to change for a given project. See below for more technical details.

Technical Details

Unlike much of Emulsify Gulp, where we would write a Gulp command to accomplish a task, here we created a function that simply runs a pa11y test. This has been added to our CSS and Pattern Lab Gulp watch tasks, so it runs whenever a component stylesheet or Twig file is saved. The test is run not just against the component code but against the rendered Pattern Lab URL generated from that component. This means it tests the final code and also catches visual errors such as poor color contrast. Besides errors, we show notices and warnings by default, exposing the handy recommendations pa11y gives for things automated tests can’t verify. These settings and many more (including the standard itself) can be changed via the configuration file (see here for instructions on creating a local config in Emulsify).

Final Note on Accessibility Testing and the Future

Automated testing is a great tool in verifying the accessibility of a project but is only one tool and it does not guarantee a fully accessible product on its own. That said, we are so excited to have this in place for all our Drupal projects moving forward (and to offer it back to the community) as an important step towards building more accessible and responsible products for our clients. Also, we hope to add even more automated tests into our toolchain soon. Here’s to building a more accessible web!

Jan 28 2019

Begin with the end in mind—defining our goals

Our collaboration with South Dakota State University’s (SDSU) outreach arm, SDSU Extension, began by defining the user experience and branding issues that the previous site had. The visual design was in need of an update, the team wanted to make information easier for people to find, and mobile users were forced to view the desktop version of the site.

With these issues defined, we put together a series of goals that fell into two major groups—user experience and branding. For the user experience goals, we defined a user-centered approach to ensure that the work we were doing was going to help people using the site engage more with the site and more easily find what they were looking for. For the branding goals, we wanted an improved, modern look and feel that felt like a part of the larger South Dakota State University brand.

Creating a palette to work from (i.e., creating style tiles)

Every design project at Four Kitchens starts with a visual alignment in the form of style tiles, a design deliverable showing colors, fonts, and elements that helps create a common visual language for the project.

These are presented to everyone using InVision Freehand so that as we discuss the options we can add notes directly on the style tiles. For SDSU Extension we had two rounds of style tiles, landing quickly on one that we all agreed was the right direction.

Figuring out what we’ll need (i.e., wire-framing all the things)

Design systems are all the rage in the industry and with good reason. They allow projects to move more quickly by having a library of reusable parts that are ready to go. So at this point in the process for SDSU Extension, it was time to define what those parts needed to be.

We did this by reviewing the current site and discovery document to suss out what was going to be important for the new site. As a group—Four Kitchens and SDSU Extension—we discussed what sorts of things would be vital and what would be nice-to-haves.

From there we worked up a series of wireframes that showed both a component library—a page with every possible thing on it, like cards, quotes, and video callouts—and a few samples of how the new pages could be assembled from these parts.

This process worked out the kinks for trickier components, like the many-level-deep navigation on mobile, while minimizing effort. The quick cycle of posting, reviewing, and implementing feedback led us to a final collection of wireframes.

Making it come to life (i.e., comps)

As soon as wireframes were approved we moved into the next step—breathing life into them. We took the visual language that was defined in the style tile and applied it to the wireframes. The designs included all of the components at small, medium, and large screen sizes.

These components were then quickly assembled into mock pages to show what they would look like when the site was done. Having a wealth of work already done in the form of style tiles and wireframes, we hit on the right direction quickly. Once the first few comps were finalized there was a flood of comps as we built them out faster and faster using previously approved components.

A great collaboration

Working with SDSU Extension on this project was marvelous and we’re happy that it is live and shared with the rest of the world.

Jul 26 2018

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I’m also going to run my tests on Browserstack so that I can test IE/Edge without having to install a VM or anything like that on my mac.

Process

Let’s get everything setup. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, and a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{ "name": "visreg", "version": "1.0.0", "description": "Website with visual regression testing", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } }   "name": "visreg",  "version": "1.0.0",  "description": "Website with visual regression testing",  "scripts": {    "test": "echo \"Error: no test specified\" && exit 1"

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The Browserstack wdio package: so that we can run our tests against Browserstack, instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: this is what provides the screenshot capturing and comparison functionality
      • Node notifier: this is totally optional but supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check.

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I’ve specified. While they’re running, we can head over to https://automate.browserstack.com/ to watch the tests run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. I’ll make a small change to the button background color which might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let’s pretend this is actually a desired change. The original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the “latest” image into the “baselines” folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you’re creating a new component, or you run the suite and find a regression in one image, it’s useful to be able to run a single test instead of the entire suite. This is especially true once you have a large collection of test files covering dozens of aspects of your site. Let’s take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. This really wasn’t necessary on the page either, but whatever.

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because it’s globbing, and running every file that ends in .test.js anywhere in the _patterns directory. The problem is this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to just run the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts. Everything after that first -- is passed directly to the underlying script (in our case, test is a custom script that calls ./node_modules/webdriverio/bin/wdio). There’s more info on the run-script documentation page.
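In other words, the command above effectively expands to something like this (a sketch of what npm runs under the hood):

./node_modules/webdriverio/bin/wdio --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js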

If I scroll up a bit, you’ll see that when I ran npm test there were six passing tests: one run per browser for each test file. We have two tests, and we’re checking against three browsers, so that’s a total of six test runs.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Tests Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.
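To give you an idea of its shape, a minimal version of that file could look like the sketch below. The merge approach and the require path are assumptions, not the exact contents of the real file.

// wdio.conf.quick.js — a minimal sketch: inherit everything from the main
// config, but override the three-browser matrix with a single browser.
// NOTE: the base config file name is an assumption.
const base = require('./wdio.conf.js');

exports.config = Object.assign({}, base.config, {
  capabilities: [{ browserName: 'chrome' }],
});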

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick, and you (as well as the rest of your team) would have to remember the file name. Capturing the file name in a second custom script simplifies things.
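For illustration, the relevant part of package.json might be shaped something like this; the exact commands in the real file may differ.

{
  "scripts": {
    "test": "./node_modules/webdriverio/bin/wdio",
    "test:quick": "./node_modules/webdriverio/bin/wdio wdio.conf.quick.js"
  }
}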

When I run npm run test:quick, you’ll see that two tests were run. We have two tests, and they’re run against one browser, so that simplifies things quite a bit. And it ran in only 31 seconds, which is definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser, and it took 28 seconds. There’s actually not a huge difference between this and the last run, because we can run three tests in parallel, and with only two tests we never hit the queue, which would otherwise add significantly to the total run time. If we had two dozen tests, each running against three browsers, that’s a lot of queue time, and even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component, for an element that has issues in one browser, and for refactoring code to make it more performant while making sure your changes don’t break anything significant (or, if they do, alerting you sooner rather than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect some other part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Jul 11 2018
Jul 11

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:

footer-component.twig:

...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...
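For context, the component’s markup would then use that variable somewhere, along these lines. The markup is illustrative, not from the original component.

<footer class="footer" style="background-image: url('{{ footer_background_image }}');">
  {# ... footer content ... #}
</footer>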

Setting the default this way makes the image load for Drupal, but it fails in Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:

footer-component.yml:

footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page that included the footer component, we had to add that line to the page’s yml file. e.g.:

basic-page.twig:

...
{% include '/whatever/footer-component.twig' %}
...

basic-page.yml:

...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab’s global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{ "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png" }     "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png"

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.

We did this with our icons in Emulsify:

_icon.twig

paths.json

End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don’t run into namespacing issues. We accounted for this in the header.json file by namespacing everything under a “header” key. That way the footer nav doesn’t pull our header menu items, or vice versa.
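For example, header.json might be shaped something like this. The keys and values are purely illustrative, not the real Emulsify data.

{
  "header": {
    "menu_items": [
      { "title": "About", "url": "/about" },
      { "title": "Blog", "url": "/blog" }
    ]
  }
}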

Here’s an example homepage, home.twig, that pulls menu items for the header and the footer from data.json files:

header.json

footer.json
