Sep 23 2020

Working in digital design and development, you grow accustomed to the rapid pace of technology. For example: After much anticipation, the latest version of Drupal was released this summer. Just months later, the next major version is in progress.

At July’s all-virtual DrupalCon Global, the open-source digital experience conference, platform founder Dries Buytaert announced Drupal 10 is aiming for a June 2022 release. Assuming those plans hold, Drupal 9 would have the shortest release lifetime of any recent major version.

For IT managers, platform changes generate stress and uncertainty. Considering the time-intensive migration process from Drupal 7 to 8, updating your organization’s website can be costly and complicated. Consequently, despite a longtime absence of new features, Drupal 7 still powers more websites than Drupal 8 and 9 combined. And, as technology marches on, the end of its life as a supported platform is approaching.

Fortunately, whatever version your website is running, Drupal is not running away from you. Drupal’s users and site builders may be accustomed to expending significant resources to update their website platform, but the plan for more frequent major releases alleviates the stress of the typical upgrade. And, for those whose websites are still on Drupal 7, Drupal 10 will continue offering a way forward.

The news that Drupal 10 is coming sooner rather than later might have been unexpected, but you still have no reason to panic just yet. However, your organization shouldn’t stand still, either.

Image via Dri.es

The End for Drupal 7 Is Still Coming, but Future Upgrades Will Be Easier

Considering upgrading to Drupal 8 involves the investment of building a new site and migrating its content, it’s no wonder so many organizations have been slow to update their platform. Drupal 7 is solid and has existed for nearly 10 years. And, fortunately, it’s not reaching its end of life just yet.

At the time of Drupal 9’s release, Drupal 7’s planned end of life was set to arrive late next year. This meant the community would no longer release security advisories or bug fixes for that version of the platform. Affected organizations would need to contact third-party vendors for their support needs. With the COVID-19 pandemic upending businesses and their budgets, the platform’s lifespan has been extended to November 28, 2022.

Drupal’s development team has retained its internal migration system through versions 8 and 9, and it remains part of the plan for the upcoming Drupal 10 as well. And the community continues to maintain and improve the system in an effort to make the transition easier. If your organization is still on Drupal 7 now, you can use the migration system to jump directly to version 9, or version 10 upon its release. Drupal has no plans to eliminate that system until Drupal 7 usage numbers drop significantly.

Once Drupal 10 is ready for release, Drupal 7 will finally reach its end of life. However, paid vendors will still offer support options that will allow your organization to maintain a secure website until you’re ready for an upgrade. But make a plan for that migration sooner rather than later. The longer you wait for this migration, the more new platform features you’ll have to integrate into your rebuilt website.

Initiatives for Drupal 10 Focus on Faster Updates, Third-Party Software

In delivering his opening keynote for DrupalCon Global, Dries Buytaert outlined five strategic goals for the next iteration of the platform. Like the work for Drupal 9 that began within the Drupal 8 platform, development of Drupal 10 has begun under the hood of version 9.

A Drupal 10 Readiness initiative focuses on upgrading the third-party components that count as technological dependencies. One crucial component is Symfony, the PHP framework Drupal is built upon. Symfony issues a major release every two years, which requires Drupal to update as well to stay current. The transition from Symfony 2 to Symfony 3 created challenges for core developers working on the 8.4 release, which introduced changes that impacted many parts of Drupal’s software.

To avoid a repeat of those difficulties, it was determined that the breaking changes involved in a new Symfony major release warranted a new Drupal major release as well. While Drupal 9 is on Symfony 4, the Drupal team hopes to launch 10 on Symfony 6, which is a considerable technical challenge for the platform’s team of contributors. However, once complete, this initiative will extend the lifespan of Drupal 10 to as long as three or four years.

Other announced initiatives included greater ease of use through more out-of-the-box features, a new front-end theme, creating a decoupled menu component written in JavaScript, and, in accordance with its most requested feature, automated security updates that will make it as easy as possible to upgrade from 9 to 10 when the time comes. For those already on Drupal 9, these are some of the new features to anticipate in versions 9.1 through 9.4.

Less Time Between Drupal Versions Means an Easier Upgrade Path

The shift from Drupal 8 to this summer’s release of Drupal 9 was close to five years in the making. Fortunately for website managers, that update was a far cry from the full migration required from version 7. While there are challenges such as ensuring your custom code is updated to use the most recent APIs, the transition was doable with a good tech team at your side.

Still, the work that update required could generate a little anxiety given how comparatively fast another upgrade will arrive. But the shorter time frame will make the move to Drupal 10 easier for everybody. Less time between updates also translates to less deprecated code, especially if you’re already using version 9. But if you’re not there yet, the time to make a plan is now.

Jan 23 2020

In the Drupal support world, working on Drupal 7 sites is a necessity. But switching between Drupal 7 and Drupal 8 development can be jarring, if only for the coding style.

Fortunately, I’ve got a solution that makes working in Drupal 7 more like working in Drupal 8. Use this three-part approach to have fun with Drupal 7 development:

  • Apply Xautoload to keep your PHP skills fresh, modern, and compatible with all frameworks and make your code more reusable and maintainable between projects. 
  • Use the Drupal Libraries API to use third-party libraries. 
  • Use the Composer template to push the boundaries of your programming design patterns. 

Applying Xautoload

Xautoload is simply a module that enables PSR-0/4 autoloading. Using Xautoload is as simple as downloading and enabling it. You can then start using use and namespace statements to write object-oriented programming (OOP) code.

For example:

xautoload.info

name = Xautoload Example
description = Example of using Xautoload to build a page
core = 7.x
package = Midcamp Fun

dependencies[] = xautoload:xautoload

xautoload_example.module

<?php

use Drupal\xautoload_example\SimpleObject;

function xautoload_example_menu() {
  $items['xautoload_example'] = array(
    'page callback' => 'xautoload_example_page_render',
    'access callback' => TRUE,
  );
  return $items;
}

function xautoload_example_page_render() {
  $obj = new SimpleObject();
  return $obj->render();
}

src/SimpleObject.php

<?php

namespace Drupal\xautoload_example;

class SimpleObject {

  public function render() {
    return array(
      '#markup' => "<p>Hello World</p>",
    );
  }

}

Enabling and running this code causes the URL /xautoload_example to spit out “Hello World”. 

You’re now ready to add in your own OOP!

Using Third-Party Libraries

Natively, Drupal 7 has a hard time autoloading third-party library files. But there are contributed modules out there (like the Guzzle module) that wrap object-oriented third-party libraries to provide a functional interface. Now that you have Xautoload in your repertoire, you can use its functionality to autoload libraries as well.

I’m going to show you how to use the Drupal Libraries API module with Xautoload to load a third-party library. You can find examples of all the different ways you can add a library in xautoload.api.php. I’ll demonstrate an easy example by using the php-loremipsum library:

1. Download your library and store it in sites/all/libraries. I named the folder php-loremipsum. 

2. Add a function implementing hook_libraries_info to your module by pulling in the namespace from Composer. This way, you don’t need to set up all the namespace rules that the library might contain.

function xautoload_example_libraries_info() {
  return array(
    'php-loremipsum' => array(
      'name' => 'PHP Lorem Ipsum',
      'xautoload' => function ($adapter) {
        $adapter->composerJson('composer.json');
      },
    ),
  );
}

3. Change the page render function to use the php-loremipsum library to build content.

use joshtronic\LoremIpsum;

function xautoload_example_page_render() {
  $library = libraries_load('php-loremipsum');
  if ($library['loaded'] === FALSE) {
    throw new \Exception("php-loremipsum didn't load!");
  }
  $lipsum = new LoremIpsum();
  return array(
    '#markup' => $lipsum->paragraph('p'),
  );
}

Note that I needed to tell the Libraries API to load the library, but I then have access to all the namespaces within it. Keep in mind that the dependencies of some libraries are immense. You’ll very likely need to run Composer from within the library and commit the result when you first start out. In such cases, make sure to include the Composer autoload.php file.

Another tip: Abstract your libraries_load() functionality out in such a way that if the class you want already exists, you don’t call libraries_load() again. Doing so removes libraries as a hard dependency from your module and enables you to use Composer to load the library later on with no more work on your part. For example:

function xautoload_example_load_library() {
  if (!class_exists('\joshtronic\LoremIpsum', TRUE)) {
    if (!module_exists('libraries')) {
      throw new \Exception('Include php-loremipsum via composer or enable libraries.');
    }
    $library = libraries_load('php-loremipsum');
    if ($library['loaded'] === FALSE) {
      throw new \Exception("php-loremipsum didn't load!");
    }
  }
}

And with that, you’ve conquered the challenge of using third-party libraries!

Setting up a New Site with Composer

Speaking of Composer, you can use it to simplify the setup of a new Drupal 7 site. Just follow the instructions in the Readme for the Composer Template for Drupal Project. From the command line, run the following:

composer create-project drupal-composer/drupal-project:7.x-dev <YOUR SITE DIRECTORY> --no-interaction

This code gives you a basic site with a source repository (a repo that doesn’t commit contributed modules and libraries) to push up to your Git provider. (Note that migrating an existing site to Composer involves a few additional considerations and steps, so I won’t get into that now.)

If you’re generating a Pantheon site, check out the Pantheon-specific Drupal 7 Composer project. But wait: The instructions there advise you to use Terminus to create your site, and that approach attempts to do everything for you—including setting up the actual site. Instead, you can simply use composer create-project to test your site in something like Lando. Make sure to run composer install if you copy down a repo.

From there, you need to enable the Composer Autoload module, which is automatically required in the composer.json you pulled in earlier. Then, add all your modules to the require portion of the file or use composer require drupal/module_name just as you would in Drupal 8.

You now have full access to all the Packagist libraries and can use them in your modules. To use the previous example, you could remove php-loremipsum from sites/all/libraries, and instead run composer require joshtronic/php-loremipsum. The code would then run the same as before.

Have fun!

From here on out, it’s up to your imagination. Code and implement with ease, using OOP design patterns and reusable code. You just might find that this new world of possibilities for integrating new technologies with your existing Drupal 7 sites increases your productivity as well.

Dec 09 2019

With Drupal 9 set to be released later next year, upgrading to Drupal 8 may seem like a lost cause. However, beyond the fact that Drupal 8 is superior to its predecessors, it will also make the inevitable upgrade to Drupal 9, and future releases, much easier. 

Acquia puts it best in this eBook, where they cover common hangups that may prevent migration to Drupal 8 and the numerous reasons to push past them.

The Benefits of Drupal 8

To put it plainly, Drupal 8 is better. Upon its release, the upgrade shifted the way Drupal operates and has only improved through subsequent patches and iterations, most recently with the release of Drupal 8.8.0.

Some new features of Drupal 8 that surpass those of Drupal 7 include improved page building tools and content authoring, multilingual support, and the inclusion of JSON:API as part of Drupal core. We discussed some of these additions in a previous blog post.

Remaining on Drupal 7 means hanging on to a less capable CMS. Drupal 8 is simply more secure with better features.

What Does Any of This Have to Do With Drupal 9?

With an anticipated release date of June 3, 2020, Drupal 9 will see the CMS pivot to an iterative release model, moving away from the monolithic major-version rewrites that have made upgrading so painful in the past. That means migrating to Drupal 8 is the last major migration Drupal sites will have to undertake. As Acquia points out, one might think “Why can’t I just wait to upgrade to Drupal 9?”

While migrating from Drupal 7 to Drupal 9 would be essentially the same process as migrating to Drupal 8, Drupal 7 goes out of support in November 2021. As that deadline approaches, upgrading becomes an increasingly pressing necessity. By migrating to Drupal 8 now, you avoid the complications that come with a hurried migration and can take on the process incrementally.

So why wait? 

To get started with Drupal migration, be sure to check out our Drupal Development Services, and come back to our blog for more updates and other business insights. 
 

Mar 13 2019

Note: This post refers to Drupal 8, but is very applicable to Drupal 7 sites as well.

Most Drupal developers are experienced building sitewide search with Search API and Views. But it’s a task that’s easy to learn and hard to master. These are the most common mistakes I see when it’s done:

Not reviewing Analytics

Before you start, make sure you have access to analytics if relevant. You want to get an idea of how much sitewide search is being used and what the top searches are. On many sites, sitewide search usage is extremely low and you may need to explain this statistic to stakeholders asking for any time-consuming search features (and yourself before you start going down rabbit holes of refinements).

Take a look for yourself at how the sitewide search is currently performing for the top keywords users are giving it. Do the relevant pages come up first? You’ll take this into account when configuring boosts.

Using Solr for small sites

Drupal 8 Search API comes with database search included. Search API DB has come a long way over the years and is likely to have the features you need for smaller sites. Using a Solr backend is going to add complexity that may not be worth it for the amount of value your sitewide search is giving. Remember, if you use a Solr backend you have to have Solr running on all environments used in the project and you’ll have to reindex when you sync databases.

Not configuring all environments for working Solr

Which takes us to this one. If you do use Solr (or another server-side index) you need to also make sure your team has Solr running on their local environments and has an index for the site. 

Your settings.php needs to be configured to connect to the right index on each environment. We use Probo for review sandboxes so we need to configure our Probo builds to use the right search index and to index it on build.
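For example, with the Search API Solr module you can override a server’s connection details per environment from settings.php. A minimal sketch, assuming a Search API server with the machine name my_solr (the server name and values here are hypothetical):

// settings.php -- point the "my_solr" Search API server at the local Solr.
$config['search_api.server.my_solr']['backend_config']['connector_config']['host'] = '127.0.0.1';
$config['search_api.server.my_solr']['backend_config']['connector_config']['port'] = '8983';
$config['search_api.server.my_solr']['backend_config']['connector_config']['core'] = 'drupal_local';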

Missing fields in index or wrong type

Always include the ‘Rendered HTML’ field in your search index rather than trying to capture every text field on all your content types and then having to come back to add more every time you add a field. Include the title field as well, but don’t forget to use ‘Fulltext’ as its field type. Only ‘Fulltext’ text fields are searchable by word.

Not configuring boosts

In your Processor settings, use Type-specific boosting and Tag-boosting via HTML filter. Tag boosting is straightforward: boost headers. For type-specific boosting you’re not necessarily just boosting the most important content types, but also thinking about what’s in the index and what people are likely looking for. Go back to your analytics for this. 

For example, when someone searches for a person’s name, are they likely wanting the top result to be the bio and contact info, a news posting mentioning that person, or a white paper authored by the person? So, even if staff bios are not the most important content on the site, perhaps they will need to be boosted high in search, where they are very relevant.

Not ordering by relevance

Whoops. This is a very common and devastating mistake. All your boost work be damned if you forget this. The View you make for search results needs to order results by Relevance: Descending.

Using AJAX

Don’t use the setting to ‘Use AJAX’ on your search results View. Doing so would mean that search results don’t have unique URLs, which is bad for user experience and analytics. It’s all about the URLs not about the whizzbang.

Not customizing the query string

Any time you configure a View with an exposed filter, take the extra second to customize the query string it is going to use. ‘search’ is a better query string than ‘search_api_fulltext’ for the search filter. URLs are part of your user interface.

No empty text

Similarly, when you add an exposed filter to a search you should also almost always be adding empty text. “No results match your search” is usually appropriate.

Facets that don’t speak to the audience

Facets can be useful for large search indexes and certain types of sites. But too many or too complex facets just create confusion. ‘Content-type’ is a very common facet, but if you use it, make sure you only include in its options the names of content types that are likely to make sense to visitors. For example, I don’t expect my visitors to understand the technical distinction between a ‘page’ and a ‘landing page’ so I don’t include facet links for these.

A screenshot of facets in Drupal. You can exclude confusing facet options.

Making search results page a node

I tell my team to make just about every page a visitor sees a node. This simplifies things for both editors and developers. It also ensures every page is in the search index: if you build key landing pages like ‘Events Calendar’ as Views pages or as custom routes, those pages will not be found in your search results.

One important exception is the Search Results page itself. You don’t want your search results page in the search index: this can actually create an infinite loop when you search. Let this one be a Views page, not a Views block you embed into a node.

Important page content not in the ‘content’

Speaking of blocks and nodes, the way you architect your site will determine how well your search works. If you build your pages by placing blocks via core Block Layout, these blocks are not part of the page ‘content’ that gets indexed in the ‘Rendered HTML.’ Anything you want to be searchable needs to be part of the content. 

You can embed blocks in node templates with Twig Tweak, or you can reference blocks as part of the content (I use Paragraphs and Block Field.)

Not focusing on accessibility

The most accessible way to handle facets is to use the ‘List of Links’ widget. You can also add some visually hidden help text just above your facet links. A common mistake is to hide the ‘Search’ label on the form. Instead of display: none, use the ‘visually-hidden’ class.
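If you need the same pattern outside of Drupal’s own markup, the underlying CSS looks roughly like core’s visually-hidden rule (an approximation from memory, not a verbatim copy):

.visually-hidden {
  /* Keep the element in the accessibility tree while hiding it visually. */
  position: absolute !important;
  overflow: hidden;
  clip: rect(1px, 1px, 1px, 1px);
  width: 1px;
  height: 1px;
  word-wrap: normal;
}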

Feb 01 2019

In this article we will see how to update data models in Drupal 8, how to distinguish between updating the model and updating content, how to create default content, and finally the procedure to adopt for successful deployments that avoid surprises in a continuous integration/delivery Drupal cycle.

Before we start, I would encourage you to read the documentation of hook_update_N() and to take into account all the possible impacts before writing an update.

Updating the database (running hook updates and/or importing configuration) is a very problematic task during a Drupal 8 deployment process, because the order in which structure and data updates are applied is not well defined in Drupal, and this can pose several problems if not completely controlled.

It is important to differentiate between a contributed module published on drupal.org and aimed at a wide audience, and a custom Drupal project (a set of Drupal contrib/custom modules) designed to provide a bespoke solution to a client’s needs. In a contributed module it is rare to have a real need to create instances of configuration/content entities; deploying a custom Drupal project, on the other hand, makes updating data models more complicated. In the following sections we will list all the possible types of updates in Drupal 8.

The Field module allows us to add fields to bundles. We must distinguish between the data structure that will be stored in the field (the static schema() method) and all the settings of the field and its storage, which are stored as configuration. All the dependencies related to the configuration of a field are stored in the field_config configuration entity, and all the dependencies related to its storage are stored in the field_storage_config configuration entity. Base fields are stored by default in the entity’s base table.

Configurable fields are fields that can be added via the UI and attached to a bundle; they can be exported and deployed. Base fields are not managed by the field_config and field_storage_config configuration entities.

To update an entity definition or its component definitions (field definitions, for example, if the entity is fieldable), we can implement hook_update_N(). In this hook, don’t use APIs that require a full Drupal bootstrap (e.g. the database with CRUD actions, services, …); to do this type of update safely, use the methods proposed by the EntityDefinitionUpdateManagerInterface contract (e.g. updating the entity keys, updating a base field definition common to all bundles, …).
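As a minimal sketch (the module and field names are hypothetical), installing a new base field definition from hook_update_N() via that contract might look like this:

/**
 * Installs the new "featured" base field on nodes (hypothetical example).
 */
function mymodule_update_8001() {
  $definition = \Drupal\Core\Field\BaseFieldDefinition::create('boolean')
    ->setLabel(t('Featured'))
    ->setDefaultValue(FALSE);
  // The entity definition update manager is safe to use here; it does not
  // require a full bootstrap.
  \Drupal::entityDefinitionUpdateManager()
    ->installFieldStorageDefinition('featured', 'node', 'mymodule', $definition);
}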

To update existing entities or their field data (in the case of a fieldable entity) following a definition modification, we can implement hook_post_update_NAME(). In this hook you can use all the APIs you need to update your entities.
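Continuing the hypothetical example above, a hook_post_update_NAME() implementation could then populate the new field on existing entities (unbatched here for brevity; real code should use $sandbox to batch the work):

/**
 * Flags existing articles as not featured (hypothetical example).
 */
function mymodule_post_update_populate_featured(&$sandbox) {
  // A full bootstrap is available, so entity CRUD is safe here.
  $nids = \Drupal::entityQuery('node')
    ->condition('type', 'article')
    ->accessCheck(FALSE)
    ->execute();
  foreach (\Drupal\node\Entity\Node::loadMultiple($nids) as $node) {
    $node->set('featured', FALSE);
    $node->save();
  }
}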

To update the schema of a simple or complex configuration (a configuration entity), or a schema defined in a hook_schema() implementation, we can implement hook_update_N().

In a custom Drupal project we are often led to create custom content types or bundles of custom entities (something we do not normally do in a contributed module, and rarely do in an installation profile). A site building action allows us to create these elements, which are then exported to yml files and deployed to production using the Drupal configuration manager.

A bundle definition is a configuration entity that defines the global schema; as mentioned earlier, we can implement hook_update_N() to update the model in this case. Bundles are instances that persist as Drupal configuration and follow the same schema. To update bundles, the updated configuration must be exported using the configuration manager so it can be imported into production later. Several problems can arise:

  • If we add a field to a bundle and want to create content for this field during the deployment, this action is not trivial using the current workflow (drush updatedb -> drush config-import), since hook_post_update_NAME() can’t be used: it’s executed before the configuration import. 
  • The same problem arises if we want to update fields of bundles that have existing data: hook_post_update_NAME(), which is designed to update existing content or entities, runs before the configuration is imported. What is the solution to this problem? (We will look at one later in this article.)

Now the question is: How to import default content in a custom Drupal project?

Importing default content for a site is an action that is not well documented in Drupal. In an installation profile this import is often done in hook_install(), because default content usually does not have a complex structure with levels of nested references; in some cases we can use the Default Content module. In a regular module, however, we can’t create content in hook_install(), simply because when the module is installed the configuration it depends on has not yet been fully imported.

In a recent project I used the drush php-script command to execute import scripts after (drush updatedb -> drush config-import), but this command is not always available during the deployment process. The first idea that comes to mind is to subscribe to the event triggered after the configuration import in order to create the content that will be available to site editors, but using an event is not a nice developer experience; hence the introduction of a new hook, hook_post_config_import_NAME(), that runs once after the database updates and configuration import. Another hook, hook_pre_config_import_NAME(), has also been introduced to fix performance issues.
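Since these hooks are a custom addition rather than core API, their exact signature may differ from project to project. As a rough sketch, an implementation could create default content once the new configuration exists (all names and values here are hypothetical):

/**
 * Creates default content after the configuration import.
 */
function mymodule_post_config_import_create_defaults() {
  // The new bundle and its fields have been imported, so content creation works.
  $node = \Drupal\node\Entity\Node::create([
    'type' => 'landing_page',
    'title' => 'Welcome',
  ]);
  $node->save();
}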

A workflow that works for me

To achieve a successful Drupal deployment in continuous integration/delivery cycles using Drush, the most generic workflow I’ve found at the moment, while waiting for a deployment API in core, is as follows:

  1. drush updatedb
    • hook_update_N() : To update the definition of an entity and its components
    • hook_post_update_NAME() : To update entities when you have made an entity definition modification (entity keys, base fields, …)
  2. hook_pre_config_import_NAME() : CRUD operations (e.g. creating terms that will be taken as default values when importing configuration in the next step)
  3. drush config-import : Importing the configuration (e.g. new bundle field, creation of a new bundle, image styles, image crops, …)
  4. hook_post_config_import_NAME(): CRUD operations (e.g. creating contents, updating existing contents, …)

This approach works well for us, and I hope it will be useful for you. If you’ve got any suggestions for improvements, please let me know via the comments.

Jan 28 2019

Begin with the end in mind—defining our goals

Our collaboration with South Dakota State University’s (SDSU) outreach arm, SDSU Extension, began by defining the user experience and branding issues that the previous site had. The visual design was in need of an update, the team wanted to make information easier for people to find, and mobile users were forced to view the desktop version of the site.

With these issues defined, we put together a series of goals that fell into two major groups—user experience and branding. For the user experience goals, we defined a user-centered approach to ensure that the work we were doing was going to help people using the site engage more with the site and more easily find what they were looking for. For the branding goals, we wanted an improved, modern look and feel that felt like a part of the larger South Dakota State University brand.

Creating a palette to work from (e.g. creating Style Tiles)

Every design project at Four Kitchens starts with a visual alignment in the form of style tiles, a design deliverable showing colors, fonts, and elements that helps create a common visual language for the project.

These are presented to everyone using InVision Freehand so that as we discuss the options we can add notes directly on the style tiles. For SDSU Extension we had two rounds of style tiles, landing quickly on one that we all agreed was the right direction.

Figuring out what we’ll need (e.g. wire-framing all the things)

Design systems are all the rage in the industry and with good reason. They allow projects to move more quickly by having a library of reusable parts that are ready to go. So at this point in the process for SDSU Extension, it was time to define what those parts needed to be.

We did this by reviewing the current site and discovery document to suss out what was going to be important for the new site. As a group—Four Kitchens and SDSU Extension—we had discussions to detail what sorts of things would be vital and what would be nice-to-haves.

From there we worked up a series of wireframes that showed both a component library—a page with every possible thing on it, like cards, quotes, and video callouts—and a few samples of how the new pages could be assembled from these parts.

This process worked out the kinks for trickier components, like the many-level-deep navigation on mobile, while minimizing effort. The cycle of posting, reviewing, and implementing feedback was quick, leading us to a final collection of wireframes.

Making it come to life (e.g. comps)

As soon as wireframes were approved we moved into the next step—breathing life into them. We took the visual language that was defined in the style tile and applied it to the wireframes. The designs included all of the components at small, medium, and large screen sizes.

These components were then quickly assembled into mock pages to show what they would look like when the site was done. Having a wealth of work already done in the form of style tiles and wireframes, we hit on the right direction quickly. Once the first few comps were finalized there was a flood of comps as we built them out faster and faster using previously approved components.

A great collaboration

Working with SDSU Extension on this project was marvelous and we’re happy that it is live and shared with the rest of the world.

Dec 14 2018

A lot of people have been jumping on the headless CMS bandwagon over the past few years, but I’ve never been entirely convinced. Maybe it’s partly because I don’t want to give up on the sunk costs of what I’ve learned about Drupal theming, and partly because I’m proud to be a boring developer, but I haven’t been fully sold on the benefits of decoupling.

On our current project, we’ve continued to take an approach that Dries Buytaert has described as “progressively decoupled Drupal”. Drupal handles routing, navigation, access control, and page rendering, while rich interactive functionality is provided by a JavaScript application sitting on top of the Drupal page. In the past, we’d taken a similar approach, with AngularJS applications on top of Drupal 6 or 7, getting their configuration from Drupal.settings, and for this project we decided to use React on top of Drupal 8.

There are a lot of advantages to this approach, in my view. There are several discrete interactive applications on the site, but the bulk of the site is static content, so it definitely makes sense for that content to be rendered by the server rather than constructed in the browser. This brings a lot of value in terms of accessibility, search engine optimisation, and performance.

Admittedly, a decoupled system is almost inevitably more complex, with more potential points of failure.

The application can be developed independently of the CMS, so specialist JavaScript developers can work without needing to worry about having a local Drupal build process.

If at some later date, the client decides to move away from Drupal, or at the point where we upgrade to Drupal 9, the applications aren’t so tightly coupled, so the effort of moving them should be smaller.

Having made the decision to use this architecture, we wanted a consistent framework for managing application configuration, to make sure we wouldn’t need to keep reinventing the wheel for every application, and to keep things easy for the content team to manage.

The client’s content team want to be able to control all of the text within the application (across multiple languages), and be able to preview changes before putting them live.

There didn’t seem to be an established approach for this, so we’ve built a module for it.

As we’ve previously mentioned, the team at Capgemini are strongly committed to supporting the open source communities whose work we depend on, and we try to contribute back whenever we can, whether that’s patches to fix bugs and add new features, or creating new modules to fill gaps where nothing appropriate already exists. For instance, a recent client requirement to promote their native applications led us to build the App Banners module.

Aiming to make our modules open source wherever possible helps us to think in systems, considering the specific requirements of this client as an example of a range of other potential use cases. This helps to future-proof our code, because it’s more likely that evolving requirements can be met by a configuration change, rather than needing a code change.

So, guided by these principles, I’m very pleased to announce the Single Page Application Landing Page module for Drupal 8, or to use the terrible acronym that it has unfortunately but inevitably acquired, SPALP.

On its own, the module doesn’t do much other than provide an App Landing Page content type. Each application needs its own module to declare a dependency on SPALP, define a library, and include its configuration as JSON (with associated schema). When a module which does that is installed, SPALP takes care of creating a landing page node for it, and importing the initial configuration onto the node. When that node is viewed, SPALP adds the library, and a link to an endpoint serving the JSON configuration.
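For illustration, an application’s JSON configuration might look something like this (the keys are hypothetical; each application defines its own structure, validated by its associated schema):

{
  "apiEndpoint": "https://example.com/api/branches",
  "labels": {
    "searchButton": "Find a branch",
    "noResults": "Sorry, no branches were found."
  }
}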

Deciding how to store the app configuration and make all the text editable was one of the main questions, and we ended up answering it in a slightly “un-Drupally” way.

On our old Drupal 6 projects, the text was stored in a separate ‘Messages’ node type. This was a bit unwieldy, and it was always quite tricky to figure out what was the right node to edit.

For our Drupal 7 projects, we used the translation interface, even on a monolingual site, where we translated from English to British English. It seemed like a great idea to the development team, but the content editors always found it unintuitive, struggling to find the right string to edit, especially for common strings like button labels. It also didn’t allow the content team to preview changes to the app text.

We wanted to maintain everything related to the application in one place, in order to keep things simpler for developers and content editors. This, along with the need to manage revisions of the app configuration, led us down the route of using a single node to manage each application.

This approach makes it easy to integrate the applications with any of the good stuff that Drupal provides, whether that’s managing meta tags, translation, revisions, or something else that we haven’t thought of.

The SPALP module also provides event dispatchers to allow configuration to be altered. For instance, we set different API endpoints in test environments.

Another nice feature is that in the node edit form, the JSON object is converted into a usable set of form fields using the JSON forms library. This generic approach means that we don’t need to spend time copying boilerplate Form API code to build configuration forms when we build a new application - instead the developers working on the JavaScript code write their configuration as JSON in a way that makes sense for their application, and generate a schema from that. When new configuration items need to be added, we only need to update the JSON and the schema.

Each application only needs a very simple Drupal module to define its library, so we’re able to build the React code independently, and bring it into Drupal as a Composer dependency.

The repository includes a small example module to show how to implement these patterns, and hopefully other teams will be able to use it on other projects.

As with any project, it’s not complete. So far we’ve only built one application following this approach, and it seems to be working pretty well. Among the items in the issue queue is better integration with the configuration management system, so that we can make it clear if a setting has been overridden for the current environment.

I hope that this module will be useful for other teams - if you’re building JavaScript applications that work with Drupal, please try it out, and if you use it on your project, I’d love to hear about it. Also, if you spot any problems, or have any ideas for improvements, please get in touch via the issue queue.

Nov 28 2018
Moshe Weitzman

I recently worked with the Mass.gov team to transition its development environment from Vagrant to Docker. We went with “vanilla Docker,” as opposed to one of the fine tools like DDev, Drupal VM, Docker4Drupal, etc. We are thankful to those teams for educating and showing us how to do Docker right. A big benefit of vanilla Docker is that skills learned there are generally applicable to any stack, not just LAMP+Drupal. We are super happy with how this environment turned out. We are especially proud of our MySQL Content Sync image — read on for details!

Pretty docks at Boston Harbor. Photo credit.

The heart of our environment is the docker-compose.yml. Here it is, then read on for a discussion about it.

Developers use .env files to customize aspects of their containers (e.g. VOLUME_FLAGS, PRIVATE_KEY, etc.). This built-in feature of Docker is very convenient. See our .env.example file:

The most innovative part of our stack is the mysql container. The Mass.gov Drupal database is gigantic. We have tens of thousands of nodes and 500,000 revisions, each with an unholy number of paragraphs, reference fields, etc. Developers used to drush sql:sync the database from Prod as needed. The transfer and import took many minutes, and had some security risk in the event that sanitization failed on the developer’s machine. The question soon became, “how can we distribute a mysql database that’s already imported and sanitized?” It turns out that Docker is a great way to do just this.

Today, our mysql container builds on CircleCI every night. The build fetches, imports, and sanitizes our Prod database. Next, the build does:
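The exact commands live in the embedded build configuration, but the shape of the step is roughly this (the container and image names here are hypothetical):

# Snapshot the container that now holds the imported, sanitized database,
# then publish it to the private repository.
docker commit mysql_container massgov/mysql-sanitized:latest
docker push massgov/mysql-sanitized:latest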

That is, we commit and push the refreshed image to a private repository on Docker Cloud. Our mysql image is 9GB uncompressed but thanks to Docker, it compresses to 1GB. This image is really convenient to use. Developers fetch a newer image with docker-compose pull mysql. Developers can work on a PR and then when switching to a new PR, do a simple ahoy down && ahoy up. This quickly restores the local Drupal database to a pristine state.

In order for this to work, you have to store MySQL data *inside* the container, instead of using a Docker Volume. Here is the Dockerfile for the mysql image.

Our Drupal container is open source — you can see exactly how it’s built. We start from the official PHP image, then add PHP extensions, Apache config, etc.

An interesting innovation in this container is the use of Docker Secrets in order to safely share an SSH key from host to the container. See this answer and mass_id_rsa in the docker-compose.yml above. Also note the two files below which are mounted into the container:

  • Configure SSH to use the secrets file as the private key 
  • Automatically run ssh-add when logging into the container

Traefik is a “cloud edge router” that integrates really well with docker-compose. Just add one or two labels to a service and its web site is served through Traefik. We use Traefik to provide nice local URLs for each of our services (www.mass.local, portainer.mass.local, mailhog.mass.local, …). Without Traefik, all these services would usually live at the same URL with differing ports.
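For a sense of how little configuration that takes, a docker-compose service might carry labels along these lines (Traefik 1.x-era label syntax; the service name is hypothetical):

services:
  drupal:
    labels:
      - "traefik.frontend.rule=Host:www.mass.local"
      - "traefik.port=80"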

In the future, we hope to upgrade our local sites to SSL. Traefik makes this easy as it can terminate SSL. No web server fiddling required.

Our repository features a .ahoy.yml file that defines helpful aliases (see below). In order to use these aliases, developers download Ahoy to their host machine. This helps us match one of the main attractions of tools like DDev/Lando — their brief and useful CLI commands. Ahoy is a convenience feature and developers who prefer to use docker-compose (or their own bash aliases) are free to do so.

Our development environment comes with 3 fine extras:

  • Blackfire is ready to go — just run ahoy blackfire [URL|DrushCommand] and you’ll get back a URL for the profiling report
  • Xdebug is easily enabled by setting the XDEBUG_ENABLE environment variable in a developer’s .env file. Once that’s in place, the PHP in the container will automatically connect to the host’s PHPStorm or other Xdebug client
  • A chrome-headless container is used by our suite which incorporates Drupal Test Traits — a new open source project we published. We will blog about DTT soon

Of course, we are never satisfied. Here are a couple issues to tackle:

Nov 09 2018

Last week, the Children’s Hospital of Philadelphia (CHOP) Vaccine Makers Project (VMP) won a PR News Digital Award in the category “Redesign/Relaunch of Site.” The awards gala honors the year’s best and brightest campaigns across a variety of media. 

PR News Award on a table.

Our CEO, Alex, and our Director of Client Engagement, Aaron, along with members of the Vaccine Makers team attended the event at the Yale Club in New York City.

Screenshot of a Tweet posted by PR News.

The Vaccine Makers Project (VMP) is a subset of CHOP’s Vaccine Education Center (VEC). It’s a public education portal for students and teachers that features resources such as lesson plans, downloadable worksheets, and videos. 

The Vaccine Makers team first approached us in need of a site that aligned with the branding of CHOP’s existing site. They also wanted a better strategy for site organization and resource classification. Our team collaborated with theirs to build a new site that’s easy to navigate for all users. You can learn more about the project here.

Screenshot of a Tweet from the Vaccine Makers team.

We’d like to thank CHOP and the Vaccine Makers team for giving us the opportunity to work on this project. We’d also like to thank PR News for recognizing our work and hosting such a wonderful event. 

Finally, we’d like to congratulate our incredible team for their endless effort and dedication to this project. 
 

Nov 06 2018
Jody's desk

Hardware

After a long run on MacBook Pros, I switched to an LG Gram laptop running Debian this year. It’s faster, lighter, and less expensive. 

If your development workflow now depends on Docker containers running Linux, the performance benefits you’ll get with a native Linux OS are huge. I wish I could go back in time and ditch Mac earlier.

Containers

For almost ten years I was doing local development in Linux virtual machines, but in the past year, I’ve moved to containers as these tools have matured. The change has also come with us doing less of our own hosting. My Zivtech engineering team has always held the philosophy that you need your local environment to match the production environment as closely as possible. 

But in order to work on many different projects and accomplish this in a virtual machine, we had to standardize our production environments by doing our own hosting. A project that ran on a different stack or just different versions could require us to run a separate virtual machine, slowing down our work. 

As the Drupal hosting ecosystem has matured (Pantheon, Platform.sh, Acquia, etc.), doing our own hosting began to make less sense. As we diversified our production environments more, container-based local development became more attractive, allowing us to have a more light-weight individualized stack for each project.

I’ve been happy using the Lando project, a Docker-based local web development system. It integrates well with Pantheon hosting, automatically making my local environment very close to the Pantheon environments and making it simple to refresh my local database from a Pantheon environment. 

Once I fully embraced containers and switched to a Linux host machine, I was in Docker paradise. Note: you do not need a new machine to free yourself from OSX. You can run Linux on your Mac hardware, and if you don’t want to cut the cord you could try a double boot.

Philadelphia City Hall outside Jody's office
A cool office view (like mine of Philly’s City Hall) is essential for development mojo

Editor

In terms of editors/IDEs I’m still using Sublime Text and vim, as I have for many years. I like Sublime for its performance, especially its ability to quickly search projects with 100,000 files. I search entire projects constantly. It’s an approach that has always served me well. 

I also recommend using a large font size. I’m at 14px. With a larger font size, I make fewer mistakes and read more easily. I’m not sure why most programmers use dark backgrounds and small fonts when it’s obvious that this decreases readability. I’m guessing it’s an ego thing.

Browser

In browser news, I’m back to Chrome after a time on Firefox, mainly because the LastPass plugin in Firefox didn’t let me copy passwords. But I have plenty of LastPass problems in any browser. When working on multiple projects with multiple people, a password manager is essential, but LastPass’s overall crappiness makes me miserable.

Wired: Linux, git, Docker, Lando
Tired: OSX, Virtual machines, small fonts
Undesired: LastPass, egos

Terminal

I typically only run the browser, the text editor, and the terminal, a few windows of each. In the terminal, I’m up to 16px font size. Recommend! A lot of the work I do in the terminal is running git commands. I also work in the MySQL CLI a good deal. I don’t run a lot of custom configuration in my shell – I like to keep it pretty vanilla so that when I work on various production servers I’m right at home.

Terminal screenshot

Git

I get a lot of value out of my git mastery. If you’re using git but don’t feel like a master, I recommend investing time into that. With basic git skills you can quickly uncover the history of code to better understand it, never lose any work in progress, and safely deploy exactly what you want to.

Once I mastered git I started finding all kinds of other uses for it. For example, I was recently working on a project in which I was scraping a thousand pages in order to migrate them to a new CMS. At the beginning of the project, I scraped the pages and stored them in JSON files, which I added to git. At the end of the project, I re-scraped the pages and used git to tell me which pages had been updated and to show me which words had changed.
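That technique is just ordinary git usage: re-scrape into the same files, then ask git what moved. For example:

git status --short     # which pages changed since the first scrape
git diff --stat        # how much each page changed
git diff --word-diff   # exactly which words changed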

On another project, I cut a daily import process from hours to seconds by using git to determine what had changed in a large inventory file. On a third, I used multiple remotes with Jenkins jobs to create a network of sites that run a shared codebase while allowing individual variations. Git is a good friend to have.

Hope you found something useful in my setup. Have any suggestions on taking it to the next level?
 

Oct 29 2018

At this year's BADCamp, our Senior Web Architect Nick Lewis led a session on Gatsby and the JAMstack. The JAMstack is a web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. Gatsby is one of the leading JAMstack-based static site generators, and this session primarily covers how to integrate it with Drupal.

Our team has been developing a "Gatsby Drupal Kit" over the past few months to help jump start Gatsby-Drupal integrations. This kit is designed to work with a minimal Drupal install as a jumping off point, and give a structure that can be extended to much larger, more complicated sites.

This session will leave you with: 

1. A base Drupal 8 site that is connected with Gatsby.  

2. Best practices for making Gatsby work for real sites in production.

3. Sane patterns for translating Drupal's structure into Gatsby components, templates, and pages.

This is not an advanced session for those already familiar with React and Gatsby. Recommended prerequisites are a basic knowledge of npm package management, git, CSS, Drupal, web services, and Javascript. Watch the full session below. 

[embedded content]

Oct 23 2018

PHP 5.6 will officially stop receiving security fixes on December 31, 2018. This version has not been actively developed for a number of years, but people have been slow to move off of it. Beginning in the new year, no bug fixes will be released for this version of PHP, which opens the door to a dramatic increase in security risk if you are not starting the new year on a version of PHP 7. PHP 7 was released back in December 2015, and PHP 7.2 is the latest version you can update to. PHP skipped version 6, so don’t even try searching for it.

Drupal 8.6 is the final Drupal version that will support PHP 5.6. Many other CMSs will be dropping their support for PHP 5.6 in their latest versions as well. But just because PHP 5.6 is supported by your CMS version does not mean you will be safe from security bugs; you will still need to upgrade your PHP version before December 31, 2018. In addition to the security risks, you have already been missing out on many improvements that have been made to PHP.

What Should You Do About This?

You are probably thinking “Upgrade, I get it.” It may actually be more complicated than that, and you may need to refactor. 90-95% of your code should be fine. The version of your CMS may affect the complexity of your conversion. Most major CMSs will handle PHP 7 right out of the box in their most recent versions.

By upgrading to a version of PHP 7, you will see a variety of performance improvements, the most dramatic being speed. Zend Technologies, the company behind PHP’s engine, ran performance tests on a variety of PHP applications to compare PHP 7 against PHP 5.6. These tests compared requests per second across the two versions, which relates to the speed at which code is executed and how fast queries to the database and server are returned. The tests showed that PHP 7 runs twice as fast, and you will see additional improvements in memory consumption.

How Can Mobomo Help?

Mobomo’s team is highly experienced not only in assisting with your conversion, but in reviewing your code to ensure your environment is PHP 7 ready. Our team of experts will review your code and uncover the exact amount of code that needs to be converted. A good number of factors could come into play and affect your timeline: the more customizations and small plugins your site contains, the more complex your code review and eventual conversion could be. Overall, depending on the complexity of the code, your timeline could vary, but this would take a maximum of 3 weeks.

Important Things to Know:

  1. How many contributed modules does your site contain?
  2. How many custom modules does your site contain?
  3. What does your environment look like?
Sep 20 2018

As we enter the month of September and start planning for 2019, it’s a good time to take stock of where Drupal is as a project and see where it’s headed next year and beyond.

Dries Buytaert, founder of Drupal, recently wrote his thoughts around the timeline for Drupal 9 and “end of life” for Drupal 7 and 8. We will look at current Drupal 8 adoption and assess where we sit in 2019 as well.

An important part of this discussion that deserves attention is the rise of Javascript as a programming language, in particular, the rise of React.js. This technology has put CMSs like Drupal in an interesting position. We will look at how React/Javascript are evolving the web and assess what that means for the future of Drupal.

Finally, we will wrap up with thoughts on what these changes mean for both developers and organizations that use Drupal today or evaluating Drupal.

Drupal 8 Adoption

As mentioned previously, Dries has offered his thoughts on the proposed timeline for Drupal 9 in a recent blog entry on his website (see below).

timeline: Drupal 9 will be released in 2020. Drupal 8 end of life is planned for 2021

In early September, Drupal 8.6.0 was released, which included major improvements to the layout system, new media features, and better migration support. This is in addition to many other improvements released since Drupal 8.0 was first unveiled in late 2015.

In terms of adoption, Drupal has picked up steam with 51% growth from April 2017 to April 2018.

Dries Keynote Drupalcon 2018

graph showing that as of April 2018, 241,000 sites run on Drupal

As encouraging as that news is, it should still be noted that Drupal 7’s popularity far exceeds Drupal 8’s, both in current usage (800k compared to 210k+ sites) and in terms of growth year over year.

Drupal’s weekly project usage from August, 2018

graph shows Drupal 7 adoption exceeds that of Drupal 8

Drupal 7 will likely reach its end of life around November 2021, with commercial vendors extending its lifetime through paid support (as was the case with Drupal 6). Will Drupal 8 reach the level of usage and popularity D7 has? Perhaps not, but that is largely due to its focus on more robust, “enterprise” level features.

Drupal as a CMS sits largely in between Wordpress and enterprise proprietary CMSs like Adobe CMS and Sitecore in the marketplace. With the release of Drupal 8, the project moved more into the direction of enterprise features (which could explain some of the fall-off in adoption).

Pantheon had two excellent presentations (also at Drupalcon Nashville) that dive deeper into Drupal’s position in relation to other projects, most notably Wordpress. I would recommend watching WordPress vs Drupal: How the website industry is evolving and What's possible with WordPress 5.0 for more information on this topic.

According to builtwith.com, Drupal still has a sizable chunk of Alexa’s Top Million Sites. It should also be noted that Drupal does better the higher you go up that list, which underscores the project’s focus on the enterprise.

CMS market share (builtwith.com)

5% of Alexa's Top Million sites run on Drupal

Drupal usage statistics (builtwith.com)

8.5% of the top 10k websites run on Drupal

With the release of Drupal 8, Drupal’s target audience started consolidating more towards the enterprise user. In the future Drupal’s success as a project will be tied more closely to performance against platforms like Adobe CMS and Sitecore in the marketplace.

React (and Javascript) take over the world

The thing about Javascript is that it’s been around forever (in tech terms) but recently has taken off. I won’t detail all the reasons here. Seth Brown from Lullabot has one of the best write-ups I have seen from a Drupal community perspective. In short, the ability now to run Javascript both in the browser and on the server (Node.js) has led the surge in Javascript development. Github shows us that more projects are built with Javascript than any other technology and Stack Overflow’s survey tells us that Javascript is the current language of choice.

Stack Overflow 2018 survey results

Javascript is the most popular language

Github projects 2018

2.3 million Javascript Github projects

Dries recognizes the importance of Javascript and has spoken about this recently at MIT. In a bit, we will look at some of Dries’ ideas for the future in relation to the Drupal project.

A few years ago, we saw several JavaScript frameworks pop up. These became very popular for single page applications (SPAs) but also had broader appeal because they could make any website feel more interactive. React.js was open-sourced in 2013 and Ember.js arrived a couple of years earlier; Angular.js is older still but started getting more attention around the same time.

A big issue these frameworks needed to solve was SEO. Initially, they only rendered the page in the browser, which meant site content was largely hidden from search engines. For SPAs this was not necessarily a deal breaker, but it limited the broader adoption of the technology.

Only fairly recently have we seen solutions that can use the same framework to serve pages both in the browser and on the server. Why do I bring this up? Because this has been one of the more difficult challenges, and React.js addresses it better than any other framework. There are many reasons why React.js adoption is exploding, but this is the biggest reason I believe React is king.
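To make that browser/server duality concrete, here is a minimal sketch of server-side rendering with Express and ReactDOMServer. This is an illustration only: the App component, port, and bundle path are placeholder assumptions, not anything from a specific project.

const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./App'); // hypothetical root component

const server = express();

server.get('*', (req, res) => {
  // Render the same component tree to an HTML string on the server,
  // so crawlers receive real markup instead of an empty JS shell.
  const markup = ReactDOMServer.renderToString(React.createElement(App));
  res.send(`<!doctype html>
<div id="root">${markup}</div>
<script src="/bundle.js"></script>`);
});

server.listen(3000);

The client bundle can then call ReactDOM.hydrate() on the same root element, reusing the server-rendered markup instead of rebuilding it from scratch.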

The State of Javascript report from 2017 is often referenced to illustrate React’s popularity (see below):

survey respondents indicated that they have used and would use Javascript again - more than other technologies

John Hannah also has some great graphs on javascriptreport.com that demonstrate React’s dominance in this space (see below).

Npm downloads by technology (1 month)

React downloads (1 month) exceed angular and vue

Npm downloads by technology (1 year)

React downloads (1 year) exceed angular and vue

Finally, it should be noted that GraphQL, another Facebook technology, is also on the rise when paired with React.js, and its growth is intertwined with React's. GraphQL will come into play when we look at how CMSs are adapting to the surge in JavaScript and frontend frameworks.

React and the CMS

Is React compatible with the CMSs of old (e.g., WordPress, Drupal)? Well, yes and no. You can integrate React.js with a Drupal or WordPress theme as you can with many other technologies. In fact, it's very likely that Drupal's admin interface will run on React at some point in the future; there is already an effort underway by core maintainers to do so. Whether or not the admin will be fully decoupled is an open question.

Another example of React admin integration is WordPress' use of React.js to create the much-anticipated Gutenberg WYSIWYG editor.

Gutenberg editor

screenshot of empty Gutenberg editor

In terms of websites in the wild using React with Drupal, there have been solutions out there for many years (TWC, NBA, and others) that use Drupal in a "progressively decoupled" way. The "progressive" approach will still exist as an option in years to come; Dries wrote about this recently in his blog post entitled "How to decouple Drupal in 2018."

The problem I have with this type of solution is that you sometimes get the best (and worst) of both worlds when you bolt a JavaScript framework onto a classic templating system. The truth is that Drupal's templated theme layer is going to have trouble adapting to the new world we now live in (addressed in detail in Drupalcon's "Farewell to Twig" session).

The real power of React comes when you combine it with GraphQL, React Router, and other pieces to create the highly performant, interactive experience users will demand in years to come. To accomplish this type of app-like experience, developers are increasingly looking to APIs, which we will examine next.

CMS as an API

Over the last couple of years, many cloud CMS-as-an-API services have popped up and generated some attention (Contentful might be the most popular). At this time, these APIs don't appear to have disrupted market share for WordPress and Drupal, but they do signify a movement towards the idea of using a CMS as a content service.

The “Decoupled” movement in the Drupal community (previously known as “Headless”) has been a big topic of conversation for a couple of years now. Mediacurrent’s own Matt Davis has helped organize two “Decoupled Days” events to help the Drupal community consolidate ideas and approaches. Projects like Contenta CMS have helped advance solutions around a decoupled architecture. Dries has also addressed Drupal’s progress towards an “API-first” approach recently on his blog.

While cloud services like Contentful are intriguing, there is still no better content modeling tool than Drupal. Additionally, Drupal 8 is already well on its way to supporting JSON API and GraphQL, with the potential to move those modules into core in the near future.
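As a rough sketch of what that content-service model looks like from the frontend, a JavaScript client can pull content straight from a Drupal 8 site running the JSON API module. The domain below is a placeholder, and the example assumes a standard article content type:

// Fetch the five most recent article nodes from Drupal's JSON API endpoint.
fetch('https://example.com/jsonapi/node/article?page[limit]=5')
  .then(res => res.json())
  .then(({ data }) => {
    // Each JSON API resource object exposes its fields under `attributes`.
    data.forEach(article => console.log(article.attributes.title));
  });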

As I look at the landscape of the modern technology stack, I believe Drupal will flourish in the enterprise space as a strong content API paired with a leading JavaScript frontend. React and GraphQL have emerged as the leading candidates to be that frontend of record.

Next, we will look at a relatively new entrant to the family, JAMstacks, and see where they fit in with Drupal (if at all) in the future.

JAMstacks - The Silver Bullet?

The popularity of Netlify hosting and static generators has created some buzz in the Drupal community, particularly Gatsby.js, which we will examine in a moment.

Netlify provides some great tooling for static hosted sites and even offers its own cloud CMS. Mediacurrent actually hosts our own website (Mediacurrent.com) on Netlify. Mediacurrent.com runs on Jekyll which integrates with a Drupal 8 backend so we are well aware of some of the benefits and drawbacks of running a static site.

Where Drupal fits into the JAM stack is as the 'A' (for API), with 'J' being the JavaScript frontend (e.g., React) and 'M' being the statically generated markup. Back in 2016 we liked this idea and settled on Jekyll as the tool of choice for our rebuild, as it was the most popular and well-supported project at the time.

Since then, Gatsby.js has risen dramatically in popularity, and it has a robust source plugin system that enables it to be used as a frontend for many platforms, including Drupal and WordPress.
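To give a sense of how little glue this takes, here is a minimal sketch of a gatsby-config.js wired to a Drupal backend through the gatsby-source-drupal plugin. The base URL is a placeholder; the plugin pulls the site's content into Gatsby's internal GraphQL layer at build time:

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        // A Drupal 8 site exposing the JSON API module (placeholder URL).
        baseUrl: 'https://example.com',
        apiBase: 'jsonapi',
      },
    },
  ],
};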

The creator of Gatsby, former Drupal developer Kyle Mathews, recently spoke on the subject at Decoupled Days 2018. While it's hard to know if JAM stacks like Gatsby have staying power in the years ahead, they do have a lot of appeal in that they simplify many of the decoupled "hard problems" developers commonly run into. The question of scalability is an important one yet to be answered completely, but the upside is tremendous. In a nutshell, Gatsby provides an amazingly performant React/GraphQL solution that can pull in content from practically any source (including Drupal).

Will these stacks close the complexity gap that blocks more sites (large or not) from decoupling? We will have to stay tuned, but the possibilities are intriguing.

Looking Ahead

We have examined the state of Drupal as a project and its future release plans, and how it is adapting to an "API First" future that fits well with the React world in which we now live.

The main takeaway I would offer is that Drupal, while still an amazing tool for managing content, is best suited as a technology paired with a leading frontend like React. With the web evolving from monolithic systems to more of a services-type approach, it makes sense to pair a best-in-class content modeling tool like Drupal with a best-in-class frontend framework like React.js.

What does that mean for the average Drupal developer? My advice to Drupal developers is to “Learn Javascript, deeply.” There is no time like the present to get more familiar with the latest and greatest technology including GraphQL.

For organizations evaluating Drupal, I do think the "decoupled" approach should be strongly considered when planning your next redesign or build. That being said, it's important to understand how the pieces fit together, as well as the challenges and risks of any approach. This article attempts to present a high-level overview of the technology landscape and where it's headed, but every organization's needs are unique. At Mediacurrent we work with clients to educate them on the best solution for their organization.

Have questions or feedback? Hit me up at https://twitter.com/drupalninja/

Sep 18 2018
Sep 18

A slick new feature was recently added to Drupal 8 starting with the 8.5 release: out-of-the-box off-canvas dialog support.

Off-canvas dialogs are those which slide out from off the page. They push over existing content to make space for themselves while keeping that content unobstructed, unlike a traditional dialog popup. These dialogs are often used for menus on smaller screens. Most Drupal 8 users are familiar with Admin Toolbar's use of an off-canvas style menu tray, which is automatically enabled on smaller screens.

Admin toolbar off-canvas

Drupal founder Dries posted a tutorial and I finally got a chance to try it myself.

In my case, I was creating a form for reviewers to submit reviews of long and complicated application submissions. Reviewers needed to be able to easily access the entire application while entering their review. A form at the bottom of the screen would have meant too much scrolling, and a traditional popup would have blocked much of the content they needed to see. Therefore, an off-canvas style dialog was the perfect solution. 

Build your own

With the latest updates to Drupal core, you can now easily add your own off-canvas dialogs.

Create a page for your off-canvas content

The built-in off-canvas integration is designed to load Drupal pages into the dialog window (and only pages, as far as I can tell). So you will need either an existing page, such as a node edit form, or you'll need to create your own custom page through Drupal's routing system, which will contain your custom form or other content. In my case, I created a custom page with a custom form.
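For reference, a route for a page like that might look something like the sketch below in a custom module's routing file. The route name and path match the link built later in this post; the form class and permission are placeholder assumptions:

custom.routing.yml:

# Registers /review-form/{application} and hands it to a custom form class.
custom.review_form:
  path: '/review-form/{application}'
  defaults:
    _form: '\Drupal\custom\Form\ReviewForm'
    _title: 'Review Application'
  requirements:
    _permission: 'access content'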

Create a Link

Once you have a page that you would like to render inside the dialog, you'll need to create a link to that page. This will function as the triggering element to load the dialog.

In my case, I wanted to render the review form dialog from the application full node display itself. I created an "extra field" using hook_entity_extra_field_info(), built the link in hook_ENTITY_TYPE_view(), and then configured the new link field using the Manage Display tab for my application entity. 

/**
 * Implements hook_entity_extra_field_info().
 */
function custom_entity_extra_field_info() {
  $extra['node']['application']['display']['review_form_link'] = array(
    'label' => t('Review Application'),
    'description' => t('Displays a link to the review form.'),
    'weight' => 0,
  );
  return $extra;
}

/**
 * Implements hook_ENTITY_TYPE_view().
 */
function custom_node_view(array &$build, Drupal\Core\Entity\EntityInterface $entity, Drupal\Core\Entity\Display\EntityViewDisplayInterface $display, $view_mode) {
  if ($display->getComponent('review_form_link')) {
    $build['review_link'] = array(
      '#title' => t('Review Application'),
      '#type' => 'link',
      // Assumes "use Drupal\Core\Url;" at the top of the .module file.
      '#url' => Url::fromRoute('custom.review_form', ['application' => $entity->id()]),
    );
  }
}

Add off-canvas to the link

Next you just need to set the link to open using off-canvas instead of as a new page.

There are four attributes to add to your link array in order to do this:

      '#attributes' => array(
        'class' => array('use-ajax'),
        'data-dialog-renderer' => 'off_canvas',
        'data-dialog-type' => 'dialog',
        'data-dialog-options' => '{"width":"30%"}',
      ),
      '#attached' => array(
        'library' => array(
          'core/drupal.dialog.ajax',
        ),
      ),

The first three attributes are required to get your dialog working, and the last is recommended, as it will let you control the size of the dialog.

Additionally, you'll need to attach the Drupal ajax dialog library. Before I added the library to my implementation, I was running into an issue where some user roles could access the dialog and others could not. It turned out this was because the library was being loaded for roles with access to the Admin Toolbar.

The rendered link will end up looking like:

<a href="https://www.zivtech.com/review-form/12345" class="use-ajax" data-dialog-options="{&quot;width&quot;:&quot;30%&quot;}" data-dialog-renderer="off_canvas" data-dialog-type="dialog">Review Application</a>

And that's it! Off-canvas dialog is done and ready for action.

off-canvas-demo-gif
Aug 22 2018
Aug 22

At Mediacurrent, we hear a lot of questions — from the open source community and from our customers — about website accessibility. What are the must-have tools, resources, and modules? How often should I test? To address those and other top FAQs, we hosted a webinar with front end developers Ben Robertson and Tobias Williams, back end developer Mark Casias, and UX designer Becky Cierpich.

The question that drives all others is this: Why should one care about web accessibility? To kick off the webinar, Ben gave a compelling answer. He covered many of the topics you've read about on the Mediacurrent blog: introducing the WCAG Web Content Accessibility Guidelines, some of the benefits of website accessibility (including improved usability and SEO), and the threats of non-compliance.


Adam Kirby: Hi everybody, this is Adam Kirby. I'm the Director of Marketing here at Mediacurrent. Thanks everyone for joining us. Today we're going to go over website accessibility frequently asked questions. 

Our first question is: 

Are there automated tools I can use to ensure my site is accessible and what are the best free tools? 

Becky Cierpich: Yes! Automated tools that I like to use (and these are actually all free tools) are WebAIM's WAVE tool, which you can use as a browser extension. There's also Chrome Accessibility Developer Tools, and Khan Academy has a Chrome plugin called Tota11y. With these, you can get a report of all the errors and warnings on a page. Automated testing catches about 30 percent of errors, but it takes a human to sort through and intellectually interpret the results and then determine the most inclusive user experience.

What's the difference between human and automated testing? 

Becky Cierpich: Well, as I said, the automated tools can catch 30 percent of errors, and we need a human on the back end of that to interpret. Then we use the manual tools, things like ChromeVox or VoiceOver for Mac; those are tools you can turn on if you want to simulate a user experience from the auditory side, and you can do keyboard-only navigation to simulate that experience. Those things will really help you get behind the wheel of what another user is experiencing and catch any errors in the flow that may have come from the code not being up to spec.

Then we also have color contrast checkers; WebAIM has a good one. All of these are free tools, and they allow you to test one color against another. You can verify areas that have too little contrast and test adjustments that will fix it.

What do the terms WCAG, W3C, WAI, Section 508, and ADA Title III mean? 

Mark Casias: I'll take that one - everybody loves a good acronym. WCAG, these are Web Content Accessibility Guidelines. This is the actual document that gives you the ideas of what you need to change or what you want to a base your site on. W3C stands for World Wide Web Consortium - these are the people who control the web standardization. WAI is the Web Accessibility Initiative and refers to the section of the W3C that focuses on accessibility. 

Finally, Section 508 is part of the Rehabilitation Act of 1973 (well, it was added to that act in 1998) and requires Federal agencies to make their electronic and information technology accessible to people with disabilities. ADA Title III is part of the Americans with Disabilities Act which focuses on private businesses; it mandates that they be fully accessible to individuals with disabilities.

 What are the different levels of compliance? 

Tobias Williams: Briefly, the WCAG Web Content Accessibility Guidelines tell us that there are three levels - A, AA, and AAA, with AAA being the highest level of compliance. These are internationally agreed to, voluntary standards. Level A has the minimum requirements for the page to be accessible. Level AA builds on the accessibility of level A, examples include consideration of navigation placement and contrast of colors. Level AAA again builds on the previous level - certain videos have sign language translation and improved color contrast. 

To meet the standard of each level there are 5 requirements that are detailed on the WCAG site. Every actionable part of the site has to be 100% compliant. WCAG doesn't require that a claim to a standard be made, and these grades are not specified by the ADA but are often referenced when assessing how accessible a site is.

Should we always aim for AAA standards?

Ben Robertson: How we approach it is we think you should aim for AA compliance. That's going to make sure that you're covering all the bases. You have to do an internal audit of who are your users and what are your priorities and who you're trying to reach. And then see out of those, where do the AAA guidelines come in and where can you get the biggest bang for your buck? I think that the smart way to do it is to prioritize. Because when you get to the AAA level, it can be a very intense process, like captioning every single video on your site. So you have to prioritize. 

What Drupal modules can help with accessibility?

Mark Casias: Drupal.org has a great page that lists a good portion of these accessibility modules. One that they don't have on there is the AddtoAny module, which allows you to share your content; we investigated this for our work on the Grey Muzzle site and found it was the most accessible option. Here are some other modules you can try:

  • Automatic Alternative Text - Uses the Microsoft Azure Cognitive Services API to generate alternative text for images when none has been provided by the user.
  • Block ARIA Landmark Roles - Inspired by Block Class, this module adds elements to the block configuration forms that allow users to assign an ARIA landmark role to a block.
  • CKEditor Abbreviation - Adds a button to CKEditor for inserting and editing abbreviations. If an existing abbr tag is selected, the context menu also contains a link to edit the abbreviation.
  • CKEditor Accessibility Checker - Enables the Accessibility Checker plugin from CKEditor.com in your WYSIWYG.
  • High contrast - Provides a quick solution to allow the user to switch between the active theme and a high contrast version of it. (Still in beta.)
  • htmLawed - Uses the htmLawed PHP library to restrict and purify HTML for compliance with site administrator policy and standards and for security. Use of the htmLawed library allows for highly customizable control of HTML markup.
  • Siteimprove - Bridges the gap between Drupal and the Siteimprove Intelligence Platform.
  • Style Switcher - Takes the fuss out of creating themes or building sites with alternate stylesheets.
  • Text Resize - Provides your end users with a block that can be used to quickly change the font size of text on your Drupal site.

At what point in a project or website development should I think about accessibility? 

Becky Cierpich: I got this one! Well, the short answer is always and forever. Always think about accessibility. I work a lot at the front end of a project doing strategy and design, so what we try to do is bake it in from the very beginning. We'll take analytics data, and then we can get to know the audience that way. That's how you can plan and prioritize your features. If you want to do AAA features, you can figure out who your users are before you go ahead and plan that out. Another thing we do is look at personas. You can create personas that have limitations, and that way, when you go in and design, you can be sure to capture people who might be challenged by things like a temporary disability, a slow Internet connection, or color blindness - things that people don't necessarily even think of as disabilities.

I would also say don't worry if you already have a site and you know it's definitely not compliant, or you're not sure, because Mediacurrent can come in and audit, using the testing tools to interpret and prioritize, and slowly you can get up to speed over time. It's not something that you have to do overnight.

How often should I check for accessibility compliance? 

Tobias Williams: I'll take this one - I also work on the front end, implementing Becky's designs. When you're building anything new for a site, you should be accessibility testing. During cross-browser testing, we should also be checking that our code meets the accessibility standards we are maintaining.

Now, that's easy to do on a new build because you're actively working on the product. I would say anytime you make any kind of change, or you're focused on any particular part of the site, run a quick accessibility check. Then, even if you don't address the issues straight away, at least you're aware of them; you can document them and work on them later. As for an in-production site where you have a lot of content creators, or where independent groups work on features, it's also a good idea to run quarterly spot checks.

I've seen these on a few sites, but what is an accessibility statement and do I need one?

Becky Cierpich: An accessibility statement is similar to something like a privacy agreement. It's a legal document and there are templates to do it. It basically states clearly what level of accessibility the website is targeting. If you have any areas that still need improvement, you can acknowledge those and outline your plan to achieve those goals and when you're targeting to have that done. It can add a measure of legal protection while you're implementing any fixes. And if your site is up to code, it's a powerful statement to the public that your organization is recognizing the importance of an inclusive approach to your web presence. 

What are the legal ramifications of not having an accessible website? 

Ben Robertson: I'll jump in here, but I just want to make a disclaimer that I'm not a lawyer. Take what I say with several grains of salt! This whole space is pretty new in terms of legal requirements. The landmark case involved Winn-Dixie, the grocery store chain; it was filed under Title III of the ADA, and they lost. It was brought by a blind customer who could not use their website. The court order is available online and it's an interesting read, but basically, there were no damages sought in the case. The court ordered that they had to have an accessibility statement saying they would follow WCAG 2.0. That's a great refresher for site editors to make sure that they're following best practices. They also mandated quarterly automated accessibility testing.

I really think if you have these things in place already, you're really gonna mitigate pretty much all your risk. You can get out in front of it if you have a plan. 

If I have an SEO expert, do I need an accessibility expert as well? 

Tobias Williams: I'll explain what we do at Mediacurrent. We don't have one person who is an expert; we have a group of people. We have several developers, designers, and other people on the team who are just interested in the subject, and we meet once a week and have a Slack channel where we talk about accessibility. Different people are most familiar with different aspects of it, and that allows us to be better rounded as a group.

I can see somebody being hired to be an accessibility expert, but I think that the dialogue within a company about this issue is most important. The more people who are aware of it, the better; you can prevent problems before they occur. So, if I'm aware of the accessibility requirements of an item I'm building, I'm going to build it the right way, as opposed to having it reviewed by the expert and then making changes. The more people who are talking about it and involved in it, the higher the general level of knowledge, and that goes a long way. We don't need to have experts as much as we need to have interested people.

As a content editor, what's my role in website accessibility?

Mark Casias: Your role is very important in website accessibility. All the planning and site building that I do [as a developer] won't mean a thing if you don't attach alt text to your images, or if you use a bunch of H1 title tags because you want the font size to be bigger, and things like that. Content editors need to be aware of what they're doing and its impact on accessibility. They need to know the requirements, and they need to make sure that their information is keeping the website rolling in the right direction. Check out Mediacurrent's Go-To-Guide for Website Accessibility for more on how to do this.

Some of the technical requirements for accessibility seem costly and complex. What are the options for an organization?

Ben Robertson: Yeah, I totally agree. Sometimes you will get an accessibility audit back and just see a long list of things that are red and wrong. It can seem overwhelming. I think there are really a couple of things to keep in mind here. One, you don't have to do everything all at once: you can create an accessibility statement, create a plan, and start working through that plan. Two, it really helps to have someone with experience, or an experienced team, to help you go through this process. There can be things that are a very high priority but very easy to fix, and things that may be a low priority.

You can also think about it this way: if you got a report from a contractor that something you're building was not up to code, you would want to fix that. And so this is kind of a similar thing. People aren't going to be injured from using your website if it's inaccessible but it's the right thing to do. It's how websites are supposed to be built if you're following the guidelines, and it's really good to help your business overall. 

How much does it cost to have and maintain an accessible site? Should I set aside budget just for this? 

Adam Kirby: You will want to set aside a budget to create an accessible site. It is an expense; you're going to have to do a little bit more for your website in order to make sure it's successful, and you're going to have to make changes. How much does it cost? That will vary and depends on where you are with your site build: whether it's an existing site or you're launching a new one, the amount of content on your site, and the variability of content types. So, unfortunately, the answer is that it just depends.

If you need help with an accessibility audit, resolving some known issues on your site, or convincing your leadership to take action on website accessibility, we’re here for you.  

Webinar Slides and Additional Resources 

Jul 26 2018
Jul 26

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I'm also going to run my tests on Browserstack so that I can test IE/Edge without having to install a VM or anything like that on my Mac.

Process

Let’s get everything setup. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, and a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{ "name": "visreg", "version": "1.0.0", "description": "Website with visual regression testing", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } }   "name":"visreg",  "version":"1.0.0",  "description":"Website with visual regression testing",  "scripts":{    "test":"echo \"Error: no test specified\" && exit 1"

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support: “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The Browserstack wdio package: so that we can run our tests against Browserstack, instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service: this is what provides the screenshot capturing and comparison functionality
      • Node notifier: this is totally optional but supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check (see the sketch just after this list).
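As a rough idea of that last file, wdio-visual-regression-service consumes viewports as objects with width and height, so the export might be shaped like this (the sizes below are hypothetical placeholders):

// tests/config/viewports.js
module.exports = [
  { width: 320, height: 568 },   // small phone
  { width: 768, height: 1024 },  // tablet
  { width: 1440, height: 900 },  // desktop
];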

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I've specified. While they're running, we can head over to https://automate.browserstack.com/ to see the tests being run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. I’ll make a small change to the button background color which might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let's pretend this is actually a desired change. The original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the "latest" image into the "baselines" folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you're creating a new component, or you run a test and find a regression in one image, it is useful to be able to run just that single test rather than the entire suite. This is especially true once you have a large suite of test files that cover dozens of aspects of your site. Let's take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. This really wasn’t necessary on the page either, but whatever.

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because it’s globbing, and running every file that ends in .test.js anywhere in the _patterns directory. The problem is this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to just run the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.

If I scroll up a bit, you'll see that when I ran npm test there were six passing tests: one for each browser, for each test file. We have two tests, and we're checking against three browsers, so that's a total of six tests that were run.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Test Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick and you (as well as the rest of your team) have to remember the file name. Capturing the file name in a second custom script simplifies things.

When I run npm run test:quick, you'll see that two tests were run. We have two tests, and they're run against one browser, so that simplifies things quite a bit. And you can see it ran in only 31 seconds, which is definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser and took 28 seconds. There’s actually not a huge difference between this and the last run because we can run three tests in parallel. And since we only have two tests, we’re not hitting the queue which would add significantly to the entire test suite run time. If we had two dozen tests, and each ran against three browsers, that’s a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser, as well as when you're refactoring code to make it more performant and want to make sure your changes don't break anything significant (or if they do, to alert you sooner rather than later). Once you're done with your work, I'd still recommend running the full suite to make sure your changes didn't inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Jul 11 2018
Jul 11

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:

footer-component.twig:

...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...

This makes the image load for Drupal, but fails for Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:

footer-component.yml:

footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page where we included the footer component, we had to add that line to the page's yml file. e.g.:

basic-page.twig:

...
{% include '/whatever/footer-component.twig' %}
...

basic-page.yml:

...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png
...

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab's global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{ "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png" }     "footer_background_image":"/path/relative/to/pattern-lab/footer-image.png"

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.

We did this with our icons in Emulsify:

_icon.twig

paths.json

End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don't run into name-spacing issues. We accounted for this in the header.json file by namespacing everything under the "header" array. That way the footer nav doesn't pull our header menu items, or vice versa.
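For illustration, a header.json namespaced that way might be shaped like this (the menu items are hypothetical):

{
  "header": {
    "menu_items": [
      { "title": "About", "url": "/about" },
      { "title": "Contact", "url": "/contact" }
    ]
  }
}

A footer.json can then define its own items under a "footer" key without either file clobbering the other.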

example homepage home.twig that pulls menu items for the header and the footer from data.json files

header.json

footer.json

Jun 06 2018
Jun 06

I recently had the privilege of helping PRI.org launch a new React Frontend for their Drupal 7 project. Although I was fairly new to using React, I was able to lean on Four Kitchens’ senior JavaScript engineering team for guidance. I thought I might take the opportunity to share some things I learned along the way in terms of organization, code structuring and packages.

Organization

As a lead maintainer of Emulsify, I’m no stranger to component-driven development and building a user interface from minimal, modular components. However, building a library of React components provided me with some new insights worth mentioning.

Component Variations

If a component’s purpose starts to diverge, it may be a good time to split the variations in your component into separate components. A perfect example of this can be found in a button component. On any project of scale, you will likely have a multitude of buttons ranging from actual <button> elements to links or inputs. While these will likely share a number of qualities (e.g., styling), they may also vary not only in the markup they use but interactions as well. For instance, here is a simple button component with a couple of variations:

const Button = props => {
  const { url, onClick } = props;
  if (url) {
    return (
      <a href={url}>
        ...
      </a>
    );
  }
  return (
    <button type="button" onClick={onClick}>
      ...
    </button>
  );
};

Even with the simplicity of this example, why not separate this into two separate components? You could even change this component to handle that fork:

function Button(props) {
  ...
  return url ? <LinkBtn {...props} /> : <ButtonBtn {...props} />;
}

React makes this separation so easy, it really is worth a few minutes to define components that are distinct in purpose. Also, testing against each one becomes a lot easier.
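For instance, here is a minimal sketch of what those tests could look like with Jest and react-test-renderer, assuming the Button fork above (the test names and props are hypothetical):

import React from 'react';
import renderer from 'react-test-renderer';
import Button from './Button';

test('renders a link when a url prop is given', () => {
  // The url prop should trigger the LinkBtn branch.
  const tree = renderer.create(<Button url="/about" />).toJSON();
  expect(tree.type).toBe('a');
});

test('renders a button element otherwise', () => {
  // Without a url, the ButtonBtn branch should render.
  const tree = renderer.create(<Button onClick={() => {}} />).toJSON();
  expect(tree.type).toBe('button');
});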

Reuse Components

While the above might help with encapsulation, one of the main goals of component-driven development is reusability. When you build/test something well once, not only is it a waste of time and resources to build something nearly identical, but you have also opened yourself to new and unnecessary points of failure. A good example from our project is creating a couple of different types of toggles. For accessible, standardized dropdowns, we introduced the well-supported external library Downshift:

In a separate part of the UI, we needed to build an accordion menu:

Initially, this struck me as two different UI elements, and so we built it as such. But in reality, this was an opportunity that I missed to reuse the well-built and tested Downshift library (and in fact, we have a ticket in the backlog to do that very thing). This is a simple example, but as the complexity of the component (or project) increases, you can see where reuse becomes critical.

Flexibility

And speaking of dropdowns, React components lend themselves to a great deal of flexibility. We knew the “drawer” part of the dropdown would need to contain anything from an individual item to a list of items to a form element. Because of this, it made sense to make the drawer contents as flexible as possible. By using the open-ended children prop, the dropdown container could simply just concern itself with container level styling and the toggling of the drawer. See below for a simplified version of the container code (using Downshift):

export default class Dropdown extends Component {
  static propTypes = {
    children: PropTypes.node
  };

  static defaultProps = {
    children: []
  };

  render() {
    const { children } = this.props;
    return (
      <Downshift>
        {({ isOpen }) => (
          <div className="dropdown">
            <Button className="btn" aria-label="Open Dropdown" />
            {isOpen && <div className="drawer">{children}</div>}
          </div>
        )}
      </Downshift>
    );
  }
}

This means we can put anything we want inside of the container:

<Dropdown>
  <ComponentOne />
  <ComponentTwo />
  <span>Whatever</span>
</Dropdown>

This kind of maximum flexibility with minimal code is definitely a win in situations like this.

Code

The Right Component for the Job

Even though the React documentation spells it out, it is still easy to forget that sometimes you don’t need the whole React toolbox for a component. In fact, there’s more than simplicity at stake, writing stateless components may in some instances be more performant than stateful ones. Here’s an example of a hero component that doesn’t need state following AirBnB’s React/JSX styleguide:

const Hero = ({ title, imgSrc, imgAlt }) => (
  <div className="hero">
    <img data-src={imgSrc} alt={imgAlt} />
    <h2>{title}</h2>
  </div>
);

export default Hero;

When you actually need to use Class, there are some optimizations you can make to at least write cleaner (and less) code. Take this Header component example:

import React from 'react';

class Header extends React.Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

In this snippet, we can start by simplifying the React.Component extension:

import React, { Component } from 'react';

class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

Next, we can export the component in the same line so we don’t have to at the end:

import React, { Component } from 'react';

export default class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

Finally, if we make the toggleOpen() function into an arrow function, we don’t need the binding in the constructor. And because our constructor was really only necessary for the binding, we can now get rid of it completely!

export default class Header extends Component {
  state = { isMenuOpen: false };

  toggleOpen = () => {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  };

  render() {
    // JSX
  }
}

PropTypes

React has some quick wins for catching bugs with built-in typechecking abilities using PropTypes. When using a Class component, you can also move your propTypes inside the component as static propTypes. So, instead of:

export default class DropdownItem extends Component {
  ...
}

DropdownItem.propTypes = {
  .. propTypes
};

DropdownItem.defaultProps = {
  .. default propTypes
};

You can instead have:

export default class DropdownItem extends Component {
  static propTypes = {
    .. propTypes
  };

  static defaultProps = {
    .. default propTypes
  };

  render() {
    ...
  }
}

Also, if you want to limit a prop to specific values or to specific types, you can use PropTypes.oneOf and PropTypes.oneOfType respectively (documentation).
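For example, a hypothetical Tag component could constrain its props like this:

import React, { Component } from 'react';
import PropTypes from 'prop-types';

export default class Tag extends Component {
  static propTypes = {
    // `size` must be one of an enumerated set of strings.
    size: PropTypes.oneOf(['small', 'medium', 'large']),
    // `content` may be a plain string or any renderable node.
    content: PropTypes.oneOfType([PropTypes.string, PropTypes.node]),
  };

  static defaultProps = {
    size: 'medium',
    content: '',
  };

  render() {
    const { size, content } = this.props;
    return <span className={`tag tag--${size}`}>{content}</span>;
  }
}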

And finally, another place to simplify code is that you can destructure the props object right in the function parameter definition, like so. Here's a component before this has been done:

const SvgLogo = props => {
  const { title, inline, height, width, version, viewBox } = props;
  return (
    // JSX
  );
};

And here’s the same component after:

const SvgLogo = ({ title, inline, height, width, version, viewBox }) => (
  // JSX
);

Packages

Finally, a word on packages. React’s popularity lends itself to a plethora of packages available. One of our senior JavaScript engineers passed on some sage advice to me that is worth mentioning here: every package you add to your project is another dependency to support. This doesn’t mean that you should never use packages, merely that it should be done judiciously, ideally with awareness of the package’s support, weight and dependencies. That said, here are a couple of packages (besides Downshift) that we found useful enough to include on this project:

Classnames

If you find yourself doing a lot of classname manipulation in your components, the classnames utility is a package that helps with readability. Here’s an example before we applied the classnames utility:

<div className={`element ${this.state.revealed === true ? 'revealed' : ''}`}>

With classnames you can make this much more readable by separating the logic:

import classNames from 'classnames/bind';

const elementClasses = classNames({
  element: true,
  revealed: this.state.revealed === true
});

<div className={elementClasses}>

React Intersection Observer (Lazy Loading)

IntersectionObserver is an API that lets browsers asynchronously detect when an element intersects the browser viewport. Browser support is gaining traction, and a polyfill is available as a fallback. This API can serve a number of purposes, not the least of which is the popular technique of lazy loading, which defers loading assets not yet visible to the user. While we could in theory have written our own component using this API, we chose the React Intersection Observer package because it takes care of the bookkeeping and provides a standard React component that makes it simple to pass in options and detect events.
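For reference, here's a minimal sketch of the raw browser API (not the React package), revealing elements as they scroll into view; the .lazy selector and is-visible class are hypothetical:

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible');
      // Stop watching once the element has been revealed.
      obs.unobserve(entry.target);
    }
  });
}, { rootMargin: '100px' });

document.querySelectorAll('.lazy').forEach(el => observer.observe(el));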

Conclusions

I hope passing on some of the knowledge I gained along the way is helpful for someone else. If nothing else, I learned that there are some great starting points out there in the community worth studying. The first is the excellent React documentation. Up to date and extensive, this documentation was my lifeline throughout the project. The next is Create React App, which is actually a great starting point for any size application and is also extremely well documented with best practices for a beginner to start writing code.

May 18 2018
May 18

The Content Moderation core module was marked stable in Drupal 8.5. Think of it like the contributed module Workbench Moderation in Drupal 7, but without all the Workbench editor Views that never seemed to completely make sense. The Drupal.org documentation gives a good overview.

Content Moderation requires the Workflows core module, allowing you to set up custom editorial workflows. I've been doing some work with this for a new site for a large organization, and have some tips and tricks.

Less Is More

Resist increases in roles, workflows, and workflow states and make sure they are justified by a business need. Stakeholders may ask for many roles and many workflow states without knowing the increased complexity and likelihood of editorial confusion that results.

If you create an editorial workflow that is too strict and complex, editors will tend to find ways to work around the system. A good compromise is to ask the team to try something simple first and add complexity down the line if needed.

Try to use the same workflow on all content types if you can. It makes a much simpler mental model for everyone.

Transitions are Key

Transitions between workflow states will be what you assign as permissions to roles. Typically, you'll want to lock down who can publish content, allowing content contributors to create new drafts only.

Transitions between workflow states must be thought through (image from Drupal.org).

You might want some paper to map out all the paths between workflow states that content might go through. The transitions should be named as verbs. If you can't think of a clear, descriptive verb that applies, you can go with "Set state to %your_state" or "Mark as %your_state." Don't sweat the names of transitions too much, though; they don't seem to ever appear in an editor-facing way anyway.

Don't forget to allow editors to undo transitions. If they can change the state from "Needs Work" to "Needs Review," make sure they can change it back to "Needs Work."

You must allow Non-Transitions

Make sure the transitions include non-transitions. The transitions represent which options will be available for the state when you edit content. In the above (default core) example, it is not possible to edit archived content and maintain the same state of archived. You'd have to change the status to published and then back to archived. In fact, it would be very easy to accidentally publish what you had archived, because editing the content will set it back to published as the default setting. Therefore, make sure that draft content can stay as draft when edited, etc. 

Transition Ordering is Crucial

Ordering of the transitions here is very important because the state options on the content editing form will appear as a select list of states ordered by the transition order, and it will default to the first available one.

If an editor misses setting this option correctly, they will simply get the first transition, so make sure that first transition is a good default. To set the right order, you have to map each state to what should be its default value when editing. You may have to add additional transitions to make this all make sense.

As for the ordering of workflow states themselves, this will only affect ordering when states are listed, for example in a Views exposed filter of workflow states or within the workflows administration.

Minimize Accidental Transitions

But why wouldn't my content's workflow state stay the same by default when editing the content (assuming the user has access to a transition that keeps it the same)? I have to set an order correctly to keep a default value from being lost?

Well, that's a bug as of 8.5.3 that will be fixed in the next 8.5 bugfix release. You can add the patch to your composer.json file if you're tired of your workflow states getting accidentally changed.

Test your Workflow

With all the states, transitions, transition ordering, roles, and permissions, there are plenty of opportunities for misconfiguration even for a total pro with great attention to detail like yourself. Make sure you run through each scenario using each role. Then document the setup in your site's editor documentation while it's all fresh and clear in your mind.

What DOES Published EVEN MEAN ANYMORE?

With Content Moderation, the term "published" now has two meanings. Both content and content revisions can be published (but only content can be unpublished).

For content, publishing status is a boolean, as it has always been. When you view published content, you will be viewing the latest revision, which is in a published workflow state.

For a content revision, "published" is a workflow state.

Therefore, when you view the content administration page, which shows you content, not content revisions, status refers to the publishing status of the content, and does not give you any information on whether there are unpublished new revisions.

Where's my Moderation Dashboard?

From the content administration page, there is a tab for "moderated content." This is where you can send your editors to see if there is content with drafts they need to review. Unfortunately, it's not a very useful report since it has neither filtering nor sorting. Luckily work has been done recently to make the Views integration for Content Moderation/Workflows decent, so I was able to replace this dashboard with a View and shared the config.

My Views-based Content Moderation dashboard.

Reviewer Access

In a typical editorial workflow, content editors create draft edits and then need to solicit feedback and approval from stakeholders or even a legal team. To use content moderation, these stakeholders need to have Drupal accounts and log in to look at the "Latest Revision" tab on the content. This is an obstacle for many organizations because the stakeholders are either very busy, not very web-savvy, or both.

You may get requests for a workflow in which content creation and review takes place on a non-live environment and then require some sort of automated content deployment process. Content deployment across environments is possible using the Deploy module, but there is a lot of inherent complexity involved that you'll want to avoid if you can.

I created an Access Latest module that allows editors to share links with an access token that lets reviewers see the latest revision without logging in.

Access Latest lets reviewers see drafts without logging in.

Log Messages BUG

As of 8.5.3, you may run into a bug in which users without the "administer content" permission cannot add a revision log message when they edit content. There are a few issues related to this, and the fix should be out in the next bugfix release. I had success with this patch and then re-saving all my content types.
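For example, with the cweagans/composer-patches plugin installed, a patch can be applied from composer.json along these lines (the description and patch URL below are placeholders, not the actual issue):

"extra": {
  "patches": {
    "drupal/core": {
      "Allow revision log messages without administer content": "https://www.drupal.org/files/issues/example-fix.patch"
    }
  }
}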

Mar 08 2018
Mar 08

This post was originally published on May 22, 2013 and last updated March 8, 2018 thanks to some helpful input by Steve Elkins.

Drupal 7 is a haus at combining CSS & JS files. This can help boost page performance & optimization easily, but if not used right, can do the complete opposite. In this post, we’ll go over how to load JS & CSS files based on conditionals like URL, module, node, views and more.

Before we dive in, get somewhat familiar with the drupal_add_js and drupal_add_css functions. We’ll use these to load the actual JS and CSS files.
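As a quick refresher, both functions take a path plus an optional options array; a minimal sketch (the theme name and file paths are hypothetical):

// Attach a theme's script to the footer.
drupal_add_js(drupal_get_path('theme', 'mytheme') . '/js/custom.js', array(
  'type' => 'file',
  'scope' => 'footer',
));

// Attach a stylesheet for all media types.
drupal_add_css(drupal_get_path('theme', 'mytheme') . '/css/custom.css', array(
  'type' => 'file',
  'media' => 'all',
));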

hook_init – runs on every page

/**
 * Implements hook_init()
 *
 * @link https://api.drupal.org/api/drupal/modules%21system%21system.api.php/function/hook_init/7.x
 */
function HOOK_init() {

  // Using the equivalent of Apache's $_SERVER['REQUEST_URI'] variable to load based on URL
  // @link https://api.drupal.org/api/drupal/includes!bootstrap.inc/function/request_uri/7
  if (request_uri() === 'your-url-path') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}

Using hook_init is one of the simplest methods to load specific JS and CSS files (don’t forget to replace HOOK with the theme or module machine name).

Be careful: this method gets run on every page, so it's best to use it only when you actually need to check every page for your conditional. A good example: loading module CSS and JS files. A bad example: loading node-specific CSS and JS files. We'll go over that next.

There's also a similar preprocess function, template_preprocess_page, you could use, but it too gets run on every page and is essentially the same as hook_init.
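For completeness, a minimal sketch of that variant (drupal_is_front_page() is a core Drupal 7 function):

/**
 * Implements template_preprocess_page() - also runs on every page.
 */
function TEMPLATE_preprocess_page(&$vars) {
  // Load assets only on the front page.
  if (drupal_is_front_page()) {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}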

template_preprocess_node – runs on node pages

/**
 * Implements template_preprocess_node()
 *
 * @link https://api.drupal.org/api/drupal/modules%21node%21node.module/function/template_preprocess_node/7.x
 */
function TEMPLATE_preprocess_node(&$vars) {
  // Add JS & CSS by node type
  if ($vars['type'] == 'your-node-type') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }

  // Add JS & CSS to the front page
  if ($vars['is_front']) {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }

  // Given an internal Drupal path, load based on the node's path alias.
  if (drupal_get_path_alias("node/{$vars['node']->nid}") == 'your-node-alias') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}

Using template_preprocess_node is perfect when loading JS and CSS files based on nodes (don't forget to replace TEMPLATE with the theme machine name). Since it only gets run on nodes, it's great to use when you want to load CSS and JS files on specific node types, front pages, node URLs, etc.

template_preprocess_views_view – runs on every view load

/**
 * Implements template_preprocess_views_view()
 *
 * @link https://api.drupal.org/api/views/theme%21theme.inc/function/template_preprocess_views_view/7.x-3.x
 */
function TEMPLATE_preprocess_views_view(&$vars) {
  // Get the current view info
  $view = $vars['view'];

  // Add JS/CSS based on view name
  if ($view->name == 'view_name') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }

  // Add JS/CSS based on current view display
  if ($view->current_display == 'current_display_name') {
    drupal_add_js( /* parameters */ );
    drupal_add_css( /* parameters */ );
  }
}

Using template_preprocess_views_view is useful for loading JS and CSS files when a particular view is being used (don't forget to replace TEMPLATE with the theme machine name).

Helpful Methods for Conditionals

Here are a few helpful Drupal methods you can use for your conditionals. Have one you use often? Let me know in the comments below.

  • request_uri – Returns the equivalent of Apache's $_SERVER['REQUEST_URI'] variable.
  • drupal_get_path_alias – Given an internal Drupal path, return the alias set by the administrator.

Looking like a foreign language to you?

Not a developer, or just lost looking at the code snippets above? Shoot me a question in the comments below, or give these 'plug-and-play' modules a try for a GUI alternative:

Feb 21 2018
Feb 21

Selected sessions for Drupalcon Nashville have just been announced! Mediacurrent will be presenting seven sessions and hosting a training workshop.

From exploring new horizons in decoupled Drupal to fresh perspectives on improving editorial UX and achieving GDPR compliance, check out what the Mediacurrent team has in store for Drupalcon 2018:
 

Speakers: Matt Davis, Director of Emerging Technology at Mediacurrent and Jeremy Dickens, Senior Drupal Developer at The Weather Company / IBM
Session Track: Horizons

During the course of an ongoing decoupling project for weather.com, the team found that the lack of page configurability was a distinct pain point for site administrators and product owners. To meet this challenge, the weather.com team built Project Moonracer, a Drupal 8-based solution that allowed for the direct modification of page configuration on a completely decoupled front-end by developing a unique set of data models to move page configuration back into the hands of the site owners.  

Takeaways:

  • Gain a greater understanding of the decoupled UI problem space as a whole
  • See specific API and UI considerations and lessons learned from our experience
  • Catch a glimpse into some possible futures of editorial interfaces in an increasingly decoupled world

Speaker: Bob Kepford, Lead Drupal Architect at Mediacurrent
Session Track: Back End Development 

Wouldn't it be nice if you could type one command that booted your vagrant box, started displaying watchdog logs, set up the correct Drush alias, and provided easy access to your remote servers? Or maybe you use tools like Grunt, Gulp, or Sass. What if you could launch all of your tools for a project with one command? In this session, attendees will see how to use the terminal every day to get work done efficiently and effectively.

You’ll learn:

  • How to use free command line applications to get work done.
  • How to better use the command line tools you already know.
  • How to customize your command line to behave the way you want it to. I guarantee attendees will walk away with at least one new tip, trick, or tool.

Speaker: Jay Callicott, VP of Technical Operations at Mediacurrent 
Session Track: Site Building 

If you have ever googled to find “top Drupal modules” you probably have read Mediacurrent’s popular, long-running blog series on the top modules for Drupal, authored by our own Jay Callicott. In this session, follow him on a leisurely stroll through the best modules that Drupal 8 has to offer as Jay presents an updated list of his top picks. Like a guided tour of the Italian countryside, you can sit back and enjoy as your guide discusses the benefits of each module. By the end of this session, you will have been introduced to at least a few modules that will challenge the boundaries of your next project.

Speakers: Mediacurrent's Dawn Aly, VP of Digital Strategy and Mark Shropshire, Open Source Security Lead
Session Track: Business

Data security legislation like the GDPR (enforcement begins May 25th, 2018) allows users to control how and if their personal data is used by companies. This shift in control fundamentally changes how companies can collect, store, and use information about prospects and customers. While understanding and implementing privacy-related regulation in web projects is a necessity, related knowledge and skill sets become a real business differentiator and a key part of a user's privacy experience (PX).

Key Topics:

  • Practical interpretation of the GDPR 
  • How to determine if you are at risk for compliance 
  • Repeatable process for assessing security risks in Drupal websites 
  • Security by design
  • Impact to data, analytics, and personalization strategies 

Speakers: Kevin Basarab, Director of Development at Mediacurrent and Mike Priscella, Engineering Manager at Thrillist/ Group Nine Media. 
Session Track: Ambitious Digital Experiences 

In this session, we'll dive into how Group Nine Media (parent company of Thrillist.com, TheDodo.com, and others) is evolving the Drupal 8 editorial user experience and contributing that back to the community. We'll not only look into their use case but also explore what modules and options are out there for improving editorial UX without custom development work.

  • How is design/UX shifting its focus toward the editorial experience?
  • What contrib modules currently enhance the editorial experience?
  • How can a better editorial experience be beneficial to your client? 

Speakers: Mediacurrent Senior Front End Developer Mario Hernandez; Cristina Chumillas, Designer and Frontend Developer at Ymbra; Lauri Eskola, Drupal Developer at Druid Oy
Session Track: Core Conversations 

The Out-of-the-Box initiative team is working on improving the first-time user experience of Drupal. The team is creating a new installation profile with the main goal of demonstrating how powerful Drupal is for creating beautiful websites for real life use cases.

The alpha version of the Out-of-the-Box initiative has been committed to Drupal 8.6.x. But what is it, and what will it bring to core?
 

Speakers: A panel of community organizers, including Mediacurrent Senior Developer April Sides 
Session Track: Building Community

This conversation is a space for camp organizers (and attendees) to discuss all things event planning, from venue selection and budgeting to session programming and swag. 

Training Presenters: Mediacurrent Senior Front End Developers Mario Hernandez and Eric Huffman

With the component-based approach becoming the standard for Drupal 8 theming, we're beginning to see some slick front-end environments show up in Drupal themes. The promise that talented front-enders with little Drupal knowledge can jump right in is much closer to reality. However, before diving into this new front-end bliss there are still some gotchas, plus lots of baked-in goodies Drupal provides that one will need to have a handle on before getting started.

This training will focus on the UI_Patterns module which, although still in release-candidate state, already solves many problems arising from the Drupal integration process.

Additional Resources
Drupalcon Baltimore 2017 - SEO, I18N, and I18N SEO | Blog 
Drupalcon: Not Just for Developers | Blog 
The Real Value of Drupalcon | Blog 

Feb 01 2018
Feb 01

Paragraphs is a powerful Drupal module that gives editors more flexibility in how they design and lay out the content of their pages. However, Paragraphs are special in that they make no sense without a host entity. If we talk about Paragraphs, it goes without saying that they are to be attached to other entities.
In Drupal 8, individual migrations are built around an entity type. That means we implement a single migration for each entity type. Sometimes we draw relationships between the element being imported and an already imported one of a different type, but we never handle the migration of both simultaneously.
Migrating Paragraphs needs to be done in at least two steps: 1) migrating entities of type Paragraph, and 2) migrating entities referencing imported Paragraph entities.

Migration of Paragraph entities

You can migrate Paragraph entities in much the same way as any other entity type in Drupal 8. However, a very important caveat is making sure to use the right destination plugin, provided by the Entity Reference Revisions module:

destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type

This is critical because you can be tempted to use something more common like entity:paragraph which would make sense given that Paragraphs are entities. However, you didn’t configure your Paragraph reference field as a conventional Entity Reference one, but as an Entity reference revisions field, so you need to use an appropriate plugin.

An example of the core of a migration of Paragraph entities:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'feed.url/endpoint'
  ids:
    id:
      type: integer
  item_selector: '/elements'
  fields:
    - name: id
      label: Id
      selector: /element_id
    - name: content
      label: Content
      selector: /element_content
process:
  field_paragraph_type_content/value: content
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: paragraph_type
migration_dependencies: { }

To give some context, this assumes the feed being consumed has a root level with an elements array filled with content arrays with properties like element_id and element_content, and we want to convert those content arrays into Paragraphs of type paragraph_type in Drupal, with the field_paragraph_type_content field storing the text that came from the element_content property.
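In other words, the migration above assumes a feed shaped roughly like this (a hypothetical example, not the actual endpoint):

{
  "elements": [
    {
      "element_id": 1,
      "element_content": "Body text for the first paragraph."
    },
    {
      "element_id": 2,
      "element_content": "Body text for the second paragraph."
    }
  ]
}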

Migration of the host entity type

Having imported the Paragraph entities already, we then need to import the host entities, attaching the appropriate Paragraphs to each one’s field_paragraph_type_content field. Typically this is accomplished by using the migration_lookup process plugin (formerly migration).

Every time an entity is imported, a row is created in the mapping table for that migration, with both the ID the entity has in the external source and the internal one it got after being imported. This way the migration keeps a correlation between both states of the data, for updating and other purposes.

The migration_lookup plugin takes an ID from an external source and tries to find an internal entity whose ID is linked to the external one in the mapping table, returning its ID in that case. After that, the entity reference field will be populated with that ID, effectively establishing a link between the entities in the Drupal side.

In the example below, the migration_lookup returns entity IDs and creates references to other Drupal entities through the field_event_schools field:

field_event_schools:
  plugin: iterator
  source: event_school
  process:
    target_id:
      plugin: migration_lookup
      migration: schools
      source: school_id

However, while references to nodes or terms basically consist of the ID of the referenced entity, when using the entity_reference_revisions destination plugin (as we did to import the Paragraph entities), two IDs are stored per entity. One is the entity ID and the other is the entity revision ID. That means the return of the migration_lookup processor is not an integer, but an array of them.

process:
  field_paragraph_type_content:
    plugin: iterator
    source: elements
    process:
      temporary_ids:
        plugin: migration_lookup
        migration: paragraphs_migration
        source: element_id
      target_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 0
      target_revision_id:
        plugin: extract
        source: '@temporary_ids'
        index:
          - 1

Instead of returning that array directly (which obviously wouldn't work), we use the extract process plugin to pull out the integer IDs needed to create an effective reference.

Summary

In summary, it’s important to remember that migrating Paragraphs is a two-step process at minimum. First, you must migrate entities of type Paragraph. Then you must migrate entities referencing those imported Paragraph entities.

More on Drupal 8

Top 5 Reasons to Migrate Your Site to Drupal 8

Creating your Emulsify 2.0 Starter Kit with Drush

Jan 18 2018
Jan 18

What are Spectre and Meltdown?

Have you noticed your servers or desktops running slower than usual? Spectre and Meltdown can affect most of the devices we use daily: cloud servers, desktops, laptops, and mobile devices. For more details go to: https://meltdownattack.com/

How does this affect performance?

We finally have some answers about how this is going to affect us. After Pantheon patched their servers, they released an article showing a 10-30% negative performance impact on their servers. For the whole article visit: https://status.pantheon.io/incidents/x9dmhz368xfz

I can say that I personally have noticed my laptop’s CPU is running at much higher percentages than before the update for similar tasks.
Security patches are still being released for many operating systems, but traditional desktop OSs appear to have been covered now. If you haven’t already, make sure your OS is up to date. Don’t forget to update the OS on your phone.

Next Steps?

So what can we do in the Drupal world? First, you should follow up with your hosting provider and verify they have patched your servers. Then you need to find ways to counteract the performance loss. If you are interested in performance recommendations, Four Kitchens offers both frontend and backend performance audits.

As a quick win, if you haven't already, upgrade to PHP 7, which should give you a performance boost of around 30-50% on PHP processes. Now that you are more informed about what Spectre and Meltdown are, help with the performance effort by volunteering or sponsoring a developer on January 27-28, 2018 for the Drupal Global Sprint Weekend 2018, specifically on performance-related issues: https://groups.drupal.org/node/517797

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Dec 20 2017
Dec 20

One of the most common requests we get in regards to Emulsify is to show concrete examples of components. There is a lot of conceptual material out there on the benefits of component-driven development in Drupal 8—storing markup, CSS, and JavaScript together using some organizational pattern (à la Atomic Design), automating the creation of style guides (e.g., using Pattern Lab) and using Twig’s include, extends and embed functions to work those patterns into Drupal seamlessly. If you’re reading this article you’re likely already sold on the concept. It’s time for a concrete example!

In this tutorial, we’ll build a full site header containing a logo, a search form, and a menu – here’s the code if you’d like to follow along. We will use Emulsify, so pieces of this may be specific to Emulsify and we will try and note those where necessary. Otherwise, this example could, in theory, be extended to any Drupal 8 project using component-driven development.

Planning Your Component

The first step in component-driven development is planning. In fact, this may be the definitive phase in component-driven development. In order to build reusable systems, you have to break down the design into logical, reusable building blocks. In our case, we have 3 distinct components—what we would call in Atomic Design “molecules”—a logo, a search form, and a menu. In most component-driven development systems you would have a more granular level as well (“atoms” in Atomic Design). Emulsify ships with pre-built and highly flexible atoms for links, images, forms, and lists (and much more). This allows us to jump directly into project-specific molecules.

So, what is our plan? We are going to first create a molecule for each component, making use of the atoms listed above wherever possible. Then, we will build an organism for the larger site header component. On the Drupal side, we will map our logo component to the Site Branding block, the search form to the default Drupal search form block, the menu to the Main Navigation block and the site header to the header region template. Now that we have a plan, let’s get started on our first component—the logo.

The Logo Molecule

Emulsify automatically provides us with everything we need to print a logo – see components/_patterns/01-atoms/04-images/00-image/image.twig. Although it is an image atom, it has an optional img_url variable that will wrap the image in a link if present. So, in this case, we don’t even have to create the logo component. We merely need a variant of the image component, which is easy to do in Pattern Lab by duplicating components/_patterns/01-atoms/04-images/00-image/image.yml and renaming it as components/_patterns/01-atoms/04-images/00-image/image~logo.yml (see Pattern Lab documentation).

Next, we change the variables in the image~logo.yml as needed and add a new image_link_base_class variable, naming it whatever we like for styling purposes. For those who are working in a new installation of Emulsify alongside this tutorial, you will notice this file already exists! Emulsify ships with a ready-made logo component. This means we can immediately jump into mapping our new logo component in Drupal.
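For illustration, the variant data file might look something like this (values here are hypothetical; check the file Emulsify ships with for the actual contents):

img_url: "/"
img_src: "/themes/emulsify/logo.svg"
img_alt: "Home"
image_blockname: "logo"
image_link_base_class: "logo"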

Connecting the Logo Component to Drupal

Although you could just write static markup for the logo, let’s use the branding block in Drupal (the block that supplies the theme logo or one uploaded via the Appearance Settings page). These instructions assume you have a local Drupal development environment complete with Twig debugging enabled. Add the Site Branding block to your header region in the Drupal administrative UI to see your branding block on your page. Inspect the element to find the template file in play.

In our case there are two templates—the outer site branding block file and the inner image file. It is best to use the file that contains the most relevant information for your component. Seeing as we need variables like image alt and image src to map to our component, the most relevant file would be the image file itself. Since Emulsify uses Stable as a base theme, let’s check there first for a template file to use. Stable uses core/themes/stable/templates/field/image.html.twig to print images, so we copy that file down to its matching directory in Emulsify creating templates/fields/image.html.twig (this is the template for all image fields, so you may have to be more specific with this filename). Any time you add a new template file, clear the cache registry to make sure that Drupal recognizes the new file. Now the goal in component-driven development is to have markup in components that simply maps to Drupal templates, so let’s replace the default contents of the image.html.twig file above ( <img{{ attributes }}> ) with the following:

{% include "@atoms/04-images/00-image/image.twig" with { img_url: "/", img_src: attributes.src, img_alt: attributes.alt, image_blockname: "logo", image_link_base_class: "logo", } %} {%include"@atoms/04-images/00-image/image.twig"with{  img_url:"/",  img_src:attributes.src,  img_alt:attributes.alt,  image_blockname:"logo",  image_link_base_class:"logo",

We’re using the Twig include statement to use our markup from our original component and pass a mixture of static (url, BEM classes) and dynamic (img alt and src) content to the component. To figure out what Drupal variables to use for dynamic content, see first the “Available variables” section at the top of the Drupal Twig file you’re using and then use the Devel module and the kint function to debug the variables themselves. Also, if you’re new to seeing the BEM class variables (Emulsify-specific), see our recent post on why/how we use these variables (and the BEM function) to pass in BEM classes to Pattern Lab and the Drupal Attributes object. Basically, this include statement above will print out:

<a class="logo" href="https://www.fourkitchens.com/"> <img class="logo__img" src=”/themes/emulsify/logo.svg" alt="Home"> </a> <aclass="logo"href="/">    <imgclass="logo__img"src=/themes/emulsify/logo.svg" alt="Home">

We should now see our branding block using our custom component markup! Let’s move on to the next molecule—the search form.

The Search Form Molecule

Component-driven development, particularly the division of components into controlled, separate atomic units, is not always perfect. But the beauty of Pattern Lab (and Emulsify) is that there is a lot of flexibility in how you markup a component. If the ideal approach of using a Twig function to include other smaller elements isn’t possible (or is too time consuming), simply write custom HTML for the component as needed for the situation! One area where we lean into this flexibility is in dealing with Drupal’s form markup. Let’s take a look at how you could handle the search block. First, let’s create a form molecule in Pattern Lab.

Form Wrapper

Create a directory in components/_patterns/02-molecules entitled “search-form” with a search-form.twig file with the following contents (markup tweaked from core/themes/stable/templates/form/form.html.twig):

<form {{ bem('search') }}>
  {% if children %}
    {{ children }}
  {% else %}
    <div class="search__item">
      <input title="Enter the terms you wish to search for." size="15" maxlength="128" class="form-search">
    </div>
    <div class="form-actions">
      <input type="submit" value="Search" class="form-item__textfield button js-form-submit form-submit">
    </div>
  {% endif %}
</form>

In this file (code here) we’re doing a check for the Drupal-specific variable “children” in order to pass one thing to Drupal and another to Pattern Lab. We want to make the markup as similar as possible between the two, so I’ve copied the relevant parts of the markup by inspecting the default Drupal search form in the browser. As you can see there are two classes we need on the Drupal side. The first is on the outer <form>  wrapper, so we will need a matching Drupal template to inherit that. Many templates in Drupal will have suggestions by default, but the form template is a great example of one that doesn’t. However, adding a new template suggestion is a minor task, so let’s add the following code to emulsify.theme:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form templates.
 */
function emulsify_theme_suggestions_form_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#form_id'] == 'search_block_form') {
    $suggestions[] = 'form__search_block_form';
  }
}

After clearing the cache registry, you should see the new suggestion, so we can now add the file templates/form/form--search-block-form.html.twig. In that file, let’s write:
{% include "@molecules/search-form/search-form.twig" %} {%include"@molecules/search-form/search-form.twig"%}

The Form Element

We have only the “search__item” class left, for which we follow a similar process. Let’s create the file components/_patterns/02-molecules/search-form/_search-form-element.twig, copying the contents from core/themes/stable/templates/form/form-element.html.twig and making small tweaks like so:

{%
  set classes = [
    'js-form-item',
    'search__item',
    'js-form-type-' ~ type|clean_class,
    'search__item--' ~ name|clean_class,
    'js-form-item-' ~ name|clean_class,
    title_display not in ['after', 'before'] ? 'form-no-label',
    disabled == 'disabled' ? 'form-disabled',
    errors ? 'form-item--error',
  ]
%}
{%
  set description_classes = [
    'description',
    description_display == 'invisible' ? 'visually-hidden',
  ]
%}
<div {{ attributes.addClass(classes) }}>
  {% if label_display in ['before', 'invisible'] %}
    {{ label }}
  {% endif %}
  {% if prefix is not empty %}
    <span class="field-prefix">{{ prefix }}</span>
  {% endif %}
  {% if description_display == 'before' and description.content %}
    <div{{ description.attributes }}>
      {{ description.content }}
    </div>
  {% endif %}
  {{ children }}
  {% if suffix is not empty %}
    <span class="field-suffix">{{ suffix }}</span>
  {% endif %}
  {% if label_display == 'after' %}
    {{ label }}
  {% endif %}
  {% if errors %}
    <div class="form-item--error-message">
      {{ errors }}
    </div>
  {% endif %}
  {% if description_display in ['after', 'invisible'] and description.content %}
    <div{{ description.attributes.addClass(description_classes) }}>
      {{ description.content }}
    </div>
  {% endif %}
</div>

This file will not be needed in Pattern Lab, which is why we’ve used the underscore at the beginning of the name. This tells Pattern Lab to not display the file in the style guide. Now we need this markup in Drupal, so let’s add a new template suggestion in emulsify.theme like so:

/**
 * Implements hook_theme_suggestions_HOOK_alter() for form element templates.
 */
function emulsify_theme_suggestions_form_element_alter(array &$suggestions, array $variables) {
  if ($variables['element']['#type'] == 'search') {
    $suggestions[] = 'form_element__search_block_form';
  }
}

And now let’s add the file templates/form/form-element--search-block-form.html.twig with the following code:

{% include "@molecules/search-form/_search-form-element.twig" %} {%include"@molecules/search-form/_search-form-element.twig"%}

We now have the basic pieces for styling our search form in Pattern Lab and Drupal. This was not the fastest element to theme in a component-driven way, but it is a good example of complex concepts that will help when necessary. We hope to make creating form components a little easier in future releases of Emulsify, similar to what we’ve done in v2 with menus. And speaking of menus…

The Main Menu

In Emulsify 2, we have made it a bit easier to work with another complex piece of Twig in Drupal 8: the menu system. The files that do the heavy lifting here are components/_patterns/02-molecules/menus/_menu.twig and components/_patterns/02-molecules/menus/_menu-item.twig (included by the first file). We also already have an example of a main menu component in the directory themes/emulsify/components/_patterns/02-molecules/menus/main-menu, which is already connected in the Drupal template templates/navigation/menu--main.html.twig.

Obviously, you can use this as-is or tweak the code to fit your situation, but let’s break down the key pieces which could help you define your own menu.

Menu Markup

Ignoring the code for the menu toggle inside the file, the key piece from themes/emulsify/components/_patterns/02-molecules/menus/main-menu/main-menu.twig is the include statement:

<nav id="main-nav" class="main-nav"> {% include "@molecules/menus/_menu.twig" with { menu_class: 'main-menu' } %} </nav> <navid="main-nav"class="main-nav">  {%include"@molecules/menus/_menu.twig"with{    menu_class:'main-menu'

This will use all the code from the original heavy-lifting files while passing in the class we need for styling. For an example of how to stub out component data for Pattern Lab, see components/_patterns/02-molecules/menus/main-menu/main-menu.yml. This component also shows you how you can have your styling and javascript live alongside your component markup in the same directory. Finally, you can see a more simple example of using a menu like this in the components/_patterns/02-molecules/menus/inline-menu component. For now, let’s move on to placing our components into a header organism.

The Header Organism

Now that we have our three molecule components built, let’s create a wrapper component for our site header. Emulsify ships with an empty component for this at components/_patterns/03-organisms/site/site-header. In our usage we want to change the markup in components/_patterns/03-organisms/site/site-header/site-header.twig to:

<header class="header"> <div class="header__logo"> {% block logo %} {% include "@atoms/04-images/00-image/image.twig" %} {% endblock %} </div> <div class="header__search"> {% block search %} {% include "@molecules/search-form/search-form.twig" %} {% endblock %} </div> <div class="header__menu"> {% block menu %} {% include "@molecules/menus/main-menu/main-menu.twig" %} {% endblock %} </div> </header> <headerclass="header">  <divclass="header__logo">    {%blocklogo%}      {%include"@atoms/04-images/00-image/image.twig"%}    {%endblock%}  </div>  <divclass="header__search">    {%blocksearch%}      {%include"@molecules/search-form/search-form.twig"%}    {%endblock%}  </div>  <divclass="header__menu">    {%blockmenu%}      {%include"@molecules/menus/main-menu/main-menu.twig"%}    {%endblock%}  </div>

Notice the use of Twig blocks. These will help us provide default data for Pattern Lab while giving us the flexibility to replace those with our component templates on the Drupal side. To populate the default data for Pattern Lab, simply create components/_patterns/03-organisms/site/site-header/site-header.yml and copy over the data from components/_patterns/01-atoms/04-images/00-image/image~logo.yml and components/_patterns/02-molecules/menus/main-menu/main-menu.yml. You should now see your component printed in Pattern Lab.

Header in Drupal

To print the header organism in Drupal, let’s work with the templates/layout/region--header.html.twig file, replacing the default contents with:

{% extends "@organisms/site/site-header/site-header.twig" %} {% block logo %} {{ elements.emulsify_branding }} {% endblock %} {% block search %} {{ elements.emulsify_search }} {% endblock %} {% block menu %} {{ elements.emulsify_main_menu }} {% endblock %} {%extends"@organisms/site/site-header/site-header.twig"%}{%blocklogo%}  {{elements.emulsify_branding }}{%endblock%}{%blocksearch%}  {{elements.emulsify_search }}{%endblock%}{%blockmenu%}  {{elements.emulsify_main_menu }}{%endblock%}

Here, we’re using the Twig extends statement to be able to use the Twig blocks we created in the component. You can also use the more robust embed statement when you need to pass variables like so:

{% embed "@organisms/site/site-header/site-header.twig" with { variable: "something", } %} {% block logo %} {{ elements.emulsify_branding }} {% endblock %} {% block search %} {{ elements.emulsify_search }} {% endblock %} {% block menu %} {{ elements.emulsify_main_menu }} {% endblock %} {% endembed %} {%embed"@organisms/site/site-header/site-header.twig"with{  variable:"something",  {%blocklogo%}    {{elements.emulsify_branding }}  {%endblock%}  {%blocksearch%}    {{elements.emulsify_search }}  {%endblock%}  {%blockmenu%}    {{elements.emulsify_main_menu }}  {%endblock%}{%endembed%}

For our purposes, we can simply use the extends statement. You'll notice that we are using the elements variable. This variable is currently not listed in the Stable region template at the top, but it is extremely useful for printing the blocks that are currently in that region. Finally, if you've just added the file, be sure to clear the cache registry; you should then see your full header in Drupal.

Final Thoughts

Component-driven development is not without trials, but I hope we have touched on some of the more difficult ones in this article to speed you on your journey. If you would like to view the branch of Emulsify where we built this site header component, you can see that here. Feel free to sift through and reverse-engineer the code to figure out how to build your own component-driven Drupal project!

This fifth episode concludes our five-part video-blog series for Emulsify 2.x. Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach

Just need the videos? Watch them all on our channel.

Download Emulsify

Dec 14 2017
Dec 14

When working with Pantheon, you're presented with the three typical environments: Dev, Test, and Live. This scheme is very common among major hosting providers, and not without reason: it allows you to plan and execute an effective and efficient development process that takes every client need into consideration. We can use CircleCI to manage that process.

CircleCI works from the circle.yml file located at the root of the project. It is a script of all the stuff the virtual machine will do for you in the cloud, including testing and delivery. The script is triggered by a commit into the repository unless you have it configured to react only to commits to branches with open pull requests. It is divided into sections, each representing a phase of the Build-Test-Deploy process.
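A skeletal circle.yml illustrating those sections might look like this (a sketch for CircleCI 1.0; the PHP version and commands are placeholders):

machine:
  php:
    version: 7.1

dependencies:
  override:
    - composer install --no-interaction --prefer-dist

test:
  override:
    - ./vendor/bin/phpunit

deployment:
  dev:
    branch: master
    commands:
      - ./deploy.sh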

In the deployment section, you can put instructions in order to deploy your code to your web servers. A common deployment would look like the following:

deployment:
  dev:
    branch: master
    commands:
      - ./merge_to_master.sh

This literally means, perform the operations listed under commands every time a commit is merged into the master branch. You may think there is not a real reason to use a deployment block like this to do an actual deployment. And it’s true, you can do whatever you want there. It’s ideal to perform deployments, but in essence, the deployment section allows you to implement conditional post-build subscripts that react differently depending on the nature of the action that triggered the whole build.

The Drops 8 Pantheon upstream for Drupal 8 comes with a very handy circle.yml that can help you set up a basic CircleCI workflow in a matter of minutes. It heavily relies on the use of Pantheon’s Terminus CLI and a couple of plugins like the excellent Terminus Build Tools Plugin, that provides probably the most important call of the whole script:

terminus build:env:create -n "$TERMINUS_SITE.dev" "$TERMINUS_ENV" --yes --clone-content --db-only --notify="$NOTIFY"

The line above creates a multidev environment in Pantheon, merging the code and generated assets in Circle’s VM into the code coming from the dev environment, and also clones the database from there. You can then use drush to update that database with the changes in configuration that you just merged in.
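For instance, updating that database might look something like this (a sketch assuming Drush 8-style commands run through Terminus):

# Run pending database updates on the multidev environment.
terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- updatedb -y

# Import the configuration changes that were just merged in.
terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- config-import -y

# Rebuild caches.
terminus drush "$TERMINUS_SITE.$TERMINUS_ENV" -- cache-rebuild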

Once you get to the deployment section, you already have a functional multidev environment. The deployment happens by merging the artifact back into dev:

deployment:
  build-assets:
    branch: master
    commands:
      - terminus build:env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes

This workflow assumes a simple git workflow where you just create feature branches and merge them into master when they're ready. It also takes the deployment process only to the point where code reaches dev. This is sometimes not enough.

Integrating release branches

When using a gitflow that involves a central release branch, the perfect environment to host a site that completely reflects the state of the release branch is the dev environment. After all, development is only being actively done in that branch. Assuming your release branch is statically called develop:

deployment:
  dev:
    branch: develop
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
  test:
    branch: master
    commands:
      # Deploy to DEV environment.
      - terminus build-env:merge -n "$TERMINUS_SITE.$TERMINUS_ENV" --yes --delete
      - ./rebuild_dev.sh
      # Deploy to TEST environment.
      - terminus env:deploy $TERMINUS_SITE.test --sync-content
      - ./rebuild_test.sh
      - terminus env:clone-content "$TERMINUS_SITE.test" "dev" --yes

This way, when you merge into the release branch, the multidev environment associated with it will get merged into dev and deleted, and dev will be rebuilt.

The feature is available in Dev right after merging into the release branch.

The same happens on release day when the release branch is merged into master, but after dev is rebuilt, it is also deployed to the test environment:

The release branch goes all the way to the test environment.

A few things to notice about this process:

  • The --sync-content option brings database and files from the live environment to test at the same time code is coming there from dev. By rebuilding test, we’re now able to test the latest changes in code against the latest changes in content, assuming live is your primary content entry point.
  • The last Terminus command takes the database from test and sends it back to dev. So, to recap, the database originally came from live, was rebuilt in test using dev’s fresh code, and now goes to dev. At this moment, test and dev are identical. Just until the next commit is thrown into the release branch.
  • This process facilitates testing. While the next release is already in progress and transforming dev, the client can take all the time to give the final approval for what’s in test. Once that happens, the deployment to live should occur in a semi-automatic way at most. But nothing really prevents you from using this same approach to automate also the deployment to live. Well, nothing but good judgment.
  • By using circle.yml to handle the deployment process, you contribute to keep workflow configuration centralized and accessible. With the appropriate system in place, you can trigger a complete and fully automated deployment just by doing a commit to Github, and all you’ll ever need to know about the process is in that single file.
Nov 13 2017
Nov 13

Welcome to the fourth episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to best use a DRY Twig approach when working in Emulsify. This blog post accompanies a tutorial video, embedded at the end of this post.

DRYing Out Your Twigs

Although we've been using a DRY Twig approach in Emulsify since before the 2.x release, it's a topic worth addressing because it is unique to Emulsify and provides great benefit to your workflow. After all, what drew you to component-driven development in the first place? Making things DRY, of course!

In component-driven development, we build components once and reuse them together in different combinations, like playing with Lego. In Emulsify, we use Sass mixins and BEM-style CSS to make our CSS as reusable and isolated as possible. DRY Twig simply extends these same benefits to the HTML itself. Let's look at an example:

Non-DRY Twig:

<h2 class="title">
  <a class="title__link" href="/">Link Text</a>
</h2>

DRY Twig:

<h2 class="title">
  {% include "@atoms/01-links/link/link.twig" with {
    "link_content": "Link Text",
    "link_url": "/",
    "link_class": "title__link",
  } %}
</h2>

The code with DRY Twig is more verbose, but by switching to this method, we’ve now removed a point of failure in our HTML. We’re not repeating the same HTML everywhere! We write that HTML once and reuse it everywhere it is needed.

The concept is simple, and it is found everywhere in the components directory that ships in Emulsify. HTML gets written mostly as atoms and is simply reused in larger components using the default include, extends or embed functions built into Twig. We challenge you to try this in a project, and see what you think.

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Oct 26 2017
Oct 26

Welcome to the third episode in our video series for Emulsify 2.x. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how Emulsify works with the BEM Twig extension. This blog post accompanies a tutorial video, embedded at the end of this post.

Background

In Emulsify 2.x, we have enhanced our support for BEM in Drupal by creating the BEM Twig extension. The BEM Twig extension makes it easy to deliver classes to both Pattern Lab and Drupal while using Drupal’s Attributes object. It also has the benefit of simplifying our syntax greatly. See the code below.

Emulsify 1.x:

{% set paragraph_base_class_var = paragraph_base_class|default('paragraph') %}
{% set paragraph_modifiers = ['large', 'red'] %}

<p class="{{ paragraph_base_class_var }}{% for modifier in paragraph_modifiers %} {{ paragraph_base_class_var }}--{{ modifier }}{% endfor %}{% if paragraph_blockname %} {{ paragraph_blockname }}__{{ paragraph_base_class_var }}{% endif %}">
  {% block paragraph_content %}
    {{ paragraph_content }}
  {% endblock %}
</p>

Emulsify 2.x:

<p {{ bem('paragraph', ['large', 'red']) }}>
  {% block paragraph_content %}
    {{ paragraph_content }}
  {% endblock %}
</p>

In both Pattern Lab and Drupal, the function above will create <p class="paragraph paragraph--large paragraph--red">, but in Drupal it will use the equivalent of <p{{ attributes.addClass('paragraph paragraph--large paragraph--red') }}>, appending these classes to whatever classes core or other plugins provide as well. Simpler syntax + Drupal Attributes support!

We have released the BEM Twig function open source under the Drupal Pattern Lab initiative. It is in Emulsify 2.x by default, but we wanted other projects to be able to benefit from it as well.

Usage

The BEM Twig function accepts four arguments, only one of which is required.

Simple block name:
<h1 {{ bem('title') }}>

In Drupal and Pattern Lab, this will print:

<h1 class="title">

Block with modifiers (optional array allowing multiple modifiers):

<h1 {{ bem('title', ['small', 'red']) }}>

This creates:

<h1 class="title title--small title--red">

Element with modifiers and block name (optional):

<h1 {{ bem('title', ['small', 'red'], 'card') }}>

This creates:

<h1 class="card__title card__title--small card__title--red">

Element with block name, but no modifiers (optional):

<h1 {{ bem('title', '', 'card') }}>

This creates:

<h1 class="card__title">

Element with modifiers, block name and extra classes (optional, in case you need non-BEM classes):

<h1 {{ bem('title', ['small', 'red'], 'card', ['js-click', 'something-else']) }}>

This creates:

<h1 class="card__title card__title--small card__title--red js-click something-else">

Element with extra classes only (optional):

<h1 {{ bem('title', '', '', ['js-click']) }}>

This creates:

<h1 class="title js-click">

Ba da BEM, Ba da boom

With the new BEM Twig extension that we’ve added to Emulsify 2.x, you can easily deliver classes to Pattern Lab and Drupal, while keeping a nice, simple syntax. Thanks for following along! Make sure you check out the other posts in this series and their video tutorials as well!

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Evan Willhite

Evan Willhite is a frontend engineer at Four Kitchens who thrives on creating delightful digital experiences for users, clients, and fellow engineers. He enjoys running, hot chicken, playing music, and being a homebody with his family.

Oct 13 2017
Oct 13
October 13th, 2017

Welcome to the second episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to teach you how to create an Emulsify 2.0 starter kit with Drush. This blog post follows the video closely, so you can skip ahead or repeat sections in the video by referring to the timestamps for each section.

PURPOSE [00:15]

This screencast will specifically cover the Emulsify Drush command. The command’s purpose is to set up a new copy of the Emulsify theme.

Note: I used the word “copy” here, not “subtheme,” intentionally. That’s because your new copy is a subtheme of Drupal Core’s Stable theme, NOT of Emulsify.

This new copy of Emulsify will use the human-readable name that you provide, and will build the necessary structure to get you on your way to developing a custom theme.

REQUIREMENTS [00:45]

Before we dig in too deep, I recommend that you have the following installed first:

  • a Drupal 8 Core installation
  • the Drush CLI, major version 8 or higher
  • Node.js, preferably the latest stable version
  • a working copy of the Emulsify demo theme, 2.x or greater

If you haven’t already watched the Emulsify 2.0 composer install presentation, please stop this video and go watch that one.

Note: If you aren’t already using Drush 9, you should consider upgrading as soon as possible, because the next minor release of Drupal Core, 8.4.0, will only work with Drush 9 or greater.

RECOMMENDATIONS [01:33]

We recommend that you use PHP 7 or greater, as you get massive performance improvements for very little work.

We also recommend that you use Composer to install Drupal and Emulsify. In fact, if you didn’t use Composer to install Emulsify—or at least run composer install inside of Emulsify—you will get errors. You will also notice errors if npm install failed on the Emulsify demo theme installation.

AGENDA [02:06]

Now that we have everything set up and ready to go, this presentation will first discuss the theory behind the Drush script. Then we will show what you should expect if the installation is successful. After that, I will give you some links to additional resources.

BACKGROUND [02:25]

The general idea of the command is that it creates a new theme from Emulsify’s files but is actually based on Drupal Core’s Stable theme. Once you have run the command, the demo Emulsify theme is no longer required and you can uninstall it from your Drupal codebase.

WHEN, WHERE, and WHY? [02:44]

WHEN: You should run this command before writing any custom code but after your Drupal 8 site is working and Emulsify has been installed (via Composer).

WHERE: You should run the command from the Drupal root or use a Drush alias.

WHY: Because you should NOT edit the Emulsify theme’s files directly. If you installed Emulsify the recommended way (via Composer), the next time you run composer update ALL of your custom code changes will be wiped out. If this happens, I really hope you are using version control.

HOW TO USE THE COMMAND? [03:24]

Arguments:

First, the command requires a single argument: the human-readable name. This name can contain spaces and capital letters.

Options:

The command has defaults set for options that you can override.

The first is the theme description, which will appear within Drupal and in your .info file.

The second is the machine name; this option lets you pick the directory name and the machine name as it appears within Drupal.

The third option is the path; this is the path your theme will be installed to. It defaults to “themes/custom”, but you can change it to any directory relative to your web root.

The fourth and final option is the slim option, for advanced users who don’t need demo content and want nothing but the bare minimum required to create a new theme.

Note:

Only the human-readable name is required. Options can appear in any order, can be omitted entirely, and you can pass just one if you only want to change a single default. A sketch of a full invocation appears below.
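For example, an invocation might look like this. This is a hedged sketch: check drush help emulsify for the exact option names, which may differ from what’s shown here, and the theme name is hypothetical.

drush emulsify "My Awesome Theme" \
  --description="A custom Emulsify-based theme." \
  --machine-name=my_awesome_theme \
  --path=themes/custom \
  --slim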

SUCCESS [04:52]

If your new theme was successfully created you should see the successful output message. In the example below I used the slim option because it is a bit faster to run but again this is an option and is NOT required.

The success message contains information you may find helpful, including the name of the theme that was created, the path where it was installed, and the next required step for setup.

THEME SETUP [05:25]

Now set up your custom theme. Navigate to your custom theme on the command line and run yarn, watching as Pattern Lab is downloaded and installed. If the installation succeeds, you should see a Pattern Lab success message, and your theme should now be visible within Drupal.
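In other words, assuming the hypothetical machine name and default path from the earlier example:

cd themes/custom/my_awesome_theme
yarn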

COMPILING YOUR STYLE GUIDE [05:51]

Now that you have Pattern Lab successfully installed and committed to your version control system, you are probably eager to use it. Emulsify uses npm scripts to set up a local Pattern Lab instance for displaying your style guide.

The script you are interested in is yarn start. Run this command for all of your local development. You do NOT have to have a working Drupal installation at this point to do development on your components.

If you need a designer who isn’t familiar with Drupal to make some tweaks, you only have to give them your codebase, have them run yarn to install, and yarn start to see your style guide.

It is, however, recommended that the initial setup of your components be done by someone with background knowledge of Drupal templates and themes, as the variables passed to each component will differ for each Drupal template.

For more information on components and templates, keep an eye out for our upcoming demo components and screencasts on building components.

VIEWING YOUR STYLE GUIDE [07:05]

Now that you have run yarn start, you can open your browser and navigate to the localhost URL that appears in your console. If you get an error here, you might already have something running on port 3000. If you need to cancel the script, hit Control+C.

ADDITIONAL RESOURCES [07:24]

Thank you for watching today’s screencast. We hope you found this presentation informative and enjoy working with Emulsify 2.0. For additional resources, visit emulsify.info or github.com/fourkitchens/emulsify.

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Chris Martin

Chris Martin is a support engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Oct 05 2017
Oct 05
October 5th, 2017

Welcome to the first episode in our new video series for Emulsify. Emulsify 2.x is a new release that embodies our commitment to component-driven design within Drupal. We’ve added Composer and Drush support, as well as open-source Twig functions and many other changes to increase ease-of-use.

In this video, we’re going to get you up and running with Emulsify. This blog post accompanies a tutorial video, which you can find embedded at the end.

Emulsify is, at its core, a prototyping tool. At Four Kitchens we also use it as a Drupal 8 theme starter kit. Depending on how you want to use it, the installation steps will vary. I’ll quickly go over how to install and use Emulsify as a standalone prototyping tool, then I’ll show you how we use it to theme Drupal 8 sites.

Emulsify Standalone

Installing Emulsify core as a standalone tool is a simple process with Composer and NPM (or Yarn).

  1. composer create-project fourkitchens/emulsify --stability dev --no-interaction emulsify
  2. cd emulsify
  3. yarn install (or npm install, if you don’t have yarn installed)

Once the installation process is complete, you can start it with either npm start or yarn start:

  1. yarn start

Once it’s up, you can use either the Local or External links to view the Pattern Lab instance in the browser. (The External link is useful for physical device testing, like on your phone or tablet, but can vary per machine. So, if you’re using hosted fonts, you might have to add a bunch of IPs to your account to accommodate all of your developers.)

The start process runs all of the build and watch commands. So once it’s up, all of your changes are instantly reflected in the browser.

I can add additional colors to the _color-vars.scss file, edit the card.yml example data, or even update the 01-card.twig file to modify the structure of the card component.
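For instance, adding a color might look something like this. The variable names here are hypothetical; match whatever convention your copy of _color-vars.scss uses.

// _color-vars.scss (path within components/_patterns may vary)
$color-brand: #0071b8;        // existing brand color
$color-brand-accent: #f26522; // hypothetical new accent color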

That’s really all there is to using Emulsify as a prototyping tool. You can quickly build out your components using component-driven design without needing a full web server and site up and running.

Emulsify in a Composer-Based Drupal 8 Installation

It’s general best practice to install Drupal 8 via Composer, and that’s what we do at Four Kitchens. So, we’ve built Emulsify 2 to work great in that environment. I won’t cover the details of installing Drupal via Composer since that’s out of scope for this video, and there are videos that cover that already. Instead, I’ll quickly run through that process, and then come back and walk through the details of how to install Emulsify in a Composer-based Drupal 8 site.

Okay, I’ve got a fresh Drupal 8 site installed. Let’s install Emulsify alongside it.

From the project root, we’ll run the composer require command:

  • composer require fourkitchens/emulsify

Next, we’ll enable Emulsify and its dependencies:

  • cd web
  • drush en emulsify components unified_twig_ext -y

At this point, we highly recommend you use the Drush script that comes with Emulsify to create a custom clone of Emulsify for your actual production site. The reason is that any change you make to Emulsify core will be overwritten when you update Emulsify, and there’s currently no really good way to create a child theme of a component-based, Pattern Lab-powered Drupal theme. So, the Drush script simply creates a clone of Emulsify and turns the file renaming process into a simple script.

We have another video covering the Drush script, so definitely watch that for all of the details. For this video, though, I’ll just use Emulsify core, since I’m not going to make any customizations.

  • cd web/themes/contrib/emulsify/ (If you do create a clone with the drush script, you’ll cd web/themes/custom/THEME_NAME/)
  • yarn install

  • yarn start

Now we have our Pattern Lab instance up and running, accessible at the links provided.

We can also head over to the “Appearance” page on our site, and set our theme as the default. When we do that, and go back to the homepage, it looks all boring and gray, but that’s just because we haven’t started doing any actual theming yet.

At this point, the theme is installed, and you’re ready to create your components and make your site look beautiful!

[embedded content]

Thanks for following our Emulsify 2.x tutorials. Miss a post? Read the full series here.

Pt 1: Installing Emulsify | Pt 2: Creating your Emulsify 2.0 Starter Kit with Drush | Pt 3: BEM Twig Function | Pt 4: DRY Twig Approach | Pt 5: Building a Full Site Header in Drupal

Just need the videos? Watch them all on our channel.

Download Emulsify

Brian Lewis

Brian Lewis is a frontend engineer at Four Kitchens, and is passionate about sharing knowledge and learning new tools and techniques.

Jul 13 2017
Jul 13
July 13th, 2017

When creating the Global Academy for continuing Medical Education (GAME) site for Frontline, we had to tackle several complex problems with regard to content migrations. The previous site had a lot of legacy content we had to bring over into the new system. By tackling each unique problem, we were able to migrate most of the content into the new Drupal 7 site.

Setting Up the New Site

The system Frontline used before the redesign was called Typo3, along with a suite of individual, internally-hosted ASP sites for conferences. Frontline had several kinds of content that displayed differently throughout the site. The complexity with handling the migration was that a lot of the content was in WYSIWYG fields that contained large amounts of custom HTML.

We decided to go with Drupal 7 for this project so we could more easily use code that was created from the MDEdge.com site.


The GAME website redesign greatly improved the flow of the content and how it was displayed on the frontend, and part of that improvement was displaying specific pieces of content in different sections of the page. The burning question that plagued us when tackling this problem was “How are we going to extract the specific pieces of data and get them inserted into the correct fields in Drupal?”

Before we could get deep into the code, we had to do some planning and setup to make sure we were clear in how to best handle the different types of content. This also included hammering out the content model. Once we got to a spot where we could start migrating content, we decided to use the Migrate module. We grabbed the current site files, images and database and put them into a central location outside of the current site that we could easily access. This would allow us to re-run these migrations even after the site launched (if we needed to)!

Migrating Articles

This content on the new site is connected to MDEdge.com via a REST API. One complication was that the content on GAME was added manually to Typo3 and wasn’t tagged for use with specific fields. The content type on the new Drupal site had a few fields for the data we were displaying, plus a field that stores the article ID from MDEdge.com. To get that ID for this migration, we mapped the title of each news article in Typo3 to the title of the article on MDEdge.com. It wasn’t a perfect solution, but it allowed us to do an initial migration of the data.

Conferences Migration

For GAME’s conferences, since there were not too many on the site, we decided to import the main conference data via a Google spreadsheet. The Google doc was a fairly simple spreadsheet that contained a column we used to identify each row in the migration, plus a column for each field that is in that conference’s content type. This worked out well because most of the content in the redesign was new for this content type. This approach allowed the client to start adding content before the content types or migrations were fully built.
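For reference, a Drupal 7 Migrate class can consume such a spreadsheet by exporting it to CSV and using the MigrateSourceCSV source that ships with the Migrate module. This is a sketch under assumptions: the module name, file path, and column names are hypothetical.

<?php
// Inside a hypothetical ConferenceMigration class constructor.
$columns = array(
  array('conf_id', 'Unique conference ID'),
  array('title', 'Conference title'),
  array('start_date', 'Start date'),
);
$this->source = new MigrateSourceCSV(
  drupal_get_path('module', 'game_migrate') . '/data/conferences.csv',
  $columns,
  array('header_rows' => 1)
);
$this->map = new MigrateSQLMap($this->machineName,
  array('conf_id' => array('type' => 'varchar', 'length' => 255, 'not null' => TRUE)),
  MigrateDestinationNode::getKeySchema()
);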

Our spreadsheet handled the top level conference data, but it did not handle the pages attached to each conference. Page content was either stored in the Typo3 data or we needed to extract the HTML from the ASP sites.

Typo3 Categories to Drupal Taxonomies

To make sure we mapped the content in the migrations properly, we created another Google doc mapping file that connected the Typo3 categories to Drupal taxonomies. We set it up to support multiple taxonomy terms that could be mapped to one Typo3 category.
[NB: Here is some code that we used to help with the conversion: https://pastebin.com/aeUV81UX.]

Our mapping system worked out fantastically well. The only problem we encountered was that, since we allowed three taxonomy terms to be mapped to one Typo3 category, the client noticed cases where too many terms were assigned to content that had more than one Typo3 category. But this was a content-related issue, and it just required them to revisit the mapping document and tweak it as necessary.

Slaying the Beast:
Extracting, Importing, and Redirecting

One of the larger problems we tackled was how to get the HTML from the Typo3 system and the ASP conference sites into the new Drupal 7 setup.

The ASP conference sites were handled by grabbing the HTML for each of those pages and extracting the page title, body, and photos. The migration of the conference sites was challenging because we were dealing with different HTML for different sites and trying to get all those differences matched up in Drupal.

Grabbing the data from the Typo3 sites presented another challenge because we had to figure out where the different data was stored in the database. This was a uniquely interesting process because we had to determine which tables were connected to which other tables in order to figure out the content relationships in the database.


A few things we learned in this process:

  • We found all of the content on the current site was in these tables (which are connected to each other): pages, tt_content, tt_news, tt_news_cat_mm and link_cache.
  • After talking with the client, we were able to grab content based on certain Typo3 categories or the pages hierarchy relationship. This helped fill in some of the gaps where a direct relationship could not be made by looking at the database.
  • It was clear that getting 100% of the legacy content wasn’t going to be realistic, mainly because of the loose content relationships in Typo3. After talking to the client we agreed to not migrate content older than a certain date.
  • It was also clear that—given how much HTML was in the content—some manual cleanup was going to be required.

Once we were able to get to the main HTML for the content, we had to figure out how to extract the specific pieces we needed from that HTML.

Once we had access to the data we needed, it was a matter of getting it into Drupal. The Migrate module made a lot of this fairly easy with how much functionality it provides out of the box. We ended up using the prepareRow() method a lot to grab specific pieces of content and assign them to Drupal fields.
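As an illustration, a prepareRow() implementation might pull one piece of data out of the legacy WYSIWYG HTML and hand it to a dedicated field. This is a sketch: the class and property names are hypothetical, but the hook itself is standard Drupal 7 Migrate.

<?php
class ConferencePageMigration extends Migration {

  public function prepareRow($row) {
    // Let the parent implementation decide whether to skip the row first.
    if (parent::prepareRow($row) === FALSE) {
      return FALSE;
    }
    // Extract the first image from the legacy HTML (hypothetical property names).
    $dom = new DOMDocument();
    @$dom->loadHTML($row->legacy_html);
    $images = $dom->getElementsByTagName('img');
    $row->lead_image = $images->length ? $images->item(0)->getAttribute('src') : NULL;
    return TRUE;
  }

}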

Handling Redirects

We wanted to handle as many of the redirects as we could automatically, so the client wouldn’t have to add thousands of redirects and to ensure existing links would continue to work after the new site launched. To do this we mapped the unique row in the Typo3 database to the unique ID we were storing in the custom migration.

As long as you are handling the unique IDs properly in your use of the Migration API, this is a great way to handle mapping what was migrated to the data in Drupal. You use the unique identifier stored for each migration row and grab the corresponding node ID to get the correct URL that should be loaded. Below are some sample queries we used to get access to the migrated nodes in the system. We used UNION queries because the content that was imported from the legacy system could be in any of these tables.

SELECT destid1 FROM migrate_map_cmeactivitynode WHERE sourceid1 IN(:sourceid)
UNION
SELECT destid1 FROM migrate_map_cmeactivitycontentnode WHERE sourceid1 IN(:sourceid)
UNION
SELECT destid1 FROM migrate_map_conferencepagetypo3node WHERE sourceid1 IN(:sourceid)
…
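With the destination ID in hand, issuing the redirect is straightforward. Here is a sketch in Drupal 7 terms, with the UNION query above abbreviated as $sql and $legacy_id standing in for the stored Typo3 identifier:

<?php
// Look up the migrated node for the legacy Typo3 identifier.
$nid = db_query($sql, array(':sourceid' => array($legacy_id)))->fetchField();
if ($nid) {
  // Permanently redirect the old URL to the new node.
  drupal_goto('node/' . $nid, array(), 301);
}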

Wrap Up

Migrating complex websites is rarely simple. One thing we learned on this project is that it is best to jump deep into migrations early in the project lifecycle, so the big roadblocks can be identified as early as possible. It also is best to give the client as much time as possible to work through any content cleanup issues that may be required.

We used a lot of Google spreadsheets to get needed information from the client. This made things much simpler on all fronts and allowed the client to start gathering needed content much sooner in the development process.

In a perfect world, all content would be easily migrated over without any problems, but this usually doesn’t happen. It can be difficult to know when you have taken a migration “far enough” and you are better off jumping onto other things. This is where communication with the full team early is vital to not having migration issues take over a project.

Chris Roane

When not breaking down and solving complex problems as quickly as possible, Chris volunteers for a local theater called Arthouse Cinema & Pub.

Jun 29 2017
Jun 29
June 29th, 2017

Recently I was working on a Drupal 8 project where we were using the improved Features module to create configuration container modules with some special purposes. Due to client architectural needs, we had to move the /features folder into a separate repository. We basically needed to make it available to many sites in a way that let us keep doing active development on it, and we did so by making the new repo a Composer dependency of all our projects.

One of the downsides of this new direction was its effect on CircleCI builds for individual projects, since installing and reverting features was an important part of them. For example, to make a new feature module available, we’d push it to this ‘shared’ repo, but to actually enable it we’d need to push the bit change in the core.extension.yml config file to our project repo. Yes, we were using a mixed approach: both features and conventional configuration management.

So a new pull request would be created in both repositories. The problem for Circle builds—given the approach previously outlined—is that builds generated for the pull request in the project repository would require the master branch of the ‘shared’ one. So, for the pull request in the project repo, we’d try to build a site by importing configuration that says a particular feature module should be enabled, and that module wouldn’t exist (likely not present in shared master at that time, still a pull request), so it would totally crash.

There is probably no straightforward way to solve this problem, but we came up with a solution that is half code, half strategy. Beyond technical details, there is no practical way to determine which branch of the shared repo should be required for a pull request in the project repo, unless we assume conventions. In our case, we assumed that the correct branch to pair with a project branch was one named the same way. So if a build was the result of a pull request from branch X, we could try to find a PR from branch X in the shared repo, and if it existed, that’d be our guy. Otherwise we’d keep pulling master.

So we created a script to do that:

<?php

$branch = $argv[1];
$github_token = $argv[2];
$github_user = $argv[3];
$project_user = $argv[4];

$shared_repos = array(
  'organization/shared',
);

foreach ($shared_repos as $repo) {
  print_r("Checking repo $repo for a pull request in a '$branch' branch...\n");
  $pr = getPRObjectFromBranch($branch, $github_token, $github_user, $project_user, $repo);
  if (!empty($pr)) {
    print_r("Found. Requiring...\n");
    exec("composer require $repo:dev-$branch");
    print_r("$repo:dev-$branch pulled.\n");
  }
  else {
    print_r("Nothing found.\n");
  }
}

function getPRObjectFromBranch($branch_name, $github_token, $github_user, $project_user, $repo) {
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, "https://api.github.com/repos/$repo/pulls?head=$project_user:$branch_name");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_USERPWD, "$github_user:$github_token");
  curl_setopt($ch, CURLOPT_USERAGENT, "$github_user");
  $output = json_decode(curl_exec($ch), TRUE);
  curl_close($ch);
  return $output;
}

As you probably know, Circle builds are connected to the internet, so you can make remote requests. What we’re doing here is using the GitHub API in the middle of a build in the project repo to connect to our shared repo with cURL and try to find a pull request whose branch name matches the one we’re building over. If the request returns something, we can safely say there is a branch named the same way as the current one with an open pull request in the shared repo, and we can require it.

What’s left for this to work is actually calling the script:

- php scripts/require_feature_branch.php "$CIRCLE_BRANCH" "$GITHUB_TOKEN" "$CIRCLE_USERNAME" "$CIRCLE_PROJECT_USERNAME"

We can do this at any point in circle.yml, since composer require will actually update the composer.json file, so any composer interaction after the script runs will take your requirement into consideration. Notice that the shared repo will be required twice if you keep the requirement in your composer.json file. You could safely remove it from there if you make the script require the master branch when no matching branch is found, although this could have unintended effects in other types of environments, like local development.

Note: A quick reference about the parameters passed to the script:

$GITHUB_TOKEN: # Generate from https://github.com/settings/tokens
$CIRCLE_*: # CircleCI variables, automatically available

[Editor’s Note: The post “Running CircleCI Builds Based on Many Repositories” was originally published on Joel Travieso’s Medium blog.]

Joel Travieso

Joel focuses on the backend and architecture of web projects seeking to constantly improve by considering the latest developments of the art.

May 31 2017
May 31
May 31st, 2017

In the last post, we created a nested accordion component within Pattern Lab. In this post, we will walk through the basics of integrating this component into Drupal.

Requirements

Even though Emulsify is a ready-made Drupal 8 theme, there are some requirements and background to be aware of when using it.

Emulsify is currently meant to be used as a starterkit. In contrast to a base theme, a starterkit is simply enabled as-is, and tweaked to meet your needs. This is purposeful—your components should match your design requirements, so you should edit/delete example components as needed.

There is currently a dependency for Drupal theming, which is the Components module. This module allows one to define custom namespaces outside of the expected theme /templates directory. Emulsify comes with predefined namespaces for the atomic design directories in Pattern Lab (atoms, molecules, organisms, etc.). Even if you’re not 100% clear currently on what this module does, just know all you have to do is enable the Emulsify theme and the Components module and you’re off to the races.

Components in Drupal

In our last post we built an accordion component. Let’s now integrate this component into our Drupal site. It’s important to understand which individual components you will be working with. For our purposes, we have two: an accordion item (<dt>, <dd>) and an accordion list (<dl>). It’s important to note that these will also correspond to two separate Drupal files. Although this can be built in Drupal a variety of ways, in the example below each accordion item will be a node and the accordion list will be a view.

Accordion Item

You will first want to create an Accordion content type (machine name: accordion), and we will use the title as the <dt> and the body as the <dd>. Once you’ve done this (and added some Accordion content items), let’s add our node template Twig file for the accordion item by duplicating templates/content/node.html.twig into templates/content/node--accordion.html.twig. In place of the default include function in that file, place the following:

{% include "@molecules/accordion-item/accordion-item.twig"
   with {
      "accordion_term": label,
      "accordion_def": content.body,
   }
%}

As you can see, this is a direct copy of the include statement in our accordion component file except the variables have been replaced. Makes sense, right? We want Drupal to replace those static variables with its dynamic ones, in this case label (the node title) and content.body. If you visit your accordion node in the browser (note: you will need to rebuild cache when adding new template files), you will now see your styled accordion item!

But something’s missing, right? When you click on the title, the body field should collapse, which comes from our JavaScript functionality. While JavaScript in the Pattern Lab component will automatically work because Emulsify compiles it to a single file loaded for all components, we want to use Drupal’s built-in aggregation mechanisms for adding JavaScript responsibly. To do so, we need to add a library to the theme. This means adding the following code into emulsify.libraries.yml:

accordion:
  js:
    components/_patterns/02-molecules/accordion-item/accordion-item.js: {}

Once you’ve done that and rebuilt the cache, you can now use the following snippet in any Drupal Twig file to load that library [NB: read more about attach_library]:

{{ attach_library('emulsify/accordion') }}

So, once you’ve added that function to your node–accordion.html.twig file, you should have a working accordion item. Not only does this function load your accordion JavaScript, but it does so in a way that only loads it when that Twig file is used, and also takes advantage of Drupal’s JavaScript aggregation system. Win-win!

Accordion List

So, now that our individual accordion item works as it should, let’s build our accordion list. For this, I’ve created a view called Accordion (machine name: accordion) that shows “Content of Type: Accordion” and a page display that shows an unformatted list of full posts.

Now that the view has been created, let’s copy views-view-unformatted.html.twig from our parent Stable theme (/core/themes/stable/templates/views) and rename it views-view-unformatted--accordion.html.twig. Inside of that file, we will write our include statement for the accordion <dl> component. But before we do that, we need to make a key change to that component file. If you go back to the contents of that file, you’ll notice that it has a for loop built to pass in Pattern Lab data and nest the accordion items themselves:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_item": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

In Drupal, we don’t want to iterate over this static list; all we need to do is provide a single variable for the  Views rows to be passed into. Let’s tweak our code a bit to allow for that:

<dl class="accordion-item">
  {% if drupal == true %}
    {{ accordion_items }}
  {% else %}
    {% for listItem in listItems.four %}
      {% include "@molecules/accordion-item/accordion-item.twig"
        with {
          "accordion_term": listItem.headline.short,
          "accordion_def": listItem.excerpt.long
        }
      %}
    {% endfor %}
  {% endif %}
</dl>

You’ll notice that we’ve added an if statement to check whether “drupal” is true—this variable can actually be anything Pattern Lab doesn’t recognize (see the next code snippet). Finally, in views-view-unformatted--accordion.html.twig let’s put the following:

{% set drupal = true %}
{% include "@organisms/accordion/accordion.twig"
  with {
    "accordion_items": rows,
  }
%}

At the view level, all we need is this outer <dl> wrapper and to just pass in our Views rows (which will contain our already component-ized nodes). Rebuild the cache, visit your view page and voila! You now have a fully working accordion!

Conclusion

We have now not only created a more complex nested component that uses JavaScript… we have done it in Drupal! Your HTML, CSS and JavaScript are where they belong (in the components themselves), and you are merely passing Drupal’s dynamic data into those files.

There’s definitely a lot more to learn; below is a list of posts and webinars to continue your education and get involved in the future of component-driven development and our tool, Emulsify.

Evan Willhite

May 24 2017
May 24
May 24th, 2017

In the last post, we introduced Emulsify and spoke a little about the history that went into its creation. In this post, we will walk through the basics of Emulsify to get you building lovely, organized components automatically added to Pattern Lab.

Prototyping

Emulsify is at its most basic level a prototyping tool. Assuming you’ve met the requirements and have installed Emulsify, running the tool is as simple as navigating to the directory and running `npm start`. This task takes care of building your Pattern Lab website, compiling Sass to minified CSS, linting and minifying JavaScript.

Also, this single command will start a watch task and open your Pattern Lab instance automatically in a browser. So now when you save a file, it will run the appropriate task and refresh the browser to show your latest changes. In other words, it is an end-to-end prototyping tool meant to allow a developer to start creating components quickly with a solid backbone of automation.

Component-Based Theming

Emulsify, like Pattern Lab, expects the developer to use a component-based building approach. This approach is elegantly simple: write your DRY components, including your Sass and JavaScript, in a single directory. Automation takes care of the Sass compilation to a single CSS file and JavaScript to a single JavaScript file for viewing functionality in Pattern Lab.

Because Emulsify leverages the Twig templating engine, you can build each component HTML (Twig) file and then use the Twig functions include, embed, and extends to combine components into full-scale layouts. Sound confusing? No need to worry—there are multiple examples pre-built in Emulsify. Let’s take a look at one below.

Simple Accordion

Below is a simple but common user experience—the accordion. Let’s look at the markup for a single FAQ accordion item component:

<dt class="accordion-item__term">What is Emulsify?</dt>
<dd class="accordion-item__def">A Pattern Lab prototyping tool and Drupal 8 base theme.</dd>

If you look in the components/_patterns/02-molecules/accordion-item directory, you’ll find this Twig file as well as the CSS and JavaScript files that provide the default styling and open/close functionality respectively. (You’ll also see a YAML file, which is used to provide data for the component in Pattern Lab.)

But an accordion typically has multiple items, and HTML definitions should have a dl wrapper, right? Let’s take a look at the emulsify/components/_patterns/03-organisms/accordion/accordion.twig markup:

<dl class="accordion-item">
  {% for listItem in listItems.four %}
    {% include "@molecules/accordion-item/accordion-item.twig"
      with {
        "accordion_item": listItem.headline.short,
        "accordion_def": listItem.excerpt.long
      }
    %}
  {% endfor %}
</dl>

Here you can see that the only HTML added is the dl wrapper. Inside of that, we have a Twig for loop that will loop through our list items and for each one include our single accordion item component above. The rest of the component syntax is Pattern Lab specific (e.g., listItems, headline.short, excerpt.long).

Conclusion

If you are following along in your own local Emulsify installation, you can view this accordion in action inside your Pattern Lab installation. With this example, we’ve introduced not only the basics of component-based theming, but we’ve also seen an example of inheriting templates using the Twig include function. Using this example as well as the other pre-built components in Emulsify, we have what we need to start prototyping!

In the next article, we’ll dive into how to implement Emulsify as a Drupal 8 theme and start building a component-based Drupal 8 project. You can also view a recording of a webinar we made in March. Until then, see you next week!

Evan Willhite

May 17 2017
May 17
May 17th, 2017

Shared Principles

There is no question that the frontend space has exploded in the past decade, having gone from the seemingly novice aspect of web development to a first-class specialization. At the smaller agency level, being a frontend engineer typically involves a balancing act between a general knowledge of web development and keeping up with frontend best practices. This makes it all the more important for agency frontend teams to take a step back and determine some shared principles. We at Four Kitchens did this through late last summer and into fall, and here’s what we came up with. A system working from shared principles must be:

1. Backend Agnostic

Even within Four Kitchens, we build websites and applications using a variety of backend languages and database structures, and this is only a microcosm of the massive diversity in modern web development. Our frontend team strives to choose and build tools that are portable between backend systems. Not only is this a smart goal internally, it’s also an important deliverable for our clients.

2. Modular

It seems to me the frontend community has spent the past few years trying to find ways to incorporate best practices that have a rich history in backend programming languages. We’ve realized we, too, need to be able to build code structures that can scale without brittleness or bloat. For this reason, the Four Kitchens frontend team has rallied around component-based theming and approaches like BEM syntax. Put simply, we want the UI pieces we build to be as portable as the structure itself: flexible, removable, DRY.

3. Easy to Learn

Because we are aiming to build tools that aren’t married to backend systems and are modular, this in turn should make them much more approachable. We want to build tools that help a frontend engineer who works in any language to quickly build logically organized, component-based prototypes with little ramp-up.

4. Open Source

Four Kitchens has been devoted to the culture of open-source software from the beginning, and we as a frontend team want to continue that commitment by leveraging and building tools that do the same.

Introducing Emulsify

Knowing all this, we are proud to introduce Emulsify—a Pattern Lab prototyping tool and Drupal 8 starterkit theme. Wait… Drupal 8 starterkit you say? What happened to backend agnostic? Well, we still build a lot in Drupal, and the overhead of it being a starterkit theme is tiny and unintrusive to the prototyping process. More on this in the next post.
[NB: Check back next week for our next Emulsify post!]

With these shared values, we knew we had enough of a foundation to build a tool that would both hold us accountable to these values and help instill them as we grow and onboard new developers. We also are excited about the flexibility that this opens up in our process by having a prototyping tool that allows any frontend engineer with knowledge in any backend system (or none) to focus on building a great UI for a project.

Next in the series, we’ll go through the basics of Emulsify and explain its out-of-the-box strengths that will get you prototyping in Pattern Lab and/or creating a Drupal 8 theme quickly.

Evan Willhite

Mar 07 2017
Mar 07

This weekend’s DrupalCamp London wasn’t my first Drupal event at all: I’ve been to three DrupalCon Europes, four DrupalCamp Dublins, and a few other DrupalCamps in Ireland, plus lots of meetups. But in this case I experienced a lot of ‘first times’ that I want to share.

This was the first time I’d attended a Drupal event representing a sponsor organisation, and as a result the way I experienced it was completely different.

Firstly, you focus more on your company’s goals than on your personal aims. In this case I was helping Capgemini UK to engage and recruit people for our open positions. This allowed me to socialise more and try to connect with people. We also had T-shirts, and it’s easier to attract people when you have something free for them. I was also able to have conversations with other sponsors to see why they sponsored the event; some were also recruiting, but most were selling their solutions to prospective clients, Drupal developers, and agencies.

The best part of this experience was the people I met at other companies, and the attendees approaching us for a T-shirt or a job opportunity.

New member of Capgemini UK perspective

As a new joiner in the Capgemini UK Drupal team, I attended this event when I wasn’t even a month into the company, and I am glad I could attend at such short notice in my new position. I think this says a lot about Capgemini’s focus on training and career development, and how much they care about Drupal.

As a new employee, this event allowed me to meet more colleagues from different departments and teams in a non-working environment. Again, the best part of this experience was the people I met and the relationships I made.

I joined Capgemini from Ireland, so I was also new to the London Drupal community, and the DrupalCamp gave me the opportunity to connect and create relationships with other members of the Drupal community. Of course they were busy organising this great event, but I was able to contact some of the members, and I have to say they were very friendly when I approached any of the crew or other local members attending the event. I am very happy to have met some friendly people, and I am committed to helping and volunteering my time at future events, so this was a very good starting point. And again, the best part was the people I met.

Non-session perspective

As I had other duties I couldn’t attend all the sessions, but I was able to attend some of them as well as the keynotes. A special mention goes to the Saturday keynote from Matt Glaman; it was very motivational and made me think that anyone can grow as a developer if they try and seek out the resources to gain the knowledge. The closing keynote from Danese Cooper was very inspirational as well, about what open source is and what it should be, and how we, the developers, have the power to make it happen. We could also enjoy Malcolm Young’s presentation about code reviews.

Conclusion

Closing this article, I would like to come back to the best part of DrupalCamp for me this year, which was the people. They are always the best part of social events. I was able to catch up with old friends from Ireland, engage with people considering a position at Capgemini, and introduce myself to the London Drupal community, so overall I am very happy with this DrupalCamp London and will be happy to return next year. In the meantime I will be attending some Drupal meetups and trying to get involved in the community, so don’t hesitate to contact me if you have any questions or need my help.

Mar 02 2017
Mar 02
March 2nd, 2017

You might have heard about high availability before but didn’t think your site was large enough to handle the extra architecture or overhead. I would like to encourage you to think again and be creative.

Background

DigitalOcean has a concept it calls floating IPs. A floating IP is an IP address that can be instantly moved from one Droplet to another Droplet in the same data center. This idea is great: it allows you to keep your site running in the event of a failure.

Credit

I have to give credit to BlackMesh for handling this process quite well. The only thing I had to do was create the tickets to change the architecture and BlackMesh implemented it.

Exact Problem

One of our support clients had the need for a complete site relaunch due to a major overhaul in the underlying architecture of their code. Specifically, they had the following changes:

  1. Change in the site docroot
  2. Migration from a single site architecture to a multisite architecture based on domain access
  3. Upgrade of PHP version that required a server replacement/upgrade in linux distribution version

Any of these individually could have benefited from this approach. We just bundled all of the changes together to deliver minimal downtime to the site’s users.

Solution

So, what is the right solution for a data migration that takes well over three hours to run? Site downtime for hours during peak traffic is unacceptable. The answer we came up with was to use a floating IP that can easily change the backend server when we are ready to flip the switch. This allowed us to migrate our data on a new, separate server using its own database (essentially having two live servers at the same time).

Benefits

Notice that we didn’t need to change the DNS records here, which meant we didn’t have to wait for them to propagate all over the internet. The new site was live instantly.

Additional Details

Some other notes during the transition that may lead to separate blog posts:

  1. We created a shell script to handle the actual deployment and tested it before the actual “go live” date to minimize surprises.
  2. A private network was created to allow the servers to communicate to each other directly and behind the scenes.
  3. To complicate this process, during development (prelaunch) the user base grew so much that we had to offload the Solr server onto another machine to reduce server CPU usage. This meant that additional backend servers were also involved in this transition.

Go-Live (Migration Complete)

After you have completed your deployment process, you are ready to switch the floating IP to the new server. In our case we were using keepalived, which responds to a health check on the server. Our health check was a simple PHP file that responded with the text true or false. When we were ready to switch, we just changed the health check’s response to false, and we got an instant switch from the old server to the new server with minimal interruption.
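The health check itself can be as small as the sketch below; the file name and the flag mechanics are hypothetical stand-ins for whatever your keepalived configuration polls.

<?php
// health.php: polled by keepalived. Flip $live to FALSE when you are ready
// to fail the check and move the floating IP to the new server.
$live = TRUE;
echo $live ? 'true' : 'false';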

Acceptable Losses

There were a few things we couldn’t get around:

  1. The need for a content freeze
  2. The need for a user registration freeze

The reason for this was that the database updates required the site to be in maintenance mode while they were performed.

A problem worth mentioning:

  1. The database did have a few tables with acceptable losses. The user sessions table and the cache_form table were both out of sync when we switched over, so any new sessions and saved forms were unfortunately lost during this process. The result was that users had to log in again and fill out forms that hadn’t been submitted. In the rare event that a user changed their name or other fields on their preferences page, those changes were lost.

Additional Considerations

  1. Our mail preferences are handled by third parties
  2. Comments aren’t allowed on this site

Chris Martin

Chris Martin is a junior engineer at Four Kitchens. When not maintaining websites he can be found building drones, computers, robots, and occasionally traveling to China.

Dec 01 2016

Startups and products can move faster than agencies that serve clients, as there are no feedback loops or manual QA steps by an external authority that can halt a build going live.

One of the roundtable discussions that popped up this week while we're all in Minsk is that agencies which practice Agile transparently, as SystemSeed does, see a common trade-off: CI/CD (Continuous Integration / Continuous Deployment) isn't quite possible as long as you have manual QA and that lead time baked in.

Non-Agile (or “Waterfall”) agencies can potentially deliver work faster, but without any insight by the client, which inevitably leads to change requests. I’ve always visualised this as the false economy of Waterfall, as demonstrated here:

[Image: the false economy of Waterfall]

Would the client prefer Waterfall plus change requests, kept in the dark throughout development, with all work potentially delivered faster (and never in its final state)? Or would they prefer full transparency, with having to check all story details, QA, and sign-off, as well as multi-stakeholder oversight? In short - it can get complicated.

CI and CD aren’t truly possible when a manual review step is mandatory. Today we maintain thorough manual QA, by ourselves and our clients, before deploying, using a “standard” (feature branch -> dev -> stage -> production) DevOps process in which manual QA and automated test suites run both at the feature-branch level and just before deployment (Stage). Pantheon provides this hosting infrastructure and makes it simple, as visualised below:

[Image: the feature branch -> dev -> stage -> production pipeline on Pantheon]

This week we brainstormed Blue & Green live environments, which may allow for full Continuous Deployment whereby deploys are automated whenever scripted tests pass, specifically without manual client sign-off. This adds a fully live clone of the Production environment to the chain: new changes are always deployed out to the clone of live, and at any time the system can be switched from pointing at the “Green” production environment to the “Blue” clone, or back again.

[Image: Blue/Green live environments with a switchable production clone]

Assuming rollbacks are simple and the databases are either kept in sync, or both Green and Blue codebases point at a single DB, this theory is well supported and could well be the future of DevOps, especially when deploys are best made immediately rather than the next morning or during low-traffic windows.

In this case clients would be approving work already deployed to a production-ready environment which will be switched to as soon as their manual QA step is completed.

One argument made was that our Pantheon standard model allows for this in Stage already; we just need an automated process to push from Stage to Live once QA is passed. We’ll write more on this if our own processes move in this direction.

Nov 16 2016

Speed Up Migration Development

One of the things that Drupal developers do for clients is content migration. This process takes hours of development time and often has one developer dedicated to it for the first half of the project. In the end, the completeness of the migration depends partly on how much time your client is willing to spend on building out migrations for each piece of their content and settings. If you’ve come here, you probably want to learn how to speed up your migration development so you can move on to more fun aspects of a project.

The Challenge

Our client, the NYU Wagner Graduate School of Public Service, was no exception when they decided to move to Drupal 8. With 65 content types and 84 vocabularies to weed through, our challenge was to fit all those migrations into their budget and schedule.

The Proposed Solution

Since this was one of our first Drupal 8 sites, I was the first to dig my hands into the migration system. I was particularly stoked that everything in Drupal 8 is considered an entity, which opened up a bunch of possibilities. The new automated migration system that came with core, Migrate Drupal, was also particularly intriguing. In fact, our client had started down the path of using Migrate Drupal to upgrade their site to D8. Given that they had field collections and entity references, and that Migrate Drupal was still very much experimental for Drupal 7 upgrades, it didn’t pan out with a complete migration of their data.

The proposed solution was to use the --configure-only flag of the drush command migrate-upgrade. Doing so builds out templated upgrade configurations that would move all data from Drupal 6 or 7 to Drupal 8, with the added bonus that you can use them as a starting point and modify them from there.

Migration in Drupal 7 vs Drupal 8

Since we now have the 100-mile-high view of the end game, let’s talk a little about why and how this works. In Drupal 7, migrations are strictly class-based. You can see an example of a Drupal 7 migration in the Migrate Example module. The structure of a migration tends to be one big blob of logic (broken up by class functions, of course) around a single migration. Here are the parts:

  • Class Constructor: where you define your source, destination, and field mapping
  • Prepare: a function where you do all your data processing
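Put together, a minimal Drupal 7 migration class looks roughly like the sketch below. The table, field names, and the 'legacy' database key are invented for illustration; see the Migrate Example module for the real thing.

<?php

/**
 * An illustrative Drupal 7 migration: everything lives in one class.
 */
class ExampleArticleMigration extends Migration {

  public function __construct($arguments) {
    parent::__construct($arguments);

    // Source, destination, ID map, and field mappings are all
    // defined in the constructor.
    $query = Database::getConnection('default', 'legacy')
      ->select('legacy_articles', 'a')
      ->fields('a', array('id', 'title', 'body'));
    $this->source = new MigrateSourceSQL($query);
    $this->destination = new MigrateDestinationNode('article');
    $this->map = new MigrateSQLMap($this->machineName,
      array('id' => array('type' => 'int', 'not null' => TRUE)),
      MigrateDestinationNode::getKeySchema()
    );

    $this->addFieldMapping('title', 'title');
    $this->addFieldMapping('body', 'body');
  }

  /**
   * All remaining data massaging happens in one big prepare().
   */
  public function prepare($node, stdClass $row) {
    $node->title = trim($row->title);
  }

}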

In Drupal 8, the concept of a migration has been abstracted into various parts, making them reusable and giving migrations more of a “building with blocks” feel. You can find an example inside the Migrate Plus module. Here are the parts:

  • Source Plugins: a class defining the query, initial data alteration, keys, and fields provided by the source
  • Destination Plugins: a class defining how to store the data received in Drupal 8
  • Process Plugins: a class defining how to transform data from the source to something that can be used by the destination or other process plugins; you can find a full list of what comes with core in Migrate’s documentation
  • Migration Configuration: a configuration file that brings the configuration of all the source, destination, and process plugins to make a migration

Now y’all might have noticed I left out hook_prepare_row. Granted, it is still available, and it was a key way many people manipulated data across several source fields that behaved the same way. With process plugins, you can now abstract that functionality out and reuse it in your field mappings.
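For example, a bit of cleanup logic that might once have lived in hook_prepare_row can become a small, reusable process plugin. Here's a hedged sketch; the module name and plugin ID are made up:

<?php

namespace Drupal\example_migrate\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Trims and lowercases a source value.
 *
 * @MigrateProcessPlugin(
 *   id = "example_clean_string"
 * )
 */
class ExampleCleanString extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Runs once per mapped value; reusable in any field mapping.
    return strtolower(trim($value));
  }

}

Any migration's process section can then reference the plugin by its ID in a field mapping.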

How “Migrate Drupal” Makes the Process Better

There are tons of reasons to use Migrate Drupal to start your migration.

It builds a migration from your Drupal site

You might have seen above that I mentioned that Migrate Drupal provides a templated set of configurations. This is a product of some very elaborate migration detection classes. This means you will get all the configurations for:

  • content types
  • field setup
  • field configuration
  • various site settings
  • taxonomy vocabularies
  • custom blocks
  • content and their revisions
  • etc…

These will be built specifically for the site you are migrating from. This results in tons of configuration files—my first attempt created over 140 migration YAML files.

It’s hookable

“Hookable” means that it’s not just a core-only thing: it’s expandable. Contributed modules can provide their own templates for their entities and field types, allowing Migrate Drupal to move that data over, too. For example, it is completely possible (and in progress) for the Field Collection module to build in migration templates so that the migration will know how to import a field collection field. Not only that, the plugins provided by contributed modules can be used in your custom migrations as well.

No initial need for redirection of content

Here’s an interesting one: everything comes over pretty much verbatim. Node IDs, term IDs, etc. are exactly the same, and URL aliases come over by default, too. Theoretically, you could have the exact same site from D7 on D8 if you ported over the theme.

More time to do the alterations the client wants

Since you aren’t spending your time building all the initial source plugins, process plugins, destination plugins, and configurations, you now have more time to alter the migrations to fit the new content model, or to work with a spiffy new module like Paragraphs.

How-To: Start a Migration with “Migrate Drupal”

OK, so here is the technical part: a quick how-to that gets you up and going. Things you will need:

  • a Drupal 6 or 7 site
  • your brand new Drupal 8 site
  • a text editor
  • drush

1. Do a little research and install contrib modules.

We first need to find out whether the contrib modules installed on our Drupal 6/7 site are available for Drupal 8 and have a migration component. Once we identify the ones we can use, go ahead and install them in Drupal 8 so they can help you do the migration. Here are a few things to check:

Is the field structure the same as in Drupal 6/7? The entity destination plugin is a glorified way of saying $entity->create($data); $entity->save();. Given this, if you know that on Drupal 6/7 the field value was, for example…

[
  'value' => 'This is my value.',
  'format' => 'this is my format'
]

…and that it’s the same on Drupal 8, then you can rest easy. The custom field will be migrated over perfectly.

Is there a cckfield process plugin in the Drupal 8 module for the custom field type? When porting fields, there is an automated process for detecting field types. If the field type you are pulling from equates to a known set of field types handled by a cckfield migration plugin, that plugin will be used. You can find these in src/Plugin/migrate/cckfield of any given module; the Text core module has an example.

Is there a migration template for your entity or field in the Drupal 8 module? A migration template tells the Migrate Drupal module that there are other migrations that need to be created. In the case of the Text module, you will see one for the teaser length configuration. There can be multiple templates; they look like migrations themselves, but are appended to in a way that makes them specific to your site. You can find these in migration_templates in the module.

Are there source, process, or destination plugins in the Drupal 8 module? These all help you (or the Migrate Drupal module) move content from your old site to your new one. It’s very possible that some plugins aren’t wired up for automated use yet, but that doesn’t keep you from using them! Look for them in src/Plugin/migrate.

2. Install the contrib migrate modules.

First you must install the various contributed modules that help you build these configurations and test your migrations. Using your favorite build method, add the following modules to your project (these are the ones the steps below rely on):

  • Migrate Plus (migrate_plus)
  • Migrate Tools (migrate_tools)
  • Migrate Upgrade (migrate_upgrade)

NOTE: Keep in mind that the module version must match your version of Drupal core. For example, 8.x-1.x goes with Drupal 8.0.*, 8.x-2.x goes with Drupal 8.1.*, and 8.x-3.x goes with Drupal 8.2.*.

3. Set up the upgrade/migrate databases.

Be sure to give your database a key. The default is ‘upgrade’ for drush migrate-upgrade and ‘migrate’ for drush migrate-import. I personally stick with ‘migrate’ and just make sure to pass the custom key to migrate-upgrade, since I use drush migrate-import a ton more than drush migrate-upgrade.

$databases = [
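  // The Drupal 8 site's own database connection.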
  'default' => [
    'default' => [
      'database' => 'drupal',
      'username' => 'user',
      'password' => 'pass',
      'host' => 'localhost',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ],
  ],
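  // The legacy Drupal 6/7 source database; drush migrate-import
  // defaults to the 'migrate' key.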
  'migrate' => [
    'default' => [
      'database' => 'migrate',
      'username' => 'user',
      'password' => 'pass',
      'host' => 'localhost',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ],
  ],
];

4. Export the migration configuration.

First, I want to give credit to Mike Ryan for originally documenting this process. Without that, and his help in IRC, you wouldn’t have gotten this article today.

If you aren’t connecting to a live instance in your database settings, go ahead and import your Drupal 6/7 database with your preferred method. Take your pick:

  • drush sql-sync
  • drush sql-drop --database=migrate; gunzip -c /path/to/migrate.sql.gz | drush sqlc --database=migrate

Next run Migrate Upgrade to get your configuration built and stored in the Drupal 8 site.

drush migrate-upgrade --legacy-db-key=migrate --configure-only

Finally, store your configuration. I prefer to stick it in the sync directory created by Drupal 8 (or, in my case, configure it for checking into Git).

drush config-export sync -y

I’m explicit about the directory because we usually have one for local development stored under the local key as well. You can leave off the word sync if you only have a single sync directory.

5. Update your migration group with the info for the migration.

This is a quick and simple step. Find migrate_plus.migration_group.migrate_drupal_7.yml or migrate_plus.migration_group.migrate_drupal_6.yml and set the shared configuration. I usually make mine look like this:

langcode: en
status: true
dependencies: {  }
id: migrate_drupal_7
label: 'Import from Drupal 7'
description: 'Migrations originally generated from drush migrate-upgrade --configure-only'
source_type: 'Drupal 7'
module: null
shared_configuration:
  source:
    key: migrate

6. Alter the configuration.

OK, here comes the fun part. You should now have all the configurations needed to import everything. You could in fact run drush mi --all now and, in theory, get a complete migration of your old site to your new site, in the data sense.

With that said, you will most likely need to make alterations. For example, in my migration we didn’t want all of the filters migrated over. Instead, we wanted to define the filters first, and then use a map from one format to another. So I did a global find across all the migration files for:

    plugin: migration
    migration: upgrade_d7_filter_format
    source: format

And replaced it with the following:

    plugin: static_map
    source: format
    map:
      php_code: filter_null
      filtered_html: basic_html

Another change you can make is swapping out the source plugin, which lets you change what data is pulled. For example, I extended the node source plugin to add a where clause so that I would only get data created after a certain time.

namespace Drupal\wg_drupal7_migrate\Plugin\migrate\source;

use Drupal\node\Plugin\migrate\source\d7\Node as MigrateD7Node;
use Drupal\migrate\Row;

/**
 * Drupal 7 nodes source from database.
 *
 * @MigrateSource(
 *   id = "wg_d7_node",
 *   source_provider = "node"
 * )
 */
class Node extends MigrateD7Node {

  /**
   * {@inheritdoc}
   */
  public function query() {
    $query = parent::query();
    // If we pass in a timestamp... only get things created since then.
    if (isset($this->configuration['highwater'])) {
      $query->condition('n.created', $this->configuration['highwater'], '>=');
    }
    return $query;
  }

}

Lastly, you may want to change the destination configuration. By default, the migration will target a content type with the same name, but you may have renamed the content type or be merging several content types together. Simply altering…

destination:
  plugin: 'entity:node'
  default_bundle: page

…to be…

destination:
  plugin: 'entity:node'
  default_bundle: landing_page

…may be something you need to do.

Once you are done altering the migration, save the configuration files. You can use the sync directory or, if you plan on distributing the migration in a module, the config/install folder of your module.

Rebuild your site with the new configuration via your preferred method, or simply run drush config-import sync -y.

7. Migrate the data.

This is the last step. When you are ready, migrate the data: run each migration individually with --force (which runs a migration even when its dependencies haven’t run), use --execute-dependencies, or just go for the gold with drush migrate-import --all.

Caveats

So, after all of that good news, there are a few valid points to make about the limitations of this method.

IDs are verbatim due to the complexity of dependencies

This means that the migrations currently expect all the nids, tids, fids, and other IDs to be exactly what they were on Drupal 6 or 7. That causes issues when your client is building new staged content. You have three options in this case:

  1. Alter the node, node_revision, file_managed, taxonomy_term_data, users, and probably some other tables I’m missing that house the main entities entity reference fields will need, so that their keys are beyond anything your client will reach on their current production site while you are developing.
  2. Do not start adding or altering content on Drupal 8 until all migrations are done.
  3. Go through all the migrations, add migration process plugins wherever an entity is referenced, and then remove the main ID from the migration of that entity.

In my case, I went with the first option because this realization hit me kinda late. Our plan was to migrate now so our client would have something to show their stakeholders, and then migrate again later to get the latest data before going live.

There are superfluous migrations

You will inevitably find that you don’t want to keep every setting verbatim from the Drupal 6 or 7 site. This means you will have to remove that migration, remove it as a dependency from all the other migrations that depend on it, and then make sure that case is covered. I shared an example in this article where we decided to go ahead and configure new filter formats. Another example: maybe you don’t even give a crap about the dblog settings from your old Drupal site.

Final Thoughts

For NYU Wagner, we were able to save a ton of time by having the migrations built out for us as a starting point. The hours saved just on building field configurations for the majority of the content types that were staying the same made it worth it. It was also a great bridge into “How do migrations work?” Once our feature set was nailed down, we ended up with a more complete custom migration for our client in a fraction of the time it would have taken to build the migrations out one at a time. Happy migrating.

Allan Chappell

Allan brings technological know-how and grounds it with some simple country living. His interests include DevOps, animal husbandry (raising rabbits and chickens), hiking, and automated testing.

Oct 27 2016

In a previous article on this blog, I talked about why code review is a good idea, and some aspects of how to conduct them. This time I want to dig deeper into the practicalities of reviewing code, and mention a few things to watch out for.

Code review is the first line of defence against hackers and bugs. When you approve a pull request, you’re putting your name to it - taking a share of responsibility for the change.

Once bad code has got into a system, it can be difficult to remove. Trying to find problems in an existing codebase is like looking for an unknown number of needles in a haystack, but when you’re reviewing a pull request it’s more like looking in a handful of hay. The difficult part is recognising a needle when you see one. Hopefully this article will help you with that.

Code review shouldn’t be a box-ticking exercise, but it can be helpful to have a list of common issues to watch out for. As well as the important question of whether the change will actually work, the main areas to consider are:

  • Security
  • Performance
  • Accessibility
  • Maintainability

I’ll touch on these areas in more detail - I’ll be talking about Drupal and PHP in particular, but a lot of the points I’ll make are relevant to other languages and frameworks.

Security

I don’t claim to be an expert on security, and often count myself lucky that I work in what my colleague Andrew Harmel-Law calls “a creative-inventive market, not a safety-critical one”.

Having said that, there are a few common things to keep an eye out for, and developers should be aware of the OWASP top ten list of vulnerabilities. When working with Drupal, you should bear in mind the Drupal security team’s advice for writing secure code. For me, the most important points to consider are:

Does the code accept user input without proper sanitisation?

In short - don’t trust user input. The big attack vectors like XSS and SQL injection are based on malicious text strings. Drupal provides several types of text filtering - the appropriate filter depends on what you’re going to do with the data, but you should always run user input through some kind of sanitisation.
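As a quick Drupal 7-flavoured illustration (the query and variables are invented, but db_query() placeholders, check_plain(), and filter_xss() are the real APIs):

<?php

// Never concatenate user input into SQL; placeholders let the
// database layer do the escaping.
$title = $_GET['title'];
$result = db_query('SELECT nid FROM {node} WHERE title = :title', array(
  ':title' => $title,
));

// Escape plain text on output...
print check_plain($title);

// ...or strip disallowed markup when a little HTML is permitted.
print filter_xss($title, array('em', 'strong'));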

Are we storing sensitive data anywhere we shouldn’t be?

Security isn’t just about stopping bad guys getting in where they shouldn’t. Think about what kind of data you have, and what you’re doing with it. Make sure that you’re not logging people’s private data inappropriately, or passing it across the network in a way you shouldn’t. Even if the site you’re working on doesn’t have anything as sensitive as the Panama Papers, you have a legal, professional, and personal responsibility to make sure that you’re handling data properly.

Performance

When we’re considering code changes, we should always think about what impact they will have on the end user, not least in terms of how quickly a site will load. As Google recently reminded us, page load speed is vital for user engagement. Slow, bloated websites cost money, both in terms of mobile data charges and lost revenue.

Does the change break caching?

Most Drupal performance strategies will talk about the value of caching. The aim of the game is to reduce the amount of work that your web server does. Ideally, the web server won’t do any work for a page request from an anonymous user - the whole thing will be handled by a reverse proxy cache, such as Varnish. If the request needs to go to the web server, we want as much of the page as possible to be served from an object cache such as Redis or Memcached, to minimise the number of database queries needed to render the page.

Are there any unnecessary uses of $_SESSION?

Typically, reverse proxy servers like Varnish will not cache pages for authenticated users. If the browser has a session, the request won’t be served by Varnish, but by the web server.

Here’s an illustration of why this is so important. This graph shows the difference in response time on a load test environment following a deployment that included some code to create sessions. There were some other changes that impacted performance, but this was the big one. As you can see, overall response time increased six-fold, with the biggest increase in the time spent by the web server processing PHP (the blue sections on the graphs), mainly because a few lines of code creating sessions had slipped through the net.

[Image: graph showing a dramatic increase in PHP evaluation time]
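A minimal sketch of the kind of guard that avoids this, assuming Drupal 7 (the session key here is invented):

<?php

// Writing to $_SESSION for an anonymous visitor starts a session and
// sets a cookie, so Varnish will bypass the cache for every request
// that follows. Only store per-user state for real users.
if (user_is_logged_in()) {
  $_SESSION['recently_viewed'][] = $node->nid;
}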

Are there any inefficient loops?

The developers’ maxims “Don’t Repeat Yourself” and “Keep It Simple Stupid” apply to servers as well. If the server is doing work to render a page, we don’t want that work to be repeated or overly complex.
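A classic Drupal 7 example of the kind of thing to flag in review (the variable names are invented):

<?php

// Before: one node_load() and one database round trip per iteration.
foreach ($nids as $nid) {
  $node = node_load($nid);
  $titles[] = $node->title;
}

// After: a single multiple-load does the same work in one query.
foreach (node_load_multiple($nids) as $node) {
  $titles[] = $node->title;
}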

What’s the front end performance impact?

There’s no substitute for actually testing, but there are a few things you can keep an eye out for when reviewing changes. Does the change introduce any additional HTTP requests? Perhaps they could be avoided by using sprites or icon fonts. Have any images been optimised? Are you making any repeated DOM queries?

Accessibility

Even if you’re not an expert on accessibility, and don’t know ARIA roles, you can at least bear in mind a few general pointers. When it comes to testing, there’s a good checklist from the Accessibility Project, but here are some things I always try to think about when reviewing a pull request.

Will it work on a keyboard / screen reader / other input or output device?

Doing proper accessibility testing is difficult, and you may not have access to assistive technology, but a good rule of thumb is that if you can navigate using only a keyboard, it will probably work for someone using one of the myriad input devices. Testing is the only way to be certain, but here are a couple of simple things to remember when reviewing CSS changes: hover and focus should usually go together, and you should almost never use outline: none;.

Are you hiding content appropriately?

One piece of low-hanging fruit is to make sure that text is available to screen readers and other assistive technology. Any time I see display: none; in a pull request, alarm bells start ringing. It’s usually not the right way to hide content.

Maintainability

Hopefully the system you’re working on will last for a long time. People will have to work on it in the future. You should try to make life easier for those people, not least because you’ll probably be one of them.

Reinventing the wheel

Are you writing more code than you need to? It may well be that the problem you’re looking at has already been solved, and one of the great things about open source is that you’re able to recruit an army of developers and testers you may never meet. Is there already a module for that?

On the other hand, even if there is an existing module, it might not always make sense to use it. Perhaps the contributed module provides more flexibility than our project will ever need, at a performance cost. Maybe it gives us 90% of what we want, but would force us to do things in a certain way that would make it difficult to get the final 10%. Perhaps it isn’t in a very healthy state - if so, perhaps you could fix it up and contribute your fixes back to the community, as I did on a recent project.

If you’re writing a custom module to solve a very specific problem, could it be made more generic and contributed to the community? A couple of examples of this from the Capgemini team are Stomp and Route.

One of the jobs of the code reviewer is to help draw the appropriate line between the generic and the specific. If you’re reviewing custom code, think about whether there’s prior art. If the pull request includes community-contributed code, you should still review it. Don’t assume that it’s perfect, just because someone’s given it away for nothing.

Appropriate API usage

Is your team using your chosen frameworks as they were intended? If you see someone writing a custom function to solve a problem that’s already been solved, maybe you need to share a link to the API docs for the existing solution.

Introducing notices and errors

If your logs are littered with notices about undefined variables or array indexes, not only are you likely to be suffering a performance hit from the logging, but it’s much harder to separate the signal from the noise when you’re trying to investigate something.

Browser support

Remember that sometimes, it’s good to be boring. As a reviewer, one of your jobs is to stop your colleagues from getting carried away with shiny new features like ES6, or CSS variables. Tools like Can I Use are really useful in being able to check what’s going to work in the browsers that you care about.

Code smells

Sometimes, code seems wrong. As I learned from Larry Garfield’s excellent presentation on code smells at the first Drupalcon I went to, code smells are indications of things that might be a deeper problem. Rather than re-hash the points Larry made, I’d recommend reading his slides, but it is worth highlighting some of the anti-patterns he discusses.

Functions or objects that do more than one thing

A function should have a function. Not two functions, or three. If an appropriate comment or function name includes “and”, it’s a sign you should be splitting the function up.

Functions that sometimes do different things

Another bad sign is the word “or” in the comment. Functions should always do the same thing.
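To make the smell concrete, here's a hedged PHP sketch (the function names are hypothetical):

<?php

// Smell: "save or preview" - one function, two behaviours, chosen
// by a flag the caller must remember.
function mymodule_handle_order($order, $preview = FALSE) {
  if ($preview) {
    // ...render a preview...
  }
  else {
    // ...persist the order...
  }
}

// Better: two single-purpose functions that callers pick explicitly.
function mymodule_preview_order($order) { /* ...render a preview... */ }
function mymodule_save_order($order) { /* ...persist the order... */ }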

Excessive complexity

Long functions are usually a sign that you might want to think about refactoring. They tend to be an indicator that the code is more complex than it needs to be. The level of complexity can be measured, but you don’t need a tool to tell you that if a function doesn’t fit on a screen, it’ll be difficult to debug.

Not being testable

Even if functions are simple enough to write tests for, do they depend on a whole system? In other words, can they be genuinely unit tested?

Lack of documentation

There’s more to be said on the subject of code comments than I can go into here, but suffice to say code should have useful, meaningful comments to help future maintainers understand it.

Tight coupling

Modules should be modular. If two parts of a system need to interact, they should have a clearly defined and documented interface.

Impurity

Side effects and global variables should generally be avoided.

Sensible naming

Is the purpose of a function or variable obvious from the name? I don’t want to rehash old jokes, but naming things is difficult, and it is important.

Commented-out code

Why would you comment out lines of code? If you don’t need it, delete it. The beauty of version control is that you can go back in time to see what code used to be there. As long as you write a good commit message, it’ll be easy enough to find. If you think that you might need it later, put it behind a feature toggle so that the functionality can be enabled without a code release.

Specificity

In CSS, IDs and !important are the big code smells for me. They’re a bad sign that a specificity arms race has begun. Even if you aren’t going to go all the way with a system like BEM or SMACSS, it’s a good idea to keep specificity as low as possible. The excellent articles on CSS specificity by Harry Roberts and Chris Coyier are good starting points for learning more.

Standards

It’s important to follow coding standards. The point of this isn’t to get some imaginary Scout badge - code that follows standards is easier to read, which makes it easier to understand, and by extension easier to maintain. In addition, if you have your IDE set up right, it can warn you of possible problems, but those warnings will only be manageable if you keep your code clean.

Deployability

Will your changes be available in environments built by Continuous Integration? Do you need to set default values of variables which may need overriding for different environments? Just as your functions should be testable, so should your configuration changes. As far as possible, aim to make everything repeatable and automatable - if a release needs any manual changes it’s a sign that your team may need to be thinking with more of a DevOps mindset.

Keep Your Eyes On The Prize

With all this talk of coding style and standards, don’t get distracted by trivialities. It’s worth caring about things like whitespace and variable naming, but it’s much more important to think about whether the code actually does what it’s supposed to. The trouble is that our eyes tend to fixate on that sort of thing, and it causes unnecessary cognitive load.

Pre-commit hooks can help to catch coding standards violations so that reviewers don’t need to waste their time commenting on them. If you’re on a big project, it will almost certainly be worth investing some time in integrating your CI server and your code review tool, and automating checks for issues like code style, unit tests, mess detection - in short, all the things that a computer is better at spotting than humans are.

Does the code actually solve the problem you want it to? Rather than just looking at the code, spend a couple of minutes reading the ticket that it is associated with - has the developer understood the requirements properly? Have they approached the issue appropriately? If you’re not sure about the change, check out the branch locally and test it in your development environment.

Even if there’s nothing wrong with the suggested change, maybe there’s a better way of doing it. The whole point of code review is to share the benefit of the team’s various experiences, get extra eyes on the problem, and hopefully make the end product better.

I hope that this has been useful for you, and if there’s anything you think I’ve missed, please let me know via the comments.

Oct 26 2016

One of my biggest pet peeves with Drupal 7 is that there’s no out-of-the-box way to create empty menu link titles, which makes it difficult to build stylized links such as icons or background images. After many frustrating sessions I began to think it was impossible, and I couldn’t find an existing solution that did exactly what I needed, so I finally sat down to make it happen. It turns out empty menu link titles are absolutely possible with just one little snippet. Have no fear, theme_menu_link to the rescue!

Using <none> to Create Drupal 7 Empty Menu Link Titles


First of all, use the snippet provided below and enter <none> as the link title of any menu item you want rendered empty. In your theme’s template.php file, add:

/**
 * Implements theme_menu_link().
 *
 * @link https://api.drupal.org/api/drupal/includes!menu.inc/function/theme_menu_link/7.x
 */
function your_theme_menu_link($vars) {
  $element = $vars['element'];
  $sub_menu = '';

  if ($element['#below']) {
    $sub_menu = drupal_render($element['#below']);
  }

  if ('<none>' === $element['#title']) {
    $element['#title'] = '';
  }

  $output = l($element['#title'], $element['#href'], $element['#localized_options']);

  return '<li' . drupal_attributes($element['#attributes']) . '>' . $output . $sub_menu . "</li>\n";
}

theme_menu_link() returns the HTML for menu and submenu links, so overriding it lets you alter the menu’s output. In the snippet above, we check whether the menu link title has been set to <none>; if it has, we remove the title text while leaving the link intact. Be sure to check your work before moving on, because the simplest mistakes early on can cause you grief later.

Finally, if you’ve added this snippet and aren’t seeing any change, it’s probably because Drupal hasn’t registered the new hook in your code yet. Clear your cache so Drupal picks up your changes; most of the time this fixes the issue quickly, rather than you spending hours debugging something that isn’t actually broken.

You can also use this hook to alter other menu link attributes. For instance, if you wanted to avoid using the Menu attributes module you’re able to use this hook to add or remove classes.
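For example, here's a hedged sketch of the same override adding classes (the class names are arbitrary, and the submenu handling from the full snippet above is omitted for brevity):

function your_theme_menu_link($vars) {
  $element = $vars['element'];

  // Add a base class to every link, and flag the front-page link.
  $element['#attributes']['class'][] = 'menu-link';
  if ($element['#href'] === '<front>') {
    $element['#attributes']['class'][] = 'menu-link--home';
  }

  $output = l($element['#title'], $element['#href'], $element['#localized_options']);

  return '<li' . drupal_attributes($element['#attributes']) . '>' . $output . "</li>\n";
}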

For more information on theme_menu_link, see https://api.drupal.org/api/drupal/includes!menu.inc/function/theme_menu_link/7.x.

Is there a Drupal module that can handle this?

Of course there is! Icon API accomplishes this, along with some extra features you may find useful. I’ve never used it personally, though, and I’d rather stay away from modules that don’t have production releases. If you have created, or know of, a module that can handle this, shoot me a line in the comments below and I’ll gladly take a look at it and potentially include it in this post as a footnote. In conclusion, don’t ever believe something is impossible; you’re often just a few days of banging your head against the wall away from a breakthrough!
