Jun 19 2019

Earlier this month we launched a redesign of the Commonwealth Fund’s Health System Data Center, a platform for exploring state health system data through custom tables, graphs and maps. With interactive visualizations covering dozens of topics, roughly 100 indicators, and tens of thousands of individual metrics, the platform helps make underlying data actionable for advocates, policy makers, and journalists tackling healthcare system issues all over the country.

We used Drupal 8 to build the data center backend, and we used React and Highcharts to render its interactive charts and graphs. Drupal 8’s flexible entity storage system made it a perfect fit for housing the data, and its capabilities for leveraging third-party APIs and JavaScript libraries made integrating with React and Highcharts far simpler than it would have been on alternative platforms.

We’re all incredibly excited to see the Health System Data Center live. For us at Aten, this is the latest in a series of project launches dealing with data visualization. Along the way, we’ve been working on a collection of tools specifically tailored to the unique needs of data-intensive projects. Here are six Drupal 8 modules that help solve specific challenges when working with data. (Note that some of these are sandbox modules. While sandbox modules don’t have official releases, you can still download the code, try them out, and of course, get involved in the issue queue!)

Six Drupal Modules for Working with Datasets

Datasets

We’ve worked on a lot of data projects that use a common architecture. Typically, projects include a collection of Datasets, each of which references a variable number of specific Indicators and Metrics. This module provides custom entities and related functionality for quickly deploying this common architecture.

JS Entity

When embedding multiple instances of a Javascript application on a page (in this specific case, a React app), we often need a way to very quickly pass data to the DOM. This module provides a configurable approach for defining which fields should be passed directly to Drupal’s JavaScript API for each specific view mode. It also offers a number of configuration options, including the ability to rename properties (or field names) to match what your application is looking for.
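To illustrate the underlying mechanics, here is a hand-rolled sketch of the pattern JS Entity automates; the module name (my_module), settings key (myApp), and field (field_year) are all hypothetical:

/**
 * Implements hook_ENTITY_TYPE_view().
 */
function my_module_node_view(array &$build, \Drupal\Core\Entity\EntityInterface $entity, \Drupal\Core\Entity\Display\EntityViewDisplayInterface $display, $view_mode) {
  if ($view_mode === 'teaser') {
    // Push selected field values into drupalSettings, renaming the field
    // to the property name the JavaScript application expects.
    $build['#attached']['drupalSettings']['myApp']['nodes'][$entity->id()] = [
      'title' => $entity->label(),
      'year' => $entity->get('field_year')->value,
    ];
  }
}

JS Entity handles this wiring through configuration per view mode, with no custom code required.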

JS Component

Here at Aten, data projects often involve dynamic visualizations built as JavaScript applications (specifically with React or Vue) that are both embedded within a page rendered by Drupal and leverage data stored in Drupal’s entity system. This module provides an easy way for developers to define JavaScript apps entirely in YAML configuration files, which are exposed in Drupal automatically as blocks. Since they are ultimately just blocks, defined applications can be added to pages by any of the typical means.
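A component definition might look something like the following. To be clear, this is a loose sketch: the key names below are illustrative assumptions, not the module’s documented schema.

# Hypothetical structure; consult the module's documentation for the
# real schema.
indicator_explorer:
  label: 'Indicator Explorer'
  scripts:
    - dist/indicator-explorer.js
  styles:
    - dist/indicator-explorer.css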

MarkJS Search

We often need a way for users to quickly search through a lengthy list of indicators. This module provides fast, responsive highlighting and filtering for search input by leveraging the third-party Mark.js JavaScript library.

Entity Importer

Site owners need a way to keep data accurate, relevant and up-to-date. This module provides a drag-and-drop interface for Drupal’s migrate functionality, making it easy to upload datasets as a series of CSV files. (Learn more from an earlier post: Entity Import: A User Interface for Drupal 8 Migrations.)

Entity Extra Field

When working with JavaScript applications exposed in Drupal as custom blocks, we often want a way to push those blocks directly into the node view page. This module provides a way for site builders to define Extra Fields on entities, which can be blocks, views, or tokens. Extra Fields can be placed and rearranged like any other entity field. (Entity Extra Fields module leverages Drupal’s “Extra Field” system. To learn more about Extra Fields, read Placing Components with Drupal's Extra Fields).

Let’s Talk

If you’re considering a data project for your organization and having trouble getting started, we’d love to help – whether that means talking through long-term goals, responding to a formal RFP, or anything in between. Get in touch and let’s talk about your data.

Apr 17 2019

One of the challenges front-end developers face is adding new components to entity templates that exist outside of what is defined in the Field API; or in other words, adding dynamic components that aren’t really fields. Often this can be easily done by throwing the custom markup in a .html.twig file and calling it a day. But if you’re working on something that needs to be reusable, or if you’re collaborating with a site builder who doesn’t write code, the custom template route can be limiting.

Enter hook_entity_extra_field_info().

Content Moderation: A “Pseudo-Field” in Core

Drupal’s documentation says this hook “exposes ‘pseudo-field’ components on content entities.” You can see this hook in action with the Content Moderation module in core. All moderation-enabled entities can have an option box, placed via that entity’s Manage Display page, that contains a widget to update an entity’s moderation state in place rather than clicking through to the edit page.

Drupal's extra fields interface

The moderation option isn’t a real field. Rather, it’s what Drupal calls a “Pseudo Field.” But by using hook_entity_extra_field_info(), you wouldn’t know the difference. The moderation option can be moved around and configured for various display modes, just like “real” fields.

Using hook_entity_extra_field_info() in a Custom Module

On a recent project, we needed to integrate a newer commenting service called Coral Talk. After searching, we learned that no module existed to integrate this service with Drupal. This presented a perfect use case for an Extra Field, and needed only two hooks for the bulk of the work:

/**
 * Implements hook_entity_extra_field_info().
 */
function coral_talk_entity_extra_field_info() {
  // Load commenting configuration.
  $config = \Drupal::config('coral_talk.settings');
  $extra = [];

  // Loop over the content types configured to have comments
  // and get their bundle name.
  foreach ($config->get('content_types') as $bundle) {
    if ($bundle) {
      // Add info for Extra Field to nodes only, specific to configured
      // content types. This determines what shows on Manage Display.
      $extra['node'][$bundle]['display']['coral_talk_comments'] = [
        'label' => t('Coral Talk Comments'),
        'description' => t('Place commenting on the page.'),
        'weight' => 100,
        'visible' => TRUE,
      ];
    }
  }

  // Return our new extra field.
  return $extra;
}

After a cache clear, this new field will appear on the configured content types’ Manage Display pages and can be placed alongside each content type’s other fields. Now that the field is defined, it needs to describe what should be rendered to the page. That’s handled by hook_ENTITY_TYPE_view().

/**
 * Implements hook_ENTITY_TYPE_view().
 */
function coral_talk_node_view(
  array &$build,
  \Drupal\Core\Entity\EntityInterface $entity,
  \Drupal\Core\Entity\Display\EntityViewDisplayInterface $display,
  $view_mode
) {
  // 1. Check to see if our new field should be rendered on the entity display.
  // 2. Determine whether the user has permission to add comments.
  $condition = (
    $display->getComponent('coral_talk_comments') &&
    \Drupal::currentUser()->hasPermission('create coral comment')
  );

  if ($condition) {
    $config = \Drupal::config('coral_talk.settings');

    // Add the new field to the $build array with a call to a custom theme
    // hook to render the comments. Pass necessary config into comment
    // settings.
    $build['coral_talk_comments'] = [
      '#theme' => 'coral_talk_comments',
      '#domain' => $config->get('domain') ?? '',
    ];
  }
}
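One piece not shown above: the 'coral_talk_comments' theme hook referenced in the render array has to be registered. A minimal sketch of that registration (the markup itself would live in a coral-talk-comments.html.twig template):

/**
 * Implements hook_theme().
 */
function coral_talk_theme($existing, $type, $theme, $path) {
  return [
    // Registers the theme hook used in the render array above, along
    // with the 'domain' variable it receives.
    'coral_talk_comments' => [
      'variables' => [
        'domain' => '',
      ],
    ],
  ];
}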

After another cache clear, we’ll see our comments rendered on our content types in whichever view modes they’re enabled on. This moves the setup of comments out of code and into a place that’s more accessible and flexible for various users.

This approach is great for simple scenarios. One drawback, however, is that it’s not possible to define any custom configuration options for these pseudo fields. Each extra field is identical, and any configuration has to be hard coded in these hooks. This presents challenges for site builders who might want to configure comments differently per content type. Fortunately, there is a solution in contrib that changes how Extra Fields are defined and allows developers to add configuration to each field. In the next post, we’ll explore the Extra Field Settings Provider module.

Apr 04 2019

With all the excitement about decoupled Drupal over the past several years, I wanted to take a moment to articulate a few specific factors that make headless a good approach for a project – as well as a few that don’t. Quick disclaimer: this is definitely an oversimplification of an otherwise complex subject, and is based entirely on our experience here at Aten. Others will draw different conclusions, and that’s great. In fact, the diversity of perspectives and conclusions about use cases for headless underscores just how incredibly flexible Drupal is. So here’s our take.

First, What is Decoupled?

I’ll keep this real short: decoupled (or headless) Drupal basically means using Drupal for backend content management, and using a separate framework (Angular, React, etc.) for delivering the front-end experience. It completely decouples content presentation (the head) from content management (the body), thus “headless”. There are tons of resources about this already, and I couldn’t come close to covering it as well as others already have. See Related Reading at the end of this post for more info.

Decoupled and Progressively Decoupled

For the purpose of this post, Decoupled Drupal means any Drupal backend that uses a separate technology stack for the front-end. Again, there’s lots of great material on the difference between “decoupled” and “progressively decoupled”. In this post, just pretend they mean the same thing. You can definitely build a decoupled app on top of your traditional Drupal stack, and there are often good reasons to do exactly that.

Why Decoupled?

Decoupled Drupal provides massive flexibility for designing and developing websites, web apps, native apps and other digital products. With the decoupled approach, designers and front-end developers can conspire to build whatever experience they wish, virtually without limitation. It’s great for progressive web apps, where animations, screen transitions and interactivity are particularly important. Decoupled is all but necessary for native apps, where content is typically managed on a centralized server and published via an API to instances of the app running on people’s devices. In recent years, Decoupled Drupal has gained popularity even for more “traditional” websites, again primarily because of the flexibility it provides.

Pros and Cons

I’m not going to list pros and cons per se. Other articles do that. I’m more interested in looking at the specific reasons we’ve chosen to leverage a decoupled approach for some projects, and the reasons we’ve chosen not to for others. I’m going to share our perspective about when to go decoupled, and when not to go decoupled.

When to Decouple

Here are a few key questions we typically ask when evaluating a project as a fit for Decoupled Drupal:

  • Do you have separate backend and front-end development resources? Because decoupled means building a completely separate backend (Drupal) from front-end (Angular, React, etc.), you will need a team capable of building and maintaining both. Whether it’s in-house, freelance or agency support, Decoupled Drupal usually requires two different development teams collaborating to be successful. Organizations with both front-end devs and backend devs typically check “yes” on this one. Organizations with a few generalist developers, site builders or web admin folks should pause and think seriously about whether or not they have the right people in place to support a decoupled project.
  • Are you building a native app and want to use Drupal to manage your content, data, users, etc.? If yes, we’re almost certainly talking decoupled. See “Why Decoupled?” above.
  • Do you envision publishing content or data across multiple products or platforms? Example: we recently built an education product that serves both teachers and their early childhood students. We needed a classroom management app for the former, and an “activity explorer” with links to interactive games for the latter. Multiple products pulling from a single backend is often a good fit for decoupled.
  • Is interactivity itself a primary concern? There are plenty of cases where the traditional web experience – click a link, load a new page – just doesn’t do the content justice. Interactive data visualizations and maps are great examples. If your digital project requires app-like interaction with transitions, animations, and complex user flows, you will likely benefit from an expressive front-end framework like Ember or Angular. In those cases, decoupled is often a great fit.
  • Does working around Drupal’s rich feature set and interface create more work in the long run? Drupal ships with a ton of built-in features for managing and viewing content: node edit screens, tabs, subtabs, node view pages, admin screens, and on and on. Sometimes you just don’t need all of that. For some applications, working around Drupal’s default screens is more work than building something custom. In some cases, you may want to take advantage of Drupal’s flexible content model to store content, but need a completely different interface for adding and managing that content. Consider evites as a hypothetical example. The underlying content structure could map nicely to nodes or custom entities in Drupal. The process for creating an invitation, customizing it, adding recipients, previewing and sending, however, is something else altogether. Decoupled Drupal would allow you to build a front-end experience (customizing your invitation) exactly as you need it, while storing the actual content (the invite) and handling business logic (saving, sending, etc.) in Drupal.
  • Do you want the hottest technology? Sometimes it’s important to be at the cutting edge. We respect that 100%. Decoupled Drupal provides incredible flexibility and empowers teams to build rich, beautiful experiences with very few limitations. Further, it allows (virtually requires) your teams – both front-end and backend – to work with the very latest development tools and frameworks.

When Not to Decouple

Conversely, here are a few key questions we ask that might rule out Decoupled Drupal for a project:

  • First, take another look at the list above. If you answered “no” to all of them, Decoupled Drupal might not be a great fit for your project. Still not sure? Great, carry on...
  • Are you hoping to “turn on” features in Drupal and use them more-or-less as-is? One of the major draws for Drupal is the massive ecosystem of free modules available from the open source community. Need Disqus comments on your blog? Simply install the Disqus Drupal module and turn it on. How about a Google Map on your contact page? Check out the Simple Google Maps module. Want to make a slideshow from a list of images? No problem: there are modules for that, too. With Decoupled Drupal, the ability to simply “turn on” front-end functionality goes away, since Drupal is no longer directly managing your website front-end.
  • Do your front-end requirements match Drupal’s front-end capabilities out-of-the-box? We work with a number of research and publishing organizations whose design goals closely align with Drupal’s capabilities. I’m hard pressed to recommend a decoupled approach in those cases, absent some other strong driver (see above).

Related Reading

Mar 26 2019

When it comes to testing in software development, the range of options is huge. From unit testing on the backend through browser compatibility testing on the front end, there are a variety of testing approaches that will save you, your clients, and their audiences time and headaches. Katalon Recorder is a quick, simple way to get started with testing and to see the value that automated tests provide within a matter of minutes.

What is Katalon Recorder?

Katalon Recorder (KR) is a Selenium-driven browser plugin for Chrome and Firefox that lets you control your browser with simple commands instead of actual clicking, typing, tabbing, and scrolling. Put simply, KR can interact with your web application and report back when things don’t go as planned. Katalon Recorder aims to emulate human actions such as clicking, typing, and verifying the status of onscreen content – and as such works very well as an automated replacement for human testing.

How does it work?

With Katalon Recorder, you can record your browser actions – such as clicking through your menu items – and then play those actions back as automated commands. You can also handcraft a wide variety of commands that assert the existence of HTML elements or copy, among a host of other things. The successful playback of well-crafted tests indicates that your menus, content, and HTML structure haven't changed — in other words, your application is behaving as expected.
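Commands follow the familiar Selenium IDE command / target / value format. A hand-written spot-check might look like this (the selectors and copy are illustrative):

open       | /about                  |
verifyText | css=h1                  | About Us
click      | css=nav a[href='/team'] |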

Katalon Recorder

The Basics: Record and Playback

After clicking Record, KR will bring your browser into focus, then log all of your interactions as individual commands. Once Stop is clicked, those commands can be played back, saved to a file, shared with others to play in their browsers, or modified to fine-tune functionality. With Katalon Recorder’s Record feature, setting up initial tests that mirror human-driven clickthroughs takes just moments, and those tests can then be played back by anyone anywhere — including non-technical staff or even client teams.

Katalon Recorder

Creating Complex, Rigorous Tests

Katalon Recorder allows you to organize one or more individual commands as Test Cases, and one or more Test Cases as Test Suites. Complicated tests can be created by chaining together several Test Suites. You could, for example, write tests that log a test user in, search for a product by SKU, click into the results, add the product to their cart, navigate to the cart and assert the product is there, then complete the purchase using test financial data. All of those actions except asserting the product is in the cart can be recorded from your interactions. That means that in many cases, the time it takes you to perform an action on your website is roughly the time it takes to create the automated test with the recorder.

Flexibility Via Hand-Crafted Commands

In some cases the rigidity of recorded actions is a drawback. If, for example, you want to search for the tag Home Appliances and then click into the product Test Toaster, but you aren’t sure where in the search results that item will be, a recorded action informed by precise HTML structure might fall short. In those cases, you can use a combination of CSS and XPath selectors to find and interact with your elements regardless of where exactly in the DOM they exist.

Katalon Recorder
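In text form, the Test Toaster scenario might use commands along these lines (the selectors are illustrative):

type         | css=input#search                            | Home Appliances
clickAndWait | css=button[type='submit']                   |
clickAndWait | xpath=//a[contains(text(), 'Test Toaster')] |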

Storing Variables with JavaScript

Sometimes a human tester needs to remember something, like the name or unique ID of a piece of content, in order to proceed with their test. Let’s say, for example, you’re testing a Drupal site wherein you first want to create a new Person node, then associate it via an entity reference field with a Group node on that node’s creation form. Using Katalon Recorder’s storeEval command, you can use JavaScript to accomplish that by saving a variable.

Once you have saved the form for your Person node, you’ll get redirected to something like http://mysite.dev/node/887 where 887 is the node ID for your content. The storeEval command lets you save the ID number to a variable that you can access later in your tests. See the image below:

Katalon Recorder
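In text form, that might look like the following; the JavaScript expression and the autocomplete field locator are illustrative:

storeEval | window.location.pathname.split('/').pop() | node_id

Later commands can then reference the stored value as ${node_id}, for example when filling the Group node’s entity reference field:

type | id=edit-field-members-0-target-id | Person Name (${node_id})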

Katalon Recorder covers a lot of bases. Whether you're using just the Record option for building basic spot-checks, or combining advanced features to create rigorous and complex functional testing, it's surprising what can be achieved in so little time — especially given KR's gentle learning curve. While the examples above are exceedingly simple, in some recent projects we’ve combined thousands of commands across dozens of test cases that provide thorough regression testing and automated QA — and it all started with the click of a Record button.

Mar 06 2019

I recently wrote an article about Flexible Authoring with Structured Content. In this follow-up post, I'm going to dig into more detail on one specific approach we've been working on: Entity Reference with Layout.

If you use the Paragraphs module and wish there was a way to more easily control the layout of referenced paragraphs on a particular node, check out Entity Reference with Layout, a new module for Drupal 8. Entity Reference with Layout combines structured content (a la Paragraphs) with expressive layout control (a la Layout Discovery in core). Now you can quickly add new sections without leaving the content edit screen, choose from available layouts, add text or media to specific regions, drag them around, edit them, delete them, add more, and so on. The experience is easy-to-use, fast, and expressive.

Background

Structured Content FTW.

We’ve been working with Drupal for a very long time: since version 4.7, way back in 2006. We love the way Drupal handles structured content – something that has only improved over the years with support for important concepts like “fieldable entities” and “entity references.” Integration with flexible rendering systems like Views, and in more recent years the ability to quickly expose content to services for headless, decoupled applications, relies largely on structured content. With structured content, editors can “Create Once, Publish Everywhere (COPE),” a widely-recognized need for modern web authoring. Drupal’s support for structured content is an important advantage.

Drupal, We Have a Problem.

But Drupal’s interface for creating structured content – the part that editors use daily, often many times per day – is lagging. In the era of Squarespace, Wix, and Gutenberg, Drupal’s clunky authoring interface leaves much to be desired and is quickly becoming a disadvantage.

Complex form for adding different types of content called paragraphs

Paragraphs to the Rescue. Sort Of.

There have been a number of really interesting steps forward for Drupal’s authoring interface as of late. Layout Builder is powerful and flexible and soon to be a full-fledged part of Drupal core. Gutenberg, an expressive authoring experience first developed for WordPress, now offers a Drupal version. The Paragraphs module solves similar problems, providing a way for authors to create structured content that is incredibly flexible.

We started using Paragraphs years ago, soon after it was first introduced in Drupal 7. We liked the way it combined structure (Paragraphs are fieldable entities) with flexibility (Paragraphs can be dragged up and down and reordered). We used nested Paragraphs to give authors more control over layout. The approach was promising; it seemed flexible, powerful, and easy-to-use.

For more complex applications, though, nested Paragraphs proved anything but easy-to-use. They could be profoundly complicated. Managing intricate layouts with nested Paragraphs was downright difficult.

If only there was a way to have it both ways: Drupal Paragraphs plus easy layout control. Well of course, now there is.

Introducing Entity Reference with Layout

We created Entity Reference with Layout to give authors an expressive environment for writing structured content. As the name suggests, Entity Reference with Layout is an entity reference field type that adds an important element to the equation: layout. It leverages the layout discovery system in Drupal Core, allowing editors to quickly add new paragraphs into specific regions. The authoring experience is expressive and easy, with drag-and-drop layout controls.

Give Entity Reference with Layout a Whirl

Entity Reference with Layout is available on Drupal.org. Installation is quick and easy (we recommend composer, but you can also just download the files). The module is experimental and still under active development; check it out and let us know what you think. We’d love to hear feedback, bug reports, or feature requests in the issue queue. And if you think your organization’s web editors might benefit from this approach and want to learn more, drop us a line and we’ll follow up!

Feb 27 2019

If you’ve ever wished there was a way to easily import CSV data into your Drupal website then this post is for you. With the new Entity Import module, website admins can easily add and configure new importers that map CSV source files to Drupal 8 entities. Entity Import adds a user interface for Drupal 8’s core migration functionality. It lets you add importers for any entity in the system, map source data to entity fields, configure process pipelines, and of course, run the import.

Why Another Drupal Import Module?

In Drupal 8 there are already several good options for importing and migrating content using the admin interface.

The Feeds module is one approach for importing content, with a great user interface and huge community of support. Feeds has been around for 7+ years, has been downloaded 1.8 million times, and, according to its Drupal project page, is being used on more than 100,000 websites.

The Migrate module, a part of core for Drupal 8, is an incredibly powerful framework for migrating content into Drupal. With Migrate, developers can create sophisticated migrations and leverage powerful tools like rollback, Drush commands, and countless process plugins for incredibly complex pipelines.

We use both Migrate and Feeds extensively. (Check out this recent post from Joel about getting started with Migrate.) Recently, though, we needed something slightly different: First, we wanted to provide Drupal admins with a user interface for configuring complex migrations with dependencies. Second, we needed Drupal admins to be able to easily run new imports using their configured migrations by simply uploading a CSV. Essentially, we wanted to give Drupal admins an easy-to-use control panel built on top of Drupal core’s migrate system.

My First Use Case: Complex Data Visualizations

Here at Aten, we do a lot of work helping clients with effective data visualizations. Websites like the Guttmacher Institute’s data center and the Commonwealth Fund’s 2018 interactive scorecard are great examples. When I started working on another data-heavy project a few months ago, I needed to build a system for importing complex datasets for dynamic visualizations. Drupal’s core migration system was more than up for the task, but it lacks a simple UI for admins. So I set about building one, and created Entity Import.

Getting Started with Entity Import

Download and Install

Entity Import is a Drupal module and can be downloaded at https://Drupal.org/project/entity_import. Alternatively, you can install Entity Import with composer:

composer require drupal/entity_import

Entity Import is built directly on top of Drupal’s Migrate module, and no other modules are required.

Importer Configuration

Adding New Importers

Once the Entity Import module is installed, go to Admin > System > Entity Import to add new importers. Click “Add Importer.”

For each importer, you will need to provide:

  • Name - In my case I used “Dataset,” “Indicators,” and “Topics.” Generally speaking, I would name your importers after whatever types of data you are importing.
  • Description - An optional description field to help your Drupal administrators.
  • Page - Toggle this checkbox if you want to create an admin page for your importer. Your administrators will use the importer page to upload their CSV files.
  • Source Plugin - At the time of this writing, Entity Import provides just one source plugin: CSV. The system is fully pluggable, and I hope to add others – like XML or even direct database connections – in the future.
  • Plugin Options - Once you choose your source Plugin (i.e. CSV) you’ll have plugin-specific configuration options. For CSVs, you can choose whether or not to include a header row, as well as whether or not to support multiple file uploads.
  • Entity Type - Specify which entity type your data should be imported into.
  • Bundles - Once you pick an entity type, you can choose one or more bundles that your data can be imported into.
Field Mappings

Configuring Importers

Each Importer you add will have its own configuration options, available under Admin > Configuration > System > Entity Importer. Configuration options include:

  • Mappings - Add and configure mappings to map source data to the appropriate destination fields on your entity bundles. (Important side note: when creating mappings, the human readable name can be whatever you wish; the machine name, however, needs to match the column header from your source data.)
  • Process Plugins - When you add new mappings you can specify one or more process plugins directly in the admin interface. This is where things get interesting – and really powerful. Drupal 8 provides a number of process plugins for running transformations on data to be migrated (read more about process plugins in Joel’s recent migrations post). With Entity Import, you can specify one or more process plugins and drag them into whatever order you wish, building process pipelines as complicated (or simple) as you need. Your admins can even manage dependencies on other imports; for example, mapping category IDs for an article importer to categories from a category importer. Again, no coding necessary. Entity Import provides a user interface for leveraging some of Migrate’s most powerful functionality.
Import Screen

Importing the Data

Once you’ve added and configured importers, go to Admin > Content > [Importer Name] to run a new import. You’ll see three tabs, as follows:

  • Import - This is the main import screen with upload fields to upload your CSV(s). If your import has dependencies specified in any of its process plugins, the import(s) it depends on will show up on this screen as well. (TODO: Currently, the interface for managing multiple, interdependent imports is a little complicated. I’d like to make it easier for users to visualize the status of all interdependent migrations at-a-glance.)
  • Actions - After you run an import, you can rollback using the options on the actions tab. (TODO: I’d like to add other actions as well; for example, the ability to change or reset import statuses.)
  • Logs - Migration logs are listed for each import, allowing admins to quickly see if there are any errors. You can quickly filter logs by message type (i.e. “notice” or “error”).
Import Actions

My Second Use Case: Importing Courses for a University

Soon after wrapping up a working prototype for the data visualization project I mentioned above, I was tasked with another project. A prominent university client needed to quickly import an entire course catalog into Drupal. Beyond running a single import, this particular organization needed the ability to upload new CSVs and update the catalog at any point in the future. The use case was a perfect match for Entity Import. I installed the module, spent a few minutes adding and configuring the course importer, and presto!

Next Steps for Entity Import

Writing clean, reusable code packaged as modules is a huge benefit for Drupal development workflows. Even better, Drupal.org module pages provide really great tools for collaboration with features like issue tracking, release management, and version control built into the interface. I have a few TODOs that I’ll be posting as issues in the days ahead, and I am excited to see if Entity Import fills a need for others like it has for me.

If you run a data- or content-intensive website and have trouble keeping everything up to date, Entity Import might be just the ticket. We’d love to give you a quick demo or talk through how this approach might help – just give us a shout and we’ll follow up!

Feb 15 2019

As a writer or editor for your organization’s website, you should be able to quickly write articles or build pages that are collections of smaller elements. You should be able to write some text, add a slideshow, write some more text, perhaps list a few tweets, and finish things off with a list of related content. Or maybe you paste in a pull quote, add a couple full-width images with captions, or even put together an interactive timeline. Your content management system should let you do all that, easily. But chances are, it won’t be with the WYSIWYG you’re used to right now.

What You See Isn’t What You Get

WYSIWYG editors still fall short when it comes to doing much more than simple formatting and embedding a few images. Anything beyond that, and the underlying technology has to leverage some kind of proprietary “smart code” or “token” and do some find-and-replace magic to make slideshows, media players, or other more complex blocks of content show up correctly for the editor. These tokens aren’t typically based on any adopted standard. It’s just a custom, arbitrary formatting shortcut that programmers decided to use that tells the CMS, “Replace this snippet with that other piece of content.”

If it sounds complicated, that’s because it is. It’s hard to get right. It’s hard to build in a sustainable way. It’s hard – impossible, really – to make it look right and work well for authors. It’s REALLY hard to migrate.

Here’s an example: In earlier versions of Drupal, Node Embed was a way to embed one piece of content (say, an image) inside the body of another (like an article). The “smart code” [[nid: 123]] tells Drupal, “replace this with the piece of content that has an ID of 123.” It worked, but the authoring experience was janky. And it really wasn’t structured content, since your markup would end up littered with these proprietary snippets referencing objects in the CMS. Somewhere down the line, someone would inevitably have to migrate all of that and write regular expressions and processors to parse it back into a sane structure for the new system. That gets expensive.

Simple form with input, select, and textarea

Fieldable Entities and Structured Content

The thing that lets you, the web editor, write content that is both manageable and flexible is breaking your content into discrete, single-purpose fields. In Drupal it’s called “fieldable entities.” You don’t dump everything into the WYSIWYG (which would be hard to do anyway). Instead, there’s a field to add the author’s name, a field for attaching images, and a field for the text (that last part gets the WYSIWYG). More generally, this serves an important concept called “structured content.” Content is stored in sensible chunks. It adapts to a variety of contexts, like a mobile app or a news aggregator or (of course) your website. In the case of your website, your CMS pushes all those fields through a template, and voila, the page is published beautifully and your readers eat it up.

Complex form with input, select, textarea, and multiple fields nested inside

What If My Fields Have Fields?

Here’s where it gets interesting. Back to our earlier example: let’s say your article has a couple slideshows. Each slideshow has a few images, captions, and links. Suddenly your discrete, single-purpose field (slideshow) has its own fields (images, captions, links). And, you may want to add a slideshow virtually anywhere in the flow of the page. Perhaps the page goes text, slideshow, text. Or maybe it’s text, slideshow, text, some tweets, another slideshow. And now you want to swap some things around. Again, you should be able to do all that, easily.

Drupal Paragraphs

Enter the Drupal Paragraphs module. Paragraphs takes the approach of creating content bundles, or collections of fields, that can be mixed and matched on a given page in virtually countless configurations. They’re called “Paragraphs” because they are flexible, structured building blocks for pages. The name is a little misleading; in fact, they are 100% configurable groups of fields that can be added, edited, and rearranged however you want on a given article. You can have paragraph types for slideshows, pull quotes, tweets, lists of related content, or virtually anything else. Paragraphs are building blocks: smaller elements that can be combined to build a page. And like I said earlier, you should be able to easily make pages from collections of smaller elements.

Complex form for adding different types of content called paragraphs

Drupal Paragraphs is Sort of Easy

We use Drupal Paragraphs whenever a particular type of content (a news article, blog post, etc.) is really built up of smaller, interchangeable collections of other fields (text, slideshows, videos, etc.). Drupal Paragraphs are flexible and organized. They let authors create whatever kinds of pages they want, while storing content in a way that is structured and adaptable. Migrations with Paragraphs are generally easier than migrations with special, proprietary embed codes. Breaking content types into Paragraphs gives authors the flexibility they need, without sacrificing structure. You don’t end up with a bunch of garbage pasted into an open WYSIWYG field.

So what’s the catch? Well, the interface isn’t awesome. Using Drupal Paragraphs can add a lot of complexity to the authoring experience. Forms will have nested forms. It can be overwhelming.

Alternatives to Drupal Paragraphs

As I’m writing this, another approach to page building is gathering momentum in the Drupal universe. Layout Builder is currently an experimental module in core, and slated to ship as a stable release with Drupal 8.7. Layout Builder provides a slick drag-and-drop interface for editors to build pages from blocks and fields. We’re excited to see how Layout Builder develops, and to see how well it performs for large editorial websites. For websites with hundreds or thousands of articles, managing pages with Layout Builder may be difficult. As Drupal’s founder, Dries Buytaert, pointed out in a post late last year, “On large sites, the free-form page creation is almost certainly going to be a scalability, maintenance and governance challenge.”

Other open source CMS communities are seeing a similar rise in the demand to provide authors with flexible page-building tools. WordPress released Gutenberg, a powerful drag-and-drop editing experience that lets authors quickly build incredibly flexible pages from a massive library of components. It’s worth noting Gutenberg is not without challenges. It poses accessibility issues. Antithetical to the themes in this post, it does not necessarily produce structured content. It relies on proprietary tokens for referencing embedded blocks of content. But it is very flexible, and offers an expressive interface for authors. For Drupal users, there’s a Drupal port for Gutenberg.

For us at Aten, the balance comes back to making sure content is stored in a way that is structured, can be adaptive, is reusable, and is relatively easy to migrate. And that you, the writer, can easily build flexible web pages.

Structured and Adaptable: Drupal Paragraphs with Layout Control

We’ve been working on an approach that keeps Paragraphs in place as the primary way content is managed and stored, but also gives authors the ability to easily control layout. Using Drupal’s core Layout Discovery system, Entity Reference with Layout is a custom field type that combines layouts and Paragraphs. It’s still in very early experimental development, but we’re excited about the impact this approach might have on making it even easier to create flexible pages. And it uses Paragraphs for content storage, with the benefits we’ve already touched on: content is well-structured and relatively easy to migrate. It’s not as flexible or robust as Layout Builder, but might be a great option for authoring flexible pages with Paragraphs. (More on this in a future post.)

Reusable and Flexible: Advanced Paragraphs

Since Drupal Paragraphs are themselves collections of flexible fields, there are all kinds of interesting ways they can be applied to building complex publishing features. We’re working with a client in publishing who needs the ability to completely customize the way content appears on their home page. They would like to promote existing content to the homepage, but they may want to override article titles, images, and summaries. Since the article authors aren’t the same people editing the home page and other key listing pages, they didn’t want authors to have to think about all of those variations. The way content is presented on an article page isn’t always the best-suited for the homepage and other contexts. We used paragraphs to give home page editors the ability to drop articles onto the page, with fields for overriding everything they need to.

Where to Go From Here

Your CMS should make it easy to write content and build pages. If you’re interested in seeing a demo of Drupal Paragraphs, Layout Builder, or Gutenberg, drop us a line. We’d love to help.

Jan 31 2019

We do a lot of Drupal 8 migrations here at Aten. From older versions of Drupal and WordPress, to custom SQL Server databases, to XML and JSON export files: it feels like we’ve imported content from just about every data source imaginable. Fortunately for us, the migration system in Drupal 8 is extremely powerful. It’s also complicated. Here’s a quick-start guide for your next migration to Drupal 8.

First, a caveat: we rarely perform simple one-to-one upgrades of existing websites. If that’s all you need, skip this article and check out this handbook on Drupal.org instead: Upgrading from Drupal 6 or 7 to Drupal 8.

It’s Worth the Steep Learning Curve

Depending on what you’re trying to do, using the migrate system might seem more difficult than necessary. You might be considering feeds, or writing something custom. My advice is virtually always the same: learn the migrate system and use it anyway. Whether you’re importing hundreds of thousands of nodes and dozens of content types or just pulling in a collection of blog posts, migrate provides powerful features that will save you a bunch of time in the long run. Often in the short run, for that matter.

Use the Drupal.org Migrate API Handbooks

There’s a ton of great information on Drupal.org in the Migrate API Handbooks. Be prepared to reference them often – especially the source, process, and destination plugin handbooks.

Basic Steps

Here’s a much simplified overview of the high-level steps you’ll use to set up your custom Drupal 8 migration:

All Migrations

  • Enable the migrate module (duh).
  • Install Migrate Tools to enable Drush migration commands.
  • Install Migrate Plus as well. It provides a bunch of extensions, examples and plugins for migrations. Just assume you need it.
  • Create a custom module for your migration.
  • Use YAML configuration files to map fields from the appropriate source, specifying process plugins for necessary transformations, to the destination. The configuration files should exist in "my_migration_module/config/install/".
    (Pro tip: you’ll probably do a lot of uninstalling and reinstalling your module to update the configuration as you build out your migrations. Use “enforced dependencies” so your YAML configurations are automatically removed from the system when your module is uninstalled, allowing them to be recreated – without conflicts – when you re-enable the module.)

Enforced dependencies in your YAML file will look something like this:

dependencies:
  enforced:
    module:
      - my_migration_module

See this issue on Drupal.org for more details on enforced dependencies, or refer to the Configuration Management Handbooks.

Drupal-to-Drupal Migrations

  • If you’re running a Drupal-to-Drupal migration, run the “migrate-upgrade” Drush command with the “--configure-only” flag to generate stub YAML configurations. Refer to this handbook for details: Upgrade Using Drush.
  • Copy the generated YAML files for each desired migration into your custom module’s config/install directory, renaming them appropriately and editing as necessary. As stated above, add enforced dependencies to your YAML files to make sure they are removed if your module is uninstalled.

Process Plugins

Process plugins are responsible for transforming source data into the appropriate format for destination fields. From correctly parsing images from text blobs, to importing content behind HTTP authentication, to merging sources into a single value, to all kinds of other transformations: process plugins are incredibly powerful. Further, you can chain process plugins together, making endless possibilities for manipulating data during migration. Process plugins are one of the most important elements of Drupal 8 migrations.

Here are a few process plugin resources:

Continuously Migrate Directly from a Pantheon-Hosted Database

Most of our projects are hosted on Pantheon. Storing credentials for the source production database (for example, a D7 website) in our destination website (D8) code base – in settings.php or any other file – is not secure. Don’t do that. Usually, the preferred alternative is to manually download a copy of the production database and then migrate from that. There are plenty of times, though, where we want to perform continuous, automated migrations from a production source database. Often, complex migrations require weeks or months to complete. Running daily, incremental migrations is really valuable. For those cases, use the Terminus secrets plugin to safely store source database credentials. Here’s a great how-to from Pantheon: Running Drupal 8 Data Migrations on Pantheon Through Drush.
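As a rough sketch of that pattern, the destination site’s settings.php can read the stored credentials out of Pantheon’s private file area and define a source database connection. The secrets file location follows the Terminus secrets plugin’s convention; the key names are assumptions based on whatever you chose when storing the secrets, and the 'migrate' connection key must match what your migration source expects:

// In settings.php on the destination (Drupal 8) site.
$secrets_file = $_SERVER['HOME'] . '/files/private/secrets.json';
if (file_exists($secrets_file)) {
  $secrets = json_decode(file_get_contents($secrets_file), TRUE);
  // Define the source database connection for the migration. The key
  // names below are illustrative.
  $databases['migrate']['default'] = [
    'driver' => 'mysql',
    'database' => $secrets['db_name'],
    'username' => $secrets['db_user'],
    'password' => $secrets['db_password'],
    'host' => $secrets['db_host'],
    'port' => $secrets['db_port'] ?? 3306,
  ];
}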

A Few More Things I Wish I’d Known

Here are a few more things I wish I had known about back when I first started helping clients migrate to Drupal 8:

Text with inline images can be migrated without manually copying image directories.

It’s very common to migrate from sources that have inline images. I found a really handy process plugin that helped with this. In my case, I needed to first do a string replace to make image paths absolute. Once that was done, I ran it through the inline_images plugin. This plugin will copy the images over during the migration.

body/value:
  -
    plugin: str_replace
    source: article_text
    search: /assets/images/
    replace: 'https://www.example.com/assets/images/'
  -
    plugin: inline_images
    base: 'public://inline-images'

Process plugins can be chained.

Process plugins can be chained together to accomplish some pretty crazy stuff. Sometimes I felt like I was programming in YAML. This example shows how to create taxonomy terms on the fly. The static_map plugin allows you to map old values to new. In this case, if a value doesn’t match, it gets a null value and is skipped. Finally, the entity_generate plugin creates the new taxonomy term.

field_webinar_track:
  -
    plugin: static_map
    source: webinar_track
    map:
      old_tag_1: 'New Tag One'
      old_tag_2: 'New Tag One'
    default_value: null
  -
    plugin: skip_on_empty
    method: process
  -
    plugin: entity_generate
    bundle_key: vid
    bundle: webinar_track

Dates can be migrated without losing your mind.

Dates can be challenging. Drupal core has the format_date plugin that allows specifying the format you are migrating from and to. You can even optionally specify the to and from time zones. In this example, we were migrating to a date range field. Date range is a single field with two values representing the start and end time. As you can see below, we target the individual values by specifying them as ‘/’-delimited paths.

field_date/value:
  plugin: format_date
  from_timezone: America/Los_Angeles
  from_format: 'Y-m-d H:i:s'
  to_format: 'Y-m-d\TH:i:s'
  source: start_date
field_date/end_value:
  plugin: format_date
  from_timezone: America/Los_Angeles
  from_format: 'Y-m-d H:i:s'
  to_format: 'Y-m-d\TH:i:s'
  source: end_date

Files behind HTTP auth can be copied too.

One migration required copying PDF files as the migration ran. The download plugin allows passing in Guzzle options for handling things like basic auth. This allowed the files to be copied from an HTTP-authenticated directory without the need to have the files on the local file system first.

plugin: download
source:
  - '@_remote_filename'
  - '@_destination_filename'
file_exists: replace
guzzle_options:
  auth:
    - username
    - password

Constants & temporary fields can keep things organized.

Constants are essentially variables you can use elsewhere in your YAML file. In this example, base_path and file_destination needed to be defined. Temporary fields were also used to create the exact paths needed to get the correct remote filename and destination filename. My examples use an underscore to prefix the temporary field, but that isn’t required.

source:
  plugin: your_plugin
  constants:
    base_path: 'https://www.somedomain.com/members/pdf/'
    file_destination: 'private://newsletters/'

process:
  _remote_filename:
    plugin: concat
    source:
      - constants/base_path
      - filename
  _destination_filename:
    plugin: concat
    source:
      - constants/file_destination
      - filename

  # The destination field (uri here, as is typical for file entities)
  # consumes the temporary fields defined above.
  uri:
    plugin: download
    source:
      - '@_remote_filename'
      - '@_destination_filename'
    file_exists: replace
    guzzle_options:
      auth:
        - username
        - password

This list of tips and tricks on Drupal Migrate just scratches the surface of what’s possible. Drupalize.me has some good free and paid content on the subject. Also, check out the Migrate API overview on drupal.org.

Further Reading

Like I said earlier, we spend a lot of time on migrations. Here are a few more articles from the Aten blog about various aspects of running Drupal 8 migrations. Happy reading!

Jan 17 2019

If your library hosts community programs and events, you should check out Intercept: a new product for helping libraries run better events. Intercept makes it easy to create and manage events. Even better, it provides actionable reports to help measure success and recommend strategic improvements for your programs in the future.

Intercept Features

Intercept provides valuable features for event management, equipment reservations, room reservations and customer tracking.

Event Management

  • Easily add and edit events.
  • Book rooms at the time of event creation.
  • Host events at outside venues.
  • Make custom templates for quickly creating future events.
  • Create recurring events.
  • Set up registration for events, including waitlists.
  • Browse events as a list or grid-style calendar.
  • Filter events by type, audience, location, date and keyword.
  • Save and register for events.
  • See similar events based on type and location.
  • Receive recommendations for other events based on your preferences, events you’ve attended and/or saved.
  • See analysis for events.

Equipment Reservations

  • Browse and reserve available equipment.
  • Set reservation periods by item.
  • Manage and approve requests.
  • Report on equipment usage.

Room Reservations

  • Customize rooms and locations.
  • Browse rooms by type, capacity and timeframe.
  • Reserve rooms with validation to ensure rooms are not double-booked.
  • Deny or approve reservation requests, with email notifications.
  • Staff reservations are automatically approved.

Customer Tracking

  • Ability to integrate with popular Integrated Library Systems (ILS).
  • Integrates with Polaris ILS.
  • Single sign-on with website and ILS.
  • Allow attendees to scan into events with their library cards.
  • Gather and analyze feedback from customers.
  • Analyze event attendance numbers with population segmentation.
  • Download a CSV report on attendance.

Built By Libraries, For Libraries

Intercept was built as one part of a large redesign and redevelopment project with Richland Library. From the beginning, Richland’s vision was to both create a product to help measure the effectiveness of its own events, and to release that product to the wider community of public libraries. More than five years of user research, planning and beta testing have gone into this product to date. Intercept was designed and developed by a team intimately familiar with the problems that it solves.

Open Source

Intercept was architected as a suite of modules for Drupal 8, is open source, and is freely available to all. You can download an early version of the code from Drupal.org at https://drupal.org/project/intercept, and the most recent version will be available there soon. If you’re interested in learning more about using Intercept to help make your library’s events even better, we’d love to help! Just drop us a line on our contact page and we’ll be in touch right away.

Dec 19 2018

Decoupling Drupal is a popular topic these days. We’ve recently posted about connecting Drupal with Gatsby, a subject that continues to circulate around the Aten office. There are a number of great reasons to treat your CMS as an API. You can leverage the content modeling powers of Drupal and pull that content into your static site, your JavaScript application, or even a mobile app. But how to get started?

In this post I will first go over some basics about GraphQL and how it compares to REST. Next, I will explain how to install the GraphQL module on your Drupal site and how to use the GraphiQL explorer to begin writing queries. Feel free to skip the intro if you just need to know how to install the module and get started.

A Brief Introduction to GraphQL

Drupal is deep in development on an API First Initiative, and the core team is working on getting json:api into core. This exposes Drupal's content via a consistent, standardized solution that responds to REST requests and carries many advantages.

Recently the JavaScript community has become enamored with GraphQL, a query language touted as an alternative to REST for communicating with an API.

Developed by Facebook, GraphQL is now used across the web, from GitHub’s latest API to the New York Times redesign.

GraphQL opens up APIs in a way that traditional REST endpoints cannot. Rather than exposing individual resources with fixed data structures and links between resources, GraphQL gives developers a way to request any selection of data they need. Multiple resources on the server side can be queried at once on the client side, combining different pieces of data into one query and making the job of the front-end developer easier.

Why is GraphQL Good for Drupal?

GraphQL is an excellent fit for Drupal sites, which are made up of entities that have data stored as fields. Some of these fields could store relationships to other entities. For example, an article could have an author field which links to a user.

The Limitations of REST

Using a REST API with that example, you might query for “Articles”. This returns a list of article content including an author user ID. But to get that author’s content you might need to do a follow-up query per user ID to get that author’s info, then stitch together that article with the parts of the author you care about. You may have only wanted the article title, link and the author name and email. But if the API is not well designed this could require several calls to the server which return way more info than you wanted. Perhaps including the article publish date, its UUID, maybe the full content text as well. This problem of “overfetching” and “underfetching” is not an endemic fault with all REST-based APIs. It’s worth mentioning that json:api has its own solutions for this specific example, using sparse fieldsets and includes.

Streamlining with GraphQL

With GraphQL, your query can request just the fields needed from the Article. Because of this flexibility, you craft the query as you want it, listing exactly the fields you need (for example, the title and URL, then traversing the relationship to the user to grab the name and email address). It also makes it simple to restructure the object you want back; starting with the author, then getting a reverse reference to Articles. Just by rewriting the query you can change the display from an article teaser to a user with a list of their articles.
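For example, assuming an article content type with a field_author entity reference, a query built with the Drupal GraphQL module (introduced below) might look like this (camel-cased names like fieldAuthor follow the module's schema conventions):

{
  nodeQuery(filter: {conditions: [{field: "type", value: "article"}]}) {
    entities {
      entityLabel
      entityUrl {
        path
      }
      ... on NodeArticle {
        fieldAuthor {
          entity {
            entityLabel
          }
        }
      }
    }
  }
}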

Either of these queries can be written, and fields added or removed from the result, all without writing any code on the backend or any custom controllers.

This is all made possible by the GraphQL module, which exposes every entity in Drupal – from pages to users to custom data defined in modules – as a GraphQL schema.

Installing GraphQL for Drupal

If you want to get started with GraphQL and Drupal, the process requires little configuration.

  1. Install the module with Composer, since it depends on the vendor library graphql-php. If you're using a Composer-based Drupal install, use the command:
    composer require drupal/graphql
    to install the module and its dependencies.
  2. Enable the module; it will generate a GraphQL schema for your site which you can immediately explore.

Example Queries with GraphiQL

Now that you have GraphQL installed, what can you do? How do you begin to write queries to explore your site’s content? One of the most compelling tools built around GraphQL is the explorer, called GraphiQL. This is included in the installation of the Drupal GraphQL module. Visit it at:

/graphql/explorer

The page is divided into left and right sides. At the left you can write queries. Running a query with the button in the top left will display the response on the right pane.

GraphiQL with basic nodequery

Write a basic query on the left side, hit the play button to see the results on the right.
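For reference, a basic query like the one pictured might look like this (entityId and entityLabel are generic properties available on all nodes; your schema will offer many more):

{
  nodeQuery(limit: 5) {
    entities {
      entityId
      entityLabel
    }
  }
}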

As you write a query, GraphiQL will try to autocomplete to help you along.

GraphiQL autocomplete

As you type, GraphiQL will try to autocomplete

GraphiQL default properties

With entities, you can hit play to have it fill in all the default properties.

You can also dive into the live documentation in the far right pane. You'll see queries for your content types, the syntax for selecting fields as well as options for filtering or sorting.

GraphiQL documentation

Since the schema is self documenting, you can explore the options available in your site.

The documentation here uses autocomplete as well. You can type the name of an entity or content type to see what options are available.

GraphiQL with filter for content type

Add additional filter conditions to your query.

Filters are condition groups; in the above example I am filtering by the "article" content type.

In the previous example I am just getting generic properties of all nodes, like entityLabel. However, if I am filtering by the "Article" type, I would want access to fields specific to Articles. By defining those fields in a "fragment", I can substitute the fragment right into my query in place of those individual defaults.

GraphiQL substituting with a fragment

Use fragments to set bundle specific fields.

Because my author field is an entity reference, you'll see the syntax is similar to the nodes above. Start with entities, then list the fields on that entity you want to display. This would be an opportunity to use another fragment.
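Putting those pieces together, a query with a content type filter and a fragment might look like the following. Here fieldAuthor is an assumption based on a field_author entity reference field – substitute your own field's GraphQL name:

{
  nodeQuery(filter: {conditions: [{operator: EQUAL, field: "type", value: ["article"]}]}) {
    entities {
      ...ArticleTeaser
    }
  }
}

fragment ArticleTeaser on NodeArticle {
  entityLabel
  fieldAuthor {
    entity {
      entityLabel
    }
  }
}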

Now that the query is displaying results how I want, I can add another filter to show different content. In this case, a list of unpublished content.

GraphiQL adding an additional filter

Add another filter to see different results.

Instead of showing a list of articles with their user, I could rearrange this query to get all the articles for a given user.

GraphiQL Using a reverse field

Display reverse references with the same fragment.

I can reuse the same fragment to get the Article exactly as I had before, or edit that fragment to remove just the user info. The nodeQuery just changes to a userById, which takes an ID similar to how the nodeQuery can take a filter. Notice the reverseFieldAuthorNode; this allows us to get any content that references the user.
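That reshaped query might look roughly like this, reusing the fragment from before (the user ID is a placeholder):

{
  userById(id: "1") {
    entityLabel
    reverseFieldAuthorNode {
      entities {
        ...ArticleTeaser
      }
    }
  }
}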

Up Next: Building a Simple GraphQL App

If you’re new to GraphQL, spend a little time learning how the query language works by practicing in the GraphiQL Explorer. In the next part of this post I will go over some more query examples, write a simple app with create-react-app and apollo, and explain how GraphQL can create and update content by writing a mutation plugin.

Oct 26 2018
Oct 26

To get started with decoupling Drupal with Gatsby, check out our previous screencasts here.

In this screencast, I'll be showing you how to automate content deployment. So when you update the content on your Drupal site, it will automatically rebuild/update your Gatsby site on Netlify.

[embedded content]

Download a Transcription of this Screencast

Download Transcription

Oct 22 2018
Oct 22

Lately, we've been using Gatsby with Drupal for projects where we need it decoupled.

Gatsby is a unique project. It's most often evaluated as a static site generator compared to the likes of Jekyll and Hugo. While Gatsby can generate a static website, it's more accurately described as a content mesh.

Gatsby uses GraphQL to pull in one or more sources to generate site content. There is a large list of Gatsby source plugins including: Drupal, WordPress, YouTube, Twitter, Hubspot, Shopify and Google Sheets just to name a few. It's optimized for blazing fast performance. Since it's built using React, it can be used to build hybrid sites. Along with generating static pages it can also render client-side content. It can pull in dynamic data, add password protected content and take advantage of features typically not found in static generated sites.

Similar to Drupal, Gatsby is open source. It has a devoted and ever-growing community, with an expanding plug-in library which makes building your site even easier.

With this combination we can leverage Drupal as a content authoring platform and utilize Gatsby to render the frontend.

The screencasts below show how quickly you can configure a Drupal 8 website to pair with Gatsby.

[embedded content]

With our Drupal 8 website set up, the next step is to configure Gatsby to pull the Drupal site's content.

[embedded content]

Now it's time to automate your Drupal to Gatsby content deployment to Netlify.

Download a Transcription of this Screencast

Download Transcription

Sep 27 2018
Sep 27

As of Fall 2018 and the release of Drupal 8.6, the Migrate module in core is finally stabilizing! Hopefully Migrate documentation will continue to solidify, but there are plenty of gaps to fill.

Recently, I ran into an issue migrating Paragraph Entities (Entity Reference Revisions) that had a few open core bugs and ended up being really simple to solve within prepareRow in the source plugin.

Setup

In my destination D8 site, I had a content type with a Paragraph reference field. Each node contained one or more Paragraph entities. This was reflected by having a Node migration with a dependency on a Paragraph Entity migration.

With single value entity references, the migration_lookup plugin makes it really easy to look up entity reference identifiers that were previously imported. As of September 2018, there is an open core issue to allow multiple values with migration_lookup. Migration lookup uses the migration map table created in the database to connect previously migrated data to Drupal data. The example below looks up the taxonomy term ID based on the source reference (topic_area_id) from a previously run migration. Note: You will need to add a migration dependency to your migration yml file to make sure migrations are run in the correct order.

 field_resource_category:
   plugin: migration_lookup
   migration: nceo_migrate_resource_category
   no_stub: true
   source: topic_area_id

Solution

Without using a Drupal Core patch, we need a way to do a migration_lookup in a more manual way. Thankfully prepareRow in your Migrate source plugin makes this pretty easy.

Note: This is not a complete Migrate source plugin. All the methods are there, but I’m focusing on the prepareRow method for this post. The most important part of the code is manually querying the Migrate map database table created in the Paragraph Entity migration.

<?php
 
namespace Drupal\your_module\Plugin\migrate\source;
 
use Drupal\Core\Database\Database;
use Drupal\migrate\Plugin\migrate\source\SqlBase;
use Drupal\migrate\Row;
 
/**
* Source plugin for Sample migration.
*
* @MigrateSource(
*   id = "sample"
* )
*/
class Sample extends SqlBase {
 
 /**
  * {@inheritdoc}
  */
 public function query() {
   // Query source data.
 }
 
 /**
  * {@inheritdoc}
  */
 public function fields() {
   // Add source fields.
 }
 
 /**
  * {@inheritdoc}
  */
 public function getIds() {
   return [
     'item_id' => [
       'type' => 'integer',
       'alias' => 'item_id',
     ],
   ];
 }
 
 /**
  * {@inheritdoc}
  */
 public function prepareRow(Row $row) {
    // In migrate source plugins, querying the migrate database is easy.
   // Example: $this->select('your_table').
   // Getting to the Drupal 8 db requires a little more code.
   $drupalDb = Database::getConnection('default', 'default');
 
   $paragraphs = [];
   $results = $drupalDb->select('your_migrate_map_table', 'yt')
     ->fields('yt', ['destid1', 'destid2'])
     ->condition('yt.sourceid2', $row->getSourceProperty('item_id'), '=')
     ->execute()
     ->fetchAll();
   if (!empty($results)) {
     foreach ($results as $result) {
       // destid1 in the map table is the nid.
       // destid2 in the map table is the entity revision id.
       $paragraphs[] = [
         'target_id' => $result->destid1,
         'target_revision_id' => $result->destid2,
       ];
     }
   }
 
    // Set a source property that can be referenced in yml.
    // Source properties can be named however you like.
    $row->setSourceProperty('prepare_multiple_paragraphs', $paragraphs);
 
   return parent::prepareRow($row);
 }
 
}

In your migration yml file, you can reference the prepare_multiple_paragraphs that was created in the migrate source plugin like this:

id: sample
label: 'Sample'
source:
 plugin: sample
process:
 type:
   plugin: default_value
   default_value: your_content_type
 field_paragraph_field:
   source: prepare_multiple_paragraphs
   plugin: sub_process
   process:
     target_id: target_id
     target_revision_id: target_revision_id

Sub_process was formerly the iterator plugin and allows you to loop over items. This will properly create references to multiple Paragraph Entities. It will be nice when the migration_lookup plugin can properly handle this use case, but it’s good to understand how prepareRow can provide flexibility.

Sep 21 2018
Sep 21

This year I’ve been working on a fun new Drupal-based event management application for libraries. During the course of development, I’ve become convinced that–beyond caching and naming things–the third level of Computer Science Hell is reserved for time zone handling. Displaying and parsing time zones proved to be a fruitful source of unforeseen issues and difficult to diagnose bugs.

The application we’ve been working on uses a progressively decoupled architecture, meaning interfaces that demand greater interactivity are self-contained client-side applications mounted within the context of a larger server-rendered Drupal site. This allows us to focus our efforts, time and budget on specific key interfaces, while taking advantage of Drupal’s extensive ecosystem to quickly build out a broader featureset.

Some examples of where time zone bugs reared their nasty heads:

  1. In certain cases, events rendered server-side displayed different date times than the same event rendered client-side.
  2. In a filterable list of events, time-based filters, such as “Show events between September 6th and 9th,” would return unexpected results.
  3. A user’s list of upcoming events would sometimes not include events starting within the next hour or two.
  4. Two logged in users could see a different time for the same event.

Let’s talk about that last one, as it’s a fairly common Drupal issue and key to understanding how to prevent the former examples.

User’s Preferred Time Zone

Drupal handles time zone display at the user level. When creating or editing their account, a user can specify their preferred time zone.

All users have a preferred time zone–even the logged-out anonymous user. A Drupal site is configured with a default time zone upon creation. This default time zone serves as the preferred time zone of all anonymous users. It also serves as the default preferred time zone for any new user account as it’s created.

When date values are rendered, the user’s preferred time zone is taken into account. Let’s say you have a Drupal site with a default time zone set to America/New_York. If you have an event set to start at 4pm EST, all anonymous users – since their preferred time zone matches the site default – will see the event start at 4pm.

However, a user with a preferred time zone of America/Phoenix will see that same event’s time as either 2pm or 1pm – literally depending on how the planets have aligned – thanks to the user’s time zone preference and Arizona’s respectable disregard for daylight saving time.

So the first thing you need to understand to prevent time zone bugs is users can set their preferred time zone.

Dates are Stored in UTC

One thing Drupal does really well is store dates in Coordinated Universal Time or UTC – I’m not sure that’s how acronyms work – but whatever. UTC is a standard, not a time zone. Time zones are relative to UTC. Dates are stored in the database as either Unix timestamps or ISO-8601 formatted strings. In either case they are set and stored in UTC.

Likewise, Javascript date objects are stored internally as UTC date strings. This allows dates to be compared and queried consistently.

The Disconnect

We now know Drupal and JS both store dates internally in a consistent UTC format. However, JS dates, by default, are displayed in the user’s local time zone – the time zone set in their computer’s OS. Drupal, on the other hand, displays dates in your user account’s preferred time zone. These two won’t always match up.

If the browser and Drupal think you are in two different time zones, dates rendered on the server and client could be off by any number of hours.

For example, you may be in Denver and have America/Denver set as your preferred time zone, but if you’re logged out and Drupal has a default time zone set to America/New_York, server-rendered times will display 2 hours ahead of client-rendered times.

In another scenario, you may live in the same time zone configured as Drupal’s default and don’t have a preferred time zone set. Everything looks fine until you travel a couple time zones away, and now all the client-rendered dates are off.

This is the root cause of the bugs in the first three examples above.

The browser couldn’t care less what preferred time zone you have set in Drupal. It only knows local time and UTC time. Unlike PHP, JS currently does not have native functions for explicitly setting the time zone on a Date object.

Keeping Client- and Server-Rendered Dates in Sync

Now we know Drupal and the browser store dates in UTC. This is good. To make our lives easier, we’ll want to keep our client-side dates in UTC as much as possible, so that when we query the server with date-based filters, comparisons like “greater than or equal to” behave consistently.

But we need to ensure our client-rendered dates match our server-rendered dates when they are displayed on the page. We also need to ensure dates entered via form fields are parsed properly so they match the user’s preferred time zone. There are two things we need to do.

  1. Let the client know the user’s preferred time zone.
  2. Parse and display client-side dates using this time zone.

Passing the User’s Preferred Time Zone

To do this, we’ll attach the preferred time zone via drupalSettings. This can be done via hook_page_attachments() in a .module file. If you don’t know how to create a custom module, there are plenty of resources online.

/**
 * Implements hook_page_attachments().
 */
function my_module_page_attachments(array &$attachments) {
  // Add the user's time zone to drupalSettings.
  $attachments['#attached']['drupalSettings']['my_module']['user'] = [
    'timezone' => drupal_get_user_timezone(),
  ];
  // Cache this per user.
  $attachments['#cache']['contexts'][] = 'user';
  // Clear the cache when the user is updated.
  $current_user = \Drupal::currentUser();
  $attachments['#cache']['tags'][] = 'user:' . $current_user->id();
}

With this, we can now access the user’s preferred time zone in the browser from the drupalSettings global, like so:

const userTimeZone = drupalSettings.my_module.user.timezone;
// Ex: ‘America/Denver’

Displaying, Parsing and Manipulating Dates in the User’s Time Zone

Now that we know the user’s preferred time zone client-side, we can ensure any dates that are displayed or parsed – for example, from an input – take the correct time zone into account.

Currently there isn’t good native support for this in browsers. We have to either write our own functionality or use a third party date library. I typically use Moment.js for this. Moment has been around for a while and, as far as I know, has the best time zone handling.

To use Moment’s time zone handling, you’ll need to load the library with time zone support and the most recent time zone dataset. The dataset is required to map time zone names to the appropriate UTC offset – taking into account Daylight Saving Time at the appropriate time of year.

For all the following examples, we’ll assume you’ve loaded the time zone build of Moment with data bundled together as a browser global. There are a number of other ways to import Moment via npm if you prefer to use it as a module.

<script src="https://momentjs.com/downloads/moment-timezone-with-data-2012-2022.min.js"></script>

Setting a Time Zone on Dates

To begin with, we need to tell Moment what time zone we are dealing with. We’ll also assume userTimeZone has been set to the string value from drupalSettings above.

// Create a date for now in the user’s time zone
const date = moment().tz(userTimeZone);

Just like Drupal and native JS Date objects, Moment stores the underlying date in UTC. The time zone plugin merely allows us to include a reference to a specific time zone which will be taken into account when manipulating the date with certain methods and formatting the date as a string.

const date = moment('2018-09-13 12:25:00');
// Parsed in the environment's local zone – America/Denver in this example.
date.unix();
// 1536863100
date.format('HH:mm');
// '12:25'
date.tz('America/New_York').format('HH:mm');
// '14:25'
date.unix();
// 1536863100 – the underlying UTC instant is unchanged

In this case, we are simply using Moment to display a date in a specific time zone. The underlying UTC date never changes.

Manipulating a Date

Whereas the format() method in Moment simply outputs a string representing a date, other methods manipulate the underlying date object. It’s important to take time zone into consideration when manipulating a date with certain operations to ensure the resulting UTC date is correct.

For example, Moment has a handy startOf() method that lets you set the date to, for example, the start of the day.

We instinctively think of the start of day as being 12:00 AM. But 12:00 AM in Denver is a different UTC time than 12:00 AM in New York. Therefore, it’s important to ensure our Moment object is set to the desired time zone before manipulating it. Otherwise we will get different results depending on the time of day and local time zone in which the method was executed.
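Here’s a quick sketch of the pitfall, using an illustrative date and the America/Denver zone:

// A date at 20:30 on Sept 13 in Denver is 02:30 UTC on Sept 14.
const date = moment.utc('2018-09-14 02:30:00');

// startOf('day') in UTC mode snaps to midnight UTC on Sept 14...
date.clone().startOf('day').toISOString();
// '2018-09-14T00:00:00.000Z'

// ...but the user's "start of day" is midnight Sept 13 in Denver,
// which is an entirely different UTC instant.
date.clone().tz('America/Denver').startOf('day').toISOString();
// '2018-09-13T06:00:00.000Z' (midnight MDT, UTC-6)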

Parsing a Date

In some cases, we need to parse a date. For instance, to correctly convert a Date Time field from Drupal into a JS Date object, we need to ensure it’s parsed into the right time zone. This is pretty straightforward as the date output from Drupal is in UTC. However, the client doesn’t know that. We can simply append the Z designator to indicate this date in UTC.

moment(`${someDateValue}Z`);

I typically wrap this in a function for easy reuse:

export const dateFromDrupal = date => moment(`${date}Z`).toDate();

And going the reverse direction:

export const dateToDrupal = date => date.toISOString().replace('.000Z', '');

Another use case for parsing dates is handling user input. For example, you might have a filtering interface in which the user enters a date to show events on a given day. The HTML date input uses a simple string representation of a day, such as 2018-09-15, in the user’s local time zone.

Now, if we want to take this input and query Drupal for events with a start time between the start and end of this day, we’ll need to convert this string value into a UTC date. We need to parse this date in the user’s time zone; otherwise we might not get accurate results. In fact, they could be as much as a day off from what we’re expecting.

function changeEventHandler(event) {
  if (event.target.value) {
    const inputDate = moment.tz(event.target.value, userTimeZone);
    // Clone before calling startOf()/endOf() – they mutate the moment
    // in place, so reusing one object would give two identical values.
    const startValue = inputDate.clone().startOf('day');
    const endValue = inputDate.clone().endOf('day');
    // Do something with the start and end date.
  }
}

Date and time zone handling is one thing you should not take for granted when considering decoupling a Drupal application. Having a good understanding of how dates are stored and manipulated will help you identify, diagnose and avoid date related bugs.

Keep these things in mind throughout the development process and you should be fine:

  1. Store your dates and pass them around in UTC. This will help keep things consistent and reduce any time zone related bugs.
  2. Take time zone into account when dealing with user input and output. Time zones are relative to the user, so keeping those operations as close to the user interface as possible should keep the internals of the application simple.
  3. When considering any open source modules that deal with dates, such as React components or date handling libraries, make sure they allow for proper time zone handling. This will save you some headaches down the road.
  4. Test your application using different user time zones in Drupal and by manually overriding your time zone on your local machine. Sometimes bugs are not apparent until you are on a drastically different time zone than the server.

Good luck!

Apr 27 2018
Apr 27

In the previous post we looked into Pantheon hosting and how we can use it to easily create a suite of similar websites without having to build them individually each time. Often the requirement isn’t only easily creating new sites, but having to maintain them easily as well. When you have dozens or hundreds of websites that need changes applied to them, managing each one individually through Pantheon’s dashboard becomes a bottleneck. Fortunately Pantheon offers a command line interface that allows developers to automate much of that maintenance. In this post we’ll take a look at using Terminus to manage our sites.

Understanding Pantheon’s Framework

Before we can start rolling out features to multiple sites, it is helpful to understand how Pantheon groups the websites it hosts. Websites can be first grouped into an Organization. Within that, they can be tagged in any manner that makes sense for your needs. Both the organization and the tags can be used to filter sites into more targeted groups.

Each site then gets three environments; dev, test, and live are their machine names. Those machine names are important, as we’ll need to know which environment we’re targeting when we do our deployments. A single site also gets a machine name, like my-awesome-site. The combination of site name and environment name create a single instance identifier, which we use in our Terminus commands. For example, to clear Drupal’s cache on a live environment we’d run:

terminus remote:drush my-awesome-site.live -- cache-rebuild

A deployment on Pantheon has to follow a specific process, whether done via the dashboard or through Terminus. First, code must be deployed to the dev environment. Normally this is done with Git by pushing new code into the master branch on Pantheon’s repo. For features we’re deploying to multiple sites, the code must be pushed to the Upstream and then pulled from there. In the dashboard, this takes the form of a button that appears to alert you to new changes. In Terminus, you’d run the following command. Note, the --updatedb flag ensures any Drupal database updates get run as well.

terminus upstream:updates:apply my-awesome-site.dev --updatedb

Second, we have to move those updates to testing and then to production. Again, the dashboard provides a button on those environments when there are updates that can be made to them. In Terminus, this is done with:

terminus env:deploy my-awesome-site.test --updatedb --cc --note="Deployed new feature."

As before --updatedb runs the database updates, --cc rebuilds Drupal’s cache, and --note is the description of the updates that gets added to the Pantheon dashboard.

There are many other actions you can handle with Terminus. Their documentation covers the full list. However, out of the box Terminus has the same limitation that the dashboard has. You can only run a command on one site at a time. Thankfully, Terminus has additional plugins that solve this problem for us.

New Commands with Terminus Plugins

Terminus is built on PHP and managed with Composer. This allows for new commands to be built and distributed on Pantheon’s Terminus Plugin Library. We’ll need to install two plugins to run Terminus commands on multiple sites at once: Terminus Mass Update and Terminus Mass Run. Mass Update is created by Pantheon and runs the upstream:updates:apply command on a list of sites that get piped into it. Mass Run builds on that idea, by using the same piping logic and implements it onto more commands. With it you can run Drush commands, create site backups, and deploy code among other things.

To get the list of sites, we’ll use the org:site:list command. We could also use site:list, however since Custom Upstreams are an Organization level feature we’ll more than likely want to filter by Organization; org:site:list takes the name of the organization we want to filter by. To get a list of the Organizations you have access to, run terminus org:list. This returns both the machine name and the ID number of the Organizations, either will work for org:site:list.

Running terminus org:site:list aten will return a table of all sites in Aten’s Organization account. However, we still might only want a subset of those sites. This is where tagging comes in. Adding the --tag flag to our command lets us get only sites we’ve tagged with whatever is passed in. To see all sites tagged with “US” our command becomes terminus org:site:list aten --tag=US. This gets us closer, however it still returns a table of all site information. We only need the site ID numbers as a list for our Mass Run and Mass Update commands. To get this list we’ll add --format=list to our command, making the entire thing:

terminus org:site:list aten --tag=US --format=list

Now that we have a list of the site IDs we want to update, all we need to do is pipe that list into our plugin commands. To deploy a new feature from our upstream, we’d run:

terminus org:site:list aten --tag=US --format=list | terminus site:mass-update:apply --updatedb

Moving that feature through Pantheon’s environments is:

terminus org:site:list aten --tag=US --format=list | terminus env:mass:deploy --sync-content --cc --updatedb --env=test --note="Updated Drupal Core."

Removing a user from all sites they exist on becomes:

terminus org:site:list aten --tag=US --format=list | terminus remote:mass:drush --env=live -- ucan bad-user

Long Commands, Amazing Results

At this point you’ve probably noticed the commands we’re using have become very verbose. This is one downside of this approach: the commands themselves are not intuitive at first glance. For common tasks creating aliases can help simplify this. Leveraging the terminal’s history to bring up past commands and then modifying them speeds up more one-off tasks. But the ability to manage our websites en masse becomes a huge time saver over clicking our way through the dashboard for dozens of sites.
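For example, a couple of shell aliases can hide the boilerplate. The names here are arbitrary; adjust the org and tag to your own setup:

alias tsites='terminus org:site:list aten --tag=US --format=list'
alias tsupdate='terminus org:site:list aten --tag=US --format=list | terminus site:mass-update:apply --updatedb'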

Apr 18 2018
Apr 18

Over the last couple years, organizations have been coming to us with a new problem. Instead of needing a single website, they need dozens, if not hundreds. They might be large universities with many departments, an association with independent franchises nationwide, or a real estate developer with offerings all over the world. Often, these organizations already have several websites supported by different vendors and technologies. They’ve become frustrated with the overhead of maintaining Drupal sites built by one vendor and Wordpress sites by another. Not to mention the cost of building new websites with a consistent look and feel.

While the details may vary, the broad ask is the same: how can we consolidate various websites onto a single platform that can be spun up quickly (preferably without developer involvement) and updated and maintained en masse, while keeping enough structure for consistency and enough flexibility for customization? Essentially, they want to have their cake and eat it too.

Over this series of posts, we’ll break down the various parts of this solution. We’ll first look at Pantheon’s hosting solution, and how its infrastructure is set up perfectly to give clients the autonomy they want. Then we’ll look at the command line tools that exist for developers to easily manage updates to dozens (if not hundreds) of websites. Lastly, we’ll look at the websites themselves and how Drupal 8 was leveraged to provide flexible website instances with structured limits.

Pantheon and Upstreams

Pantheon is a hosting solution designed specifically for Drupal and Wordpress websites. For individual sites they offer a lot of features, however the ones we’re most interested in are single click installations of a new website and single click updates to the code base. Using a feature called Upstreams, users can create fresh installs of Drupal 7, Drupal 8, or Wordpress that all reference a canonical codebase. When new code is pushed to any of those Upstreams, any site installed from it gets notified of the new code, which can be pulled into the instance with the click of a button.

Outside of the default options Pantheon maintains internally, developers can also build their own Custom Upstreams for website creation. Anyone with access to the Upstream can log into Pantheon and click a button to install a new website based on that codebase. In short this codebase will handle installing all of the features every website should have, establish any default content necessary, and be used to roll out new features to the entire platform. This setup allows non-technical users to easily create new websites for their various properties, and then handoff specific websites to their appropriate property managers for editing. We’ll go over more specifics of this codebase in a later post.

Since a developer is no longer required for the creation of individual sites, this frees up a lot of time (and budget) for building new features or keeping on top of maintenance. The process for rolling out updates is simple: the developer writes code for a new feature and pushes it to the upstream repository. Once pushed, every site connected to this upstream will get an alert about new features and a shiny button that pulls them in with a single click.

Pantheon and Organizations

At this point it’s worth mentioning that Custom Upstreams are a feature of a special account type called an Organization. An organization is used to group multiple websites, users, and Custom Upstreams under one umbrella. Organizations also come with additional features like free HTTPS and code monitoring services. It’s recommended that each organization sign up for their own organization account, rather than use one tied to their development partner. This gives them full control over who can create new websites using their Custom Upstream, who can manage all their websites, and who can only access specific websites.

Organization accounts and Custom Upstreams go a long way in helping organizations reduce the overhead they may have from managing several properties simultaneously. Having the option to create an infinite number of websites in-house helps reduce the cost of growth. Having every website using the same codebase means new features can easily be rolled out to the entire platform and security vulnerabilities can be handled quickly.

The only downside with this approach is updates are generally applied one site at a time. The developer can push the code to the Custom Upstream, but it’s necessary to log into every website and click the button to update that site. For a handful of sites, this might be manageable. For dozens to hundreds, this problem becomes tedious. In the next post we’ll look at some of the scripted solutions Pantheon has for applying and managing an ever growing number of websites at once.

Mar 20 2018
Mar 20

Every generation is a little different. They bring different beliefs and perspectives tied to their upbringing. Generation Z, those born in 1995 or later, have been brought up in a world where information is at their fingertips. They are digitally connected through laptops, tablets and smartphones. If they have a question, all they have to do is pull out a device and ask Siri, Google or Alexa. They have a drive for getting information, learning new things and making an impact.

Gen Z students interested in Stanford are no different. We've often heard Stanford describe their students as adventurous, highly motivated, and passionate in their desire to come together and deepen their learning. During their time at Stanford, they will discover what motivates them and how they can impact the world after graduation. This is true of full-time Stanford students, as well as visiting students participating in the Summer Session. Stanford’s Summer Session provides current Stanford students, high schoolers and students at other universities with the unique opportunity to complete a summer at Stanford – getting a full experience of the coursework and college life Stanford offers.

The program isn’t cheap, and not every student can afford it.

We recently worked with the Stanford Summer Session communications team to combine the high school and college level websites into one site with a fresh design and structure. We kicked off the project with a discovery phase that included user surveys and stakeholder interviews.

The Problem

Image 1 Stanford Summer Session tuition table before the redesign

The old tuition tables from Stanford Summer Session. Students had to navigate the fees and add all the relevant fees up on their own.

As we learned from the user surveys, students are cost-aware individuals, just like Gen Zers all over the nation. Corey Seemiller and Meghan Grace in Generation Z Goes to College couldn’t have put it any better: “Anxiety over being able to afford a college education is forefront on the minds of these students.” Prospective Summer Session students were specifically looking for ways to make the program fit within their budget. This message was amplified by the interviews we conducted with the Summer Session staff, who noted that tuition information was buried, and in some cases, scattered throughout the site.

Through more discussion with stakeholders and students, we learned that students struggled with deciphering the available tuition and fee tables, inhibiting them from learning how much the program would cost (see Image 1). Additionally, as fees and tuition changed every year, staff found it painful and time-consuming to update information in the old system.

The Solution

So what did we do to help these students out? We created a one-stop-shop to get an estimate of the cost – welcome the tuition and fees calculator. By answering a few simple questions, students can get a true estimate of the total cost for their summer at Stanford. If they’re not happy with the results, they can tweak their answers to help make the program fit their budget.

As an added bonus, the system makes it really easy for staff to update the costs each year! On the admin side, each question and list of answers are fully editable. The dollar amounts can be changed for each student type to ensure the estimate will stay accurate for each coming school year. On the front end, the questions are organized, options are shown or hidden depending on previous answers, and the total amount is tallied on the final screen. Vue.js allowed us to build a complex interface simply, while making the static data more engaging.
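As a rough illustration of that pattern (this is not the production code – the questions, rates, and component structure are invented for the example), answers live in a Vue component’s data and the estimate is a computed property that re-tallies whenever an answer changes:

// Illustrative only; field names and dollar amounts are made up.
new Vue({
  el: '#tuition-calculator',
  data: {
    studentType: 'high-school', // "What type of student are you?"
    units: 8,                   // "How many units will you take?"
    onCampus: true,             // "Will you live on campus?"
    // Editable on the admin side; hard-coded here for the example.
    rates: {
      'high-school': { perUnit: 1500, housing: 5000 },
      'college': { perUnit: 1200, housing: 4500 },
    },
  },
  computed: {
    estimatedTotal() {
      const rate = this.rates[this.studentType];
      return this.units * rate.perUnit + (this.onCampus ? rate.housing : 0);
    },
  },
});

In the template, the total is simply bound with {{ estimatedTotal }}, so every tweak to an answer immediately updates the estimate.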

How We Got There

Hopefully you’re in love with the solution we came up with, and maybe you’re wondering how we got here. Well, you already know we did some research – surveyed current students and interviewed staff – to learn about the problems they were facing. We then brought our two teams together to brainstorm ideas using the Core Model exercise.

Core Model worksheet for the tuition calculator

We took a few minutes to sketch out the proposed solution.

Sketches of the tuition calculator

These ideas were then further refined in wireframes and design:

Wireframes for the Stanford Summer Session Tuition and Fees Calculator Design Comps for the Stanford Summer Session Tuition and Fees Calculator

And finally developed in Drupal 8.

The tuition and fees calculator was designed to provide Gen Zs with the information they need to financially plan their summer at Stanford, but all generations can benefit from it. Since launch in November 2017, the calculator has become one of the top 10 visited pages of the redesigned site.

Feb 22 2018
Feb 22

When using traditional APIs your application is typically requesting or pulling data from an external service, requiring a request for fresh data if you want to see recent changes. When using webhooks, that process is reversed: data is pushed from an external service in real-time keeping your application more up to date and your project running more efficiently. Here are a few examples:

  • Facebook - Receive an alert anytime a message is read
  • Stripe - Get alerted anytime a transaction comes through
  • Eventbrite - Get alerted if an event is created or updated

This of course is not an exhaustive list; you'll need to check the application you are integrating with to see if they are implementing webhooks. A Google search like "Stripe Webhooks" is a good first step.

Implementing a webhook in your application requires defining a URL to which your webhook can push data. Once defined, the URL is added to the application providing the webhook. In Drupal 8, controllers are a straightforward way to define a path. See the complete code for an example.
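If you haven’t built one before, a route is a small YAML file pointing at a controller method. A minimal sketch, with the module name, path, and class as placeholders:

# my_module.routing.yml
my_module.webhook_capture:
  path: '/webhook/capture'
  defaults:
    _controller: '\Drupal\my_module\Controller\WebhookController::capture'
  requirements:
    # The webhook service can't log in, so allow anonymous access and
    # authenticate the request yourself (see Security below).
    _access: 'TRUE'
  methods: [POST]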

When the webhook is fired it hits the defined URL with applicable data. The data that comes from a webhook is called the payload. The payload is often a JSON object, but be sure to check the application’s documentation to see exactly what you should be expecting. Capturing the payload is straightforward using the Request object available in a controller like this:

public function capture(Request $request) {
  $payload = $request->getContent();
}

If your payload is empty, you can always try some vanilla PHP:
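// Read the raw request body straight from PHP's input stream.
$payload = file_get_contents('php://input');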

Inspecting the Payload

Debugging webhooks can be a bit challenging if you are developing locally because your local environment typically does not have a public URL. Further, some webhooks require that the receiving URL implement SSL, which can also present challenges locally. The following options can help you navigate debugging webhooks locally.

Easiest

When capturing the payload, you can log it in Drupal. This option requires pushing your code up to a publicly available URL (a dev or staging environment).

$this->logger->debug('<pre>@payload</pre>', ['@payload' => $payload]);

Once you know what the payload looks like, you can copy it, modify it and make your own fake webhook calls locally using Postman. Feel free to checkout the importable Postman example in the repo.

Most Flexible

There is a utility called ngrok that allows you to expose your local environment with a publicly available URL; if you anticipate a lot of debugging it is probably worth the time to set up. Once ngrok is in place, you use the same logging method as above or use XDEBUG or something similar to inspect the payload. Ngrok will give you a unique, public URL which you can register, but which forwards to a server you have running on localhost. You can even use it with a local server that uses vhosts, such as yoursite.test with the command:

ngrok http -host-header=rewrite yoursite.test:80

Capturing and Processing the Payload

I'm a big fan of Drupal's queue system. It allows quick storage of just about anything (including JSON objects) and a defined way to process it later on a CRON run.

In your controller, when the payload comes in, immediately add the payload to your defined queue rather than processing it right away. This will make sure it is always running as efficiently as possible. You can of course process it right away if you choose to do so and skip the rest of this post.

$this->queue->createItem($payload);

Later when the queue runs, you can process the payload and do what you need to do, like create a node. Here is an example from the queue plugin (see ProcessPayloadQueueWorker.php for the full code):

 public function processItem($data) {   
   // Decode the JSON that was captured.
   $decode = Json::decode($data);
   // Pull out applicable values.
   // You may want to do more validation!
   $nodeValues = [
     'type' => 'machine_name_here',
     'status' => 1,
     'title' => $decode['title'],
     'field_custom_field' => $decode['something'],
   ];
 
   // Create a node.
   $storage = $this->entityTypeManager->getStorage('node');
   $node = $storage->create($nodeValues);
   $node->save();
 }

Once a queue is processed on CRON, the item is removed from the queue. Check out Queue UI module for easy debugging.

Security

As when building any web application, security should be a major consideration. While not an exhaustive list, here are a few things you can do to help make sure your webhook stays secure.

  • Check your service's webhook documentation to see what authentication protocols they provide.
  • Create your own token that only your application and the webhook service know about. If it is not included, do not accept the request. See the authorize method in the controller (a minimal sketch follows this list).
  • Instead of processing the payload and turning it into a node, consider making an API call back to the service using the ID from the payload and requesting the data to ensure its authenticity.
  • You should consider sanitizing content coming from the payload.
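Here’s a minimal sketch of that token check as a controller method. The header name and config storage are placeholders – match them to whatever your webhook service supports:

/**
 * Checks a shared secret token before accepting the request.
 */
protected function authorize(Request $request) {
  // The service is assumed to send the token in a custom header.
  $token = $request->headers->get('X-Webhook-Token');
  $expected = \Drupal::config('my_module.settings')->get('webhook_token');
  // hash_equals() avoids timing attacks when comparing secrets.
  return !empty($token) && !empty($expected) && hash_equals($expected, $token);
}

In capture(), call this first and return a 403 response before touching the payload if it fails.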

Once you implement a Webhook, you'll be hooked! Here's all the code packaged up.

There are of course Drupal contrib modules around webhooks. I encourage you to check them out, but if you have specific use cases or complex needs, rolling your own is probably the way to go.

Feb 15 2018
Feb 15

In the past I’ve struggled with the decision of whether or not to start a new Drupal project with a distribution. Since Drupal 8 has evolved I’ve noticed the decision has shifted from whether or not to use one, to which one is the right fit. Two things that are fairly new to distributions are sub-profiles and build tools, both of which have influenced the way I approach a new Drupal project.

Sub-profiles

Sub-profiles are a relatively new thing. While there is still some work to be done in how to manage dependencies and deal with more complex inheritance, inheriting a profile is now possible and in many cases recommended. One example is Acquia's Lightning distribution. Lightning does a good job highlighting the wheels you should not be re-inventing, and it also demonstrates profile inheritance in practice: the well-known OpenEDU distribution is a sub-profile of Lightning.

Acquia's article about sub-profiles covers a helpful list of questions to start with, such as: Does your new Drupal 8 site need media support? Does it need layout support? As the project develops and matures, are you ready to support the changes that will happen in Drupal core with media, layout, or anything else? As of version 8.3, things like media and layout were only stable in contrib, and in 8.4 they were only partially moved into core. In 8.5 and 8.6, workflow, media and layout are planned to move into core as stable, which will considerably change your site's architecture and implementation. With a sub-profile, the specifications for which modules to use and how to use them are inherited from the parent profile, and are not the responsibility of the sub-profile.

Build tools

The next thing to consider is how, or who, is actually building your profile. If you're not thinking about SaaS (if you are, see Dries's article about making distributions commercially interesting), then you're really targeting developers. Since Drupal 8 development is now entirely Composer-based, you might want to check out what profiles are already doing with Composer. Here are some examples of composer.json configurations as well as open source tools that you can integrate with Composer:

  • Composer scripts - https://github.com/acquia/lightning/blob/8.x-3.x/composer.json - script hooks (like post-install, pre-install), auto class loading, dependency management, etc.; see the sketch after this list
  • Robo task runner - https://github.com/consolidation/Robo - defines tasks in an auto-loaded PHP class RoboFile
  • Phing build tool - https://www.phing.info - define tasks with a build.xml
  • Testing - PHPUnit test helper methods and classes, as well as addon Behat features and commands
  • Starter content - this currently is just a hook_install script that installs a view with a header, but worth trying out and building on
  • TravisCI integration - with only a few modifications to an existing .travis.yml file you can set up continuous integration for your profile. The existing configuration already handles setting up your server, installing composer and configuring PHP, installing a local browser for testing, a headless browser for testing (see composer hooks), installing and re-installing Drupal (see robo), running tests (see behat, phpunit), and development tools for moving files around in your local development environment.
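
As a quick illustration of the composer scripts item above, script hooks can hand work off to a task runner like Robo. A minimal sketch, assuming a RoboFile with hypothetical setup and update tasks:

{
  "scripts": {
    "post-install-cmd": [
      "./vendor/bin/robo setup"
    ],
    "post-update-cmd": [
      "./vendor/bin/robo update"
    ]
  }
}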

Using a combination of sub-profiles and these build tools has made starting my new Drupal projects more efficient. There is a lot of helpful material out there to learn from, contribute to, and build on. Hopefully this gives you a great start to focusing your new Drupal projects as well.

Feb 06 2018
Feb 06

I have two content types: Art and Artist. The first content type, Art, has an entity reference field to the second content type, Artist. I have a view named “Art” which shows all Art content with an exposed “Artist” filter that lets a user pare down their results. For example, a user might use the “Artist” filter to only show art by Cleon Peterson. By default this exposed filter will be rendered by Views as an empty text input, which is pretty much entirely useless! Users may not know of Cleon Peterson and wouldn’t know to search for him.

An empty text input provides a confounding UX

A much better solution would be to show the options available for this filter as a select list.

A select element provides a useful UX

This is exactly the problem I was faced with while working on The Octopus Initiative, a soon-to-launch Drupal 8 project by the Museum of Contemporary Art Denver that gives citizens of Denver the opportunity to take art from the museum home with them.

The solution

Let’s jump into the code.  You’ll need to either create a new module or add the code below from mca_artwork.module to the .module file of an existing one.  I created a new module called “MCA Artwork” and placed it in my project’s modules/custom directory.  My file structure looks like this:

- mca_artwork
    - mca_artwork.info.yml
    - mca_artwork.module

Here’s my mca_artwork.info.yml:

name: MCA Artwork
type: module
description: Customizes Artwork Display
core: 8.x
package: Custom

And here’s the mca_artwork.module file, where the magic happens:

<?php
 
/**
 * @file
 * Contains mca_artwork.module.
 */
 
use Drupal\Core\Form\FormStateInterface;
 
/**
 * Implements hook_form_FORM_ID_alter().
 *
 * Alters the artist options on artwork pages.
 */
function mca_artwork_form_views_exposed_form_alter(&$form, FormStateInterface $form_state, $form_id) {
 
  // If this is not the view we're looking for, move on.
  if ($form['#id'] != 'views-exposed-form-the-art-block-1') {
    return;
  }
 
  // Query published artist nodes, sorted by title.
  $storage = \Drupal::entityTypeManager()->getStorage('node');
  $nids = $storage->getQuery()
    ->condition('type', 'artist')
    ->condition('status', 1)
    ->sort('title')
    ->execute();
 
  // If there are no nodes, move on.
  if (!$nids) {
    return;
  }
 
  // Start building out the options for our select list.
  $options = [];
  $nodes = $storage->loadMultiple($nids);
 
  // Push titles into the select list, keyed by node ID.
  foreach ($nodes as $node) {
    $options[$node->id()] = $node->getTitle();
  }
 
  // Turn the existing text input into a single-value select element.
  $artist_field = 'artist';
  $form[$artist_field]['#type'] = 'select';
  $form[$artist_field]['#multiple'] = FALSE;
 
  // Specify the empty option for our select list.
  $form[$artist_field]['#empty_option'] = t('Artist');
 
  // Add the $options from above to our select list.
  $form[$artist_field]['#options'] = $options;
  unset($form[$artist_field]['#size']);
}

If you read through the comments in the above code, you’ll see we are essentially doing the following things:

  1. We load all published artist nodes, sorted by name
  2. We create an array of Artist names keyed by node id. These will be our select list options.
  3. We change the existing artist form input to a select list and populate it with our new options array.

It turns out this is a common UX need in Drupal 8 Views. My comrade at Aten, John Ferris, also ran across this problem for a recently-launched Drupal 8 project he worked on for the Center for Court Innovation, a non-profit seeking to create positive reforms in the criminal justice system. The code snippet for The Octopus Initiative was largely adapted from his work on the Center for Court Innovation.

For the Center for Court Innovation site, the Chosen JS library was added to provide an interface for searching through a larger list of items.

A select element with Chosen JS added

In summary, the module I created for The Octopus Initiative provides a far better UX than what Drupal Views offers out-of-the-box. If you have a larger number of items in your select list, you may consider adding something like Chosen JS to make it easier to sort through, as was done for the Center for Court Innovation. Whatever you do, don't leave your users stranded with an empty text element!

Jan 23 2018
Jan 23

In Drupal 7, the Address Field module gave developers an easy way to collect complex address information. You could simply add the field to your content type and configure which countries you support, along with what parts of an address are needed. However, this ease was limited to fieldable entities. If you needed to collect address information somewhere that wasn’t a fieldable entity, you had a lot more work in store for you. Chances are good that the end result would be as few text fields as possible, no validation, and support for only a single country. If you were feeling ambitious, maybe you provided a select list with the states or provinces via a hardcoded array.

During my most recent Drupal 8 project I wanted to collect structured address information outside the context of an entity. Specifically, I wanted to add a section for address and phone number to the Basic Site Settings configuration page. As it turns out, the same functionality you get on entities is now also available to the Form API.

Address Field’s port to Drupal 8 came in the form of a whole new module, the Address module. With it comes a new address form element. Let’s use that to add a “Site Address” field to the Basic Settings. First we’ll implement hook_form_FORM_ID_alter() in a custom module’s .module file:

use Drupal\Core\Form\FormStateInterface;
 
function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Overrides go here...
}

Don’t forget to add use Drupal\Core\Form\FormStateInterface; at the top of your file. Next, we’ll add a details group and a fieldset for the address components to go into:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Create our contact information section.
  $form['site_location'] = [
    '#type' => 'details',
    '#title' => t('Site Location'),
    '#open' => TRUE,
  ];
 
  $form['site_location']['address'] = [
    '#type' => 'fieldset',
    '#title' => t('Address'),
  ];
}

Once the fieldset is in place, we can go ahead and add the address components. To do that you’ll first need to install the Address module and its dependencies. You’ll also need to add use CommerceGuys\Addressing\AddressFormat\AddressField; at the top of the file as we’ll need some of the constants defined there later.

use Drupal\Core\Form\FormStateInterface;
use CommerceGuys\Addressing\AddressFormat\AddressField;
 
function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …
 
  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => ['country_code' => 'US'],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];
}

There are a few things we’re doing here worth going over. First we set '#type' => 'address', which the Address module creates for us. Next we set a #default_value for country_code to US. That way the United States-specific field config is displayed when the page loads.

The #used_fields key allows us to configure which address information we want to collect. This is done by passing an array of constants as defined in the AddressField class. The full list of options is:

AddressField::ADMINISTRATIVE_AREA
AddressField::LOCALITY
AddressField::DEPENDENT_LOCALITY
AddressField::POSTAL_CODE
AddressField::SORTING_CODE
AddressField::ADDRESS_LINE1
AddressField::ADDRESS_LINE2
AddressField::ORGANIZATION
AddressField::GIVEN_NAME
AddressField::ADDITIONAL_NAME
AddressField::FAMILY_NAME

Without any configuration, a full address field looks like this when displaying addresses for the United States.

For our example above, we only needed the street address (ADDRESS_LINE1 and ADDRESS_LINE2), city (LOCALITY), state (ADMINISTRATIVE_AREA), and zip code (POSTAL_CODE).

Lastly, we define which countries we will be supporting. This is done by passing an array of country codes into the #available_countries key. For our example we only need addresses from the United States, so that’s the only value we pass in.

The last step in our process is saving the information to the Basic Site Settings config file. First we need to add a new submit handler to the form. At the end of our hook, let’s add this:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …
 
  // … address field code …
 
  // Add a custom submit handler for our new values.
  $form['#submit'][] = 'MYMODULE_site_address_submit';
}

Now we’ll create the handler:

/**
 * Custom submit handler for our address settings.
 */
function MYMODULE_site_address_submit($form, FormStateInterface $form_state) {
  \Drupal::configFactory()->getEditable('system.site')
    ->set('address', $form_state->getValue('site_address'))
    ->save();
}

This loads our site_address field from the submitted values in $form_state, and saves it to the system.site config. The exported system.site.yml file should now look something like:

name: 'My Awesome Site'
mail: [email protected]
slogan: ''
page:
 403: ''
 404: ''
 front: /user/login
admin_compact_mode: false
weight_select_max: 100
langcode: en
default_langcode: en
address:
 country_code: US
 langcode: ''
 address_line1: '123 W Elm St.'
 address_line2: ''
 locality: Denver
 administrative_area: CO
 postal_code: '80266'
 given_name: null
 additional_name: null
 family_name: null
 organization: null
 sorting_code: null
 dependent_locality: null

After that, we need to make sure our field will use the saved address as the #default_value. Back in our hook, let’s update that key with the following:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …
 
  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => \Drupal::config('system.site')->get('address') ?? [
      'country_code' => 'US',
    ],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];
 
  // … custom submit handler ...
}

Using PHP 7’s null coalesce operator, we either set the default to the saved values or to a sensible fallback if nothing has been saved yet. Putting this all together, our module file should now look like this:

<?php
 
/**
 * @file
 * Main module file.
 */
 
use Drupal\Core\Form\FormStateInterface;
use CommerceGuys\Addressing\AddressFormat\AddressField;
 
/**
 * Implements hook_form_FORM_ID_alter().
 */
function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Create our contact information section.
  $form['site_location'] = [
    '#type' => 'details',
    '#title' => t('Site Location'),
    '#open' => TRUE,
  ];
 
  $form['site_location']['address'] = [
    '#type' => 'fieldset',
    '#title' => t('Address'),
  ];
 
  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => \Drupal::config('system.site')->get('address') ?? [
      'country_code' => 'US',
    ],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];
 
  // Add a custom submit handler for our new values.
  $form['#submit'][] = 'MYMODULE_site_address_submit';
}
 
/**
 * Custom submit handler for our address settings.
 */
function MYMODULE_site_address_submit($form, FormStateInterface $form_state) {
  \Drupal::configFactory()->getEditable('system.site')
    ->set('address', $form_state->getValue('site_address'))
    ->save();
}

Lastly we should do some house cleaning in case our module gets uninstalled for any reason. In the same directory as the MYMODULE.module file, let’s add a MYMODULE.install file with the following code:

<?php
 
/**
 * Implements hook_uninstall().
 */
function MYMODULE_uninstall() {
  // Delete the custom address config values.
  \Drupal::configFactory()->getEditable('system.site')
    ->clear('address')
    ->save();
}

That’s it! Now we have a way to provide location information to the global site configuration. Using that data, I’ll be able to display this information elsewhere as text or as a Google Map. Being able to use the same features that Address field types have, I can leverage other modules that display address information or build my own displays, because I now have reliably structured data to work with.

Nov 09 2017
Nov 09

Planning for Drupal 9 and Beyond

Justin’s recent post about the product approach and Drupal's new release cycle got me thinking about what upgrading to Drupal 9 will really look like from a more technical standpoint. There's already lots of information out there explaining the new release cycle. I think there are some misconceptions about what it means for Drupal projects though, so let’s take a little look under the hood.

Background

To understand what the process of updating to Drupal 9 might look like, you'll need to know a few background terms. If you already know what "semver" and "deprecation" mean for a software project, you can skip ahead to "Preparing for Drupal 9".

Drupal now follows semantic versioning. Semantic versioning—semver for short—is a format for assigning numbers to releases. This number signals a promise about the compatibility and/or stability of a release. A semver tag has three parts: a major version number, a minor version number and a patch version number, all separated by periods. For example, a recent version of Drupal was tagged 8.4.0. For this release, the major version number is 8, the minor version number is 4, and the patch number is 0. A common misconception is to think that after 8.0.9 one must have 8.1.0, but it’s perfectly acceptable to have a release number like 8.0.107.

What do major, minor and patch mean? We'll look at each from least to most significant.

A patch release signifies a fix.

They are usually composed of small, isolated fixes. When Drupal is on version 8.4.0 and releases version 8.4.1, this indicates that it is a patch release. It means, "nothing in your code needs to change, we just fixed some bugs". These might be security fixes, so you should update to these releases as soon as possible.

A minor release signifies new features and deprecations.

These are my favorite. They're the ones filled with all the sunshine and rainbows. A minor release adds new features and code to Drupal. This might mean an experimental module becomes stable as the Workflow module did in 8.4.0 or—my personal favorite—a new experimental module, like JSON API, might be added to Core. A minor release is an opportunity for Drupal's maintainers to clean things up, to keep Drupal fresh and to ensure Drupal keeps up with the needs of modern applications. It's also an opportunity to deprecate parts of Drupal Core. A deprecation is when a part of Drupal Core gets moved to the graveyard. It will be maintained for security fixes and bugs, but you shouldn't use that part of Drupal any more. It's usually an indication that there are new, better APIs or best-practices you should follow. A minor release says, "we've made things better, we've cleaned stuff up, and we didn't break your stuff yet."

The graveyard of deprecated APIs: many dilapidated planes in a desert.

A major release signifies incompatible changes.

They can be a cause for celebration or an ominous cloud on the horizon. They're a critical event in Drupal's lifecycle. A major release is a signal that says "Warning: Your Stuff Might Break!" In some software, it might mean you need to rebuild your project. In Drupal 8 and beyond, thankfully, this shouldn't be the case. The maintainers have made a promise that says, "if you're not using deprecated code, you can update without rebuilding." In the past, like from Drupal 6 to Drupal 7, that wasn't the case and things definitely broke.

Preparing for Drupal 9

So, you know what a deprecation is, and what a major version release means. You know that the promise of Drupal's new release cycle is "if you're not using deprecated code, you can update without breaking things." But did you catch the caveat? If you're not using deprecated code. It's here that I believe the most misconceptions lie. I believe that many have confused this promise to mean that projects can be updated to Drupal 9 without a hitch. That the upgrade will be relatively routine.

The truth is that it's really up to you to make it routine and not something to fear.

How you approach your Drupal software project means that this major event in Drupal's lifecycle can be either a midlife crisis or a coming-of-age.

I said earlier that all the sunshine and rainbows are in the minor version releases. That's because you get all the goodies for free. Underneath the hood though, you need to be aware of what's being deprecated. Every time something is deprecated, a little technical debt is added to your project. You should pay that debt as soon as possible. A deprecation is a warning that the code underneath is going to disappear in Drupal 9. It's a warning to module maintainers and project developers that they need to update code that relies upon that newly deprecated API.

This reality is another point in the product approach's favor, as we alluded to in our earlier post. By making an ongoing investment in your project, you can address these deprecations as they happen so that when you're ready to upgrade to Drupal 9, it will be as painless as any other update.

The Prettiest Rainbows Come After a Little Rain

A select few sites will be able to proudly announce their upgrades to Drupal 9 the day it's released or soon after. Many won’t be able to move so quickly.

Knowing that Drupal 9 will break things relying on deprecated code and knowing that many modules won’t have been updated ahead of the release (such is life in open source), many sites will have to be patient. How patient depends on whether they’ve made an ongoing investment in their platform and kept up with deprecations in their custom code. They’ll be able to upgrade even faster if they have been model citizens in the Drupal community by reporting—and hopefully fixing—contrib dependencies on deprecated code too.

So, is this just like every other major Drupal release? Absolutely not! Gone forever are the days of completely rebuilding your site with every major update. Gone are the days of needing to migrate your own content to your own site. What this means is that you’ll just have to sit inside and wait or go play in the rain, fixing some bugs and updating some outdated dependencies, before you get to enjoy the Drupal 9 rainbow.

Here are some helpful links to stay up to date with deprecations:

Oct 31 2017
Oct 31

Here at Aten, we do a lot of work with Drupal, mostly on large redesign projects for nonprofits and universities. We’ve been in the business for a while now (since 2000), and one thing we’ve been talking about for years is the predicament of “buy-maintain-replace” release cycles. Specifically, website redesigns often fall prey to a problematic purchasing cycle that directly counteracts strategic goals.

It goes like this: first there’s a capital campaign, then a big spike in funding to do a redesign project, followed by a modest support budget for a few years. During support, things everyone would like to change start to pile up, often beginning as a “backlog” or “wish list,” inevitably becoming a “gripe list” for all the things that are slowly and surely making your website obsolete. Time passes and the gripe list grows. We hear things like, “Our current website is horribly outdated; it isn’t even responsive.” Rather than invest in old technology and continue addressing the growing list of issues, tasks are pushed off for a future redesign. Eventually, there is a new capital campaign. The cycle starts over, rinse and repeat.

If you’re coming from a product background and you’re programmed to value ongoing development and continuous innovation, this already sounds bad. But if you’re from traditional IT management, you might think of redesigns more like purchasing any other technology solution. You buy it, it gets old, you replace it – often with some form of ongoing support between major expenditures. The smaller the support requirement, the more successful the project. Likewise the longer you can go without upgrading, the more successful the project.

The trouble is, your website, app, etc. doesn’t really work that way. Your website shouldn’t just be checking boxes on functionality requirements the way your phone system or workstations do; rather, your website is the public face and voice of your organization. It needs to keep up and tell your story clearly, every day. It needs to evolve as quickly as your organization does. And that requires ongoing investment. More than that, it requires a fundamental shift in the way decision makers think about planning digital projects.

There’s already a ton of fantastic material about the need to adopt a product approach over the more traditional project mindset. One of my favorite posts on the subject was written back in 2015 by the team at Greenpeace titled, “Product teams: The next wave of digital for NGOs?” I especially love this infographic. The illustration is spot on: first, a huge spike in money and time with a brief climax at launch, followed by diminished investment during a prolonged support period with equally diminished satisfaction, all to be repeated over and over again.

Interestingly, this problematic “buy-maintain-replace” cycle actually aligned closely with the release cycle for previous versions of Drupal. For years, timing for the “buy” stage in the cycle aligned surprisingly well with the stable release for major Drupal versions. First, you built a website on Drupal 4. Support phase ensued. Over a few years wish lists turned to gripe lists. Momentum grew behind doing the next major redesign, right on time for the stable release of Drupal 5. Rinse. Drupal 6. Repeat. Drupal 7.

While we were talking more and more about a product approach, the technology actually lent itself to the project mindset. Quick example: retainers are a big part of our business at Aten, and have been important for helping us support clients in the product approach. With retainers, clients invest consistently in their digital platforms over the long term. We identify strategic priorities together, maintain a backlog, organize sprints and deploy iterative releases. But with past versions of Drupal, an organization still needed to invest heavily for major release upgrades. At some point in the cycle, there were diminishing returns associated with ongoing investment in an outdated system. We started prioritizing tasks based on the fact that a large redesign was looming. We said things like, “Let’s just wait on Drupal 7 for that.” In many ways the underlying platform was promoting a “buy-maintain-replace” development cycle. The product approach was still valuable, but hampered by the inevitable obsolescence of the technology.

Enter Drupal 8.

With Drupal 8, there’s a lot to be excited about: configuration management, component-based theming, improved performance, content moderation, modernized development tools, support for API-first architecture, and the list goes on. But I want to focus for a minute on Drupal’s release cycle.

Drupal’s vastly improved upgrade path is a huge win for the platform and a major reason organizations should consider migrating to Drupal 8 sooner rather than later.

With past versions of Drupal, major release upgrades (i.e. D6 to D7) required a significant development effort and usually constituted a major technology project. As I’ve touched on already, upgrades would typically coincide with a complete redesign (again, buy-maintain-replace).

With Drupal 8 the release cycle is changing. The short, non-technical version is this: Drupal 8 will eventually become Drupal 9. If you stay up-to-date with underlying changes to the platform as it evolves, upgrading from 8 to 9 should be relatively simple. It’s just another release in your ongoing development cycle.

With Drupal 8, an organization can invest consistently in its digital platform without the problem of diminishing returns. As long as you adopt the product approach, your platform won’t become outdated. And that’s fantastic, because the product approach is what we’re all going for – right?

Resources and Related Reading:

Oct 30 2017
Oct 30

Accessibility should be part of the criteria for picking a CMS. Fortunately, many CMSs out there are getting that right. Building on the information from Part 1 and Part 2 of this series, I’m going to focus on leveraging Drupal 8’s accessibility features to enhance any user’s experience.

Drupal 8 Core

Drupal 8 makes it much easier to add accessibility features than previous versions. Some of the most significant improvements for accessibility within Drupal 8 core are:

  • Core code uses semantic HTML5 elements and adds ARIA landmarks, live regions, roles, and properties to improve the experience for screen readers.
  • Creating aural alerts for users who use audio features to navigate and understand a website is easy using Drupal.announce().
  • Users have more control navigating through content with a keyboard using the new Tabbing Manager.
  • Hidden, invisible or on-focus options for all labels have been included so screen readers can give more context to content – without impacting design decisions for traditional screens.
  • Fieldsets have been added for radios and checkboxes in the Form API.
  • Alt text is now required for all image fields by default.
  • The default Bartik Theme is now underlining links so that it is much easier for people to identify links on the page.
  • Drupal 8 now includes an optional module that displays form errors inline, making it easy to associate errors with their inputs when filling in a web form.

Theming

Out of the box, Drupal core is a great starting point for creating an accessible website. Usability issues tend to arise when designers and developers begin the theming process. In order to achieve a desired design or function, they inadvertently remove or alter a lot of Drupal’s accessible defaults. With knowledge gained from the previous posts and the following tips, you will be on your way to theming a more accessible site for everyone!

Links

Make sure :focus and :active pseudo-class styles are always included for users navigating by keyboard. This helps the user visually understand where they currently are on a page. This can be the default browser styling or something more brand specific.

You may include “read more” links on teasers, but make sure there is visually hidden text indicating what the user will be “reading more” about for aural users.

Display None vs Visually Hidden

Drupal 8 core now has this option for labels when creating content types and forms, but it also includes simple class names to hide content properly. A great example of this usage is turning a “read more” link into something more descriptive for screen readers.

  <a href="{{url}}">{{'Read more'|t}} <span class="visually-hidden"> {{'about'|t}} {{label}}</span></a>

Anchor and Skip Links

Providing a way to skip links and navigation on a page can improve the usability of your site for a keyboard or aural user. This is a great addition to your site and easy to implement. As mentioned in the previous post, screen readers have the ability to skip and search your site by sections, headings, links, etc. Adding another way to skip various types of content gives the user an easier way of flowing through and skipping heavy or repetitive information on a page. Just remember that this should be visually hidden and not display: none;!
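
For instance, a minimal skip link modeled on Drupal core's own markup (the target ID depends on your page template):

  <a href="#main-content" class="visually-hidden focusable skip-link">
    {{ 'Skip to main content'|t }}
  </a>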

Forms

Always include a button for users to submit their form information. Exposed forms within Drupal have the option for an “auto submit” setting, which automatically submits the form once an element is interacted with or changed. Having one action which invokes two outcomes can cause major confusion for users navigating with assistive technologies.

For example: a user chooses an item within a select dropdown, and the form submits this change, which modifies the content on the page. All of this happens just by selecting an item within a dropdown. Ideally, the user should be able to choose the item in the dropdown, and then press submit to search. Each item should only have one action.

Be careful that you are not reducing the accessibility of forms when using hook_form_alter and other techniques to modify forms. Following the basic form guidelines while implementing forms through this technique will ensure that your forms work well for everyone.

Final Thoughts

We have seen great improvements in Drupal’s core code over the past few years to accommodate everyone. Drupal 8 has a lot of accessibility features built in and, as developers, we need to take advantage of those features or, at the very least, not remove them.

Oct 10 2017
Oct 10

Drupal 8 advertised many new, promising features after its release. One of the exciting new changes was the addition of form modes. Form modes promised to let you manage the content entry side of your site just as you often managed content display with view modes. This change seemed like it would eliminate the need for much of the custom and repetitive code I often needed to write inside a hook_form_alter.

Over time, I've realized that form modes aren't everything I had hoped they would be. While it's easy to create new form modes, it's literally impossible to use them without custom code or contributed modules. Drupal simply doesn't have a way to know when to use one form mode over another. Should it be based on role? Permissions? A field on the node? Content moderation state? There are contributed modules for most if not all of these, but nothing out-of-the-box.

This forced me to think about why I needed a form mode in the first place. Almost always, the answer was to disable or hide a field from a user because that user shouldn't be allowed to change that field. The same was also often true of my view modes (only to a lesser extent). I realized that this particular problem is not one of user experience, but of access control.

Drupal 8 has hook_entity_field_access(). This hook is called for every field for the specified entity type when the entity is viewed or when its form is shown. When you deny access to a field, either for viewing or editing, that field will not be shown to the user. In any scenario. This should be your preferred method for hiding fields that certain users should not be able to access.

Using field access over form and view modes to hide fields when a user should not be allowed to see or edit them is the secure and "Drupal way" to do things. This prevents mistakes in configuration, which might accidentally leak field information via teasers, searches, and Views. It also future-proofs your site. If you ever turn on REST or JSON API, or add a new form or view mode down the line, you can never accidentally expose a field that needs to be kept private.

Best of all, using the field access hook is much easier to implement than all the hoops you'll have to jump through to get the right form modes displayed at the right times.

First, make a custom module in the standard way. Create a .module file and create the following function:

<?php
 
use Drupal\Core\Access\AccessResult;
use Drupal\Core\Field\FieldDefinitionInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\Core\Field\FieldItemListInterface;
 
/**
 * Implements hook_entity_field_access().
 */
function yourmodule_entity_field_access($operation, FieldDefinitionInterface $field_definition, AccountInterface $account, FieldItemListInterface $items = NULL) {
}

From this hook, you should always return an AccessResult. By default, you should simply return a neutral access result. That is, your hook is not concerned with actually allowing or preventing access yet. Add the following to your function.

<?php
 
function yourmodule_entity_field_access($operation, FieldDefinitionInterface $field_definition, AccountInterface $account, FieldItemListInterface $items = NULL) {
  $result = AccessResult::neutral();
  if ($field_definition->getName() == 'field_we_care_about') {
    if (/* a condition we'll write later... */) {
      $result = AccessResult::forbidden();
    }
  }
  return $result;
}

The above code will deny access when our still unwritten condition is true; in every other case, we're just saying "we don't care".

There's an infinite number of scenarios in which you might want to deny access, but let's say that we want to make a field only editable by an administrator. We would add the following:

<?php
 
function yourmodule_entity_field_access($operation, FieldDefinitionInterface $field_definition, AccountInterface $account, FieldItemListInterface $items = NULL) {
  $result = AccessResult::neutral();
  if ($field_definition->getName() == 'field_we_care_about') {
    // Only users with the administrator role may update this field.
    if ($operation == 'update' && !in_array('administrator', $account->getRoles())) {
      $result = AccessResult::forbidden();
    }
  }
  return $result->addCacheContexts(['user.roles:administrator']);
}

Now, for every user without the administrator role that attempts to update field_we_care_about, the field will not be accessible. This works for more than just forms. For example, if we had the REST module installed, this would block the user from updating the field in that way as well.

The last part to note is that we added a cache context to our AccessResult. This ensures that our access decision is only relevant when the current user does or does not have the 'administrator' role. It's important to understand that we added the cache context both when we did and when we did not deny access. If we had only added the context when denying access, and a user with the 'administrator' role happened to be the first person to attempt to access the field, that result would be cached for all users no matter what.

Sep 15 2017
Sep 15

If you ever had to overwrite a module’s css file or a core javascript library in Drupal 7, you likely remember the experience. And not because it was a glorious encounter that left you teary-eyed at the sheer beauty of its ease and simplicity.

Along with Drupal 8 came the Libraries API and a new standard of adding CSS and JS assets and managing libraries. In true Drupal 8 fashion, the new system uses YAML files to grant developers flexibility and control over their CSS and JS assets. This gives you the ability to overwrite core libraries, add libraries directly to templates, specify dependencies and more.

There are many pros to this approach, the most important being that you can add these assets conditionally. The FOAD technique and other hackish ways to overwrite core CSS files or javascript libraries are long gone. This is extraordinarily good news!

However, the simplicity of the drupal_add_js() and drupal_add_css() functions has also disappeared, and now you have to navigate a potentially overwhelming and confusing nest of YAML files just to add some custom CSS or JavaScript to a page. (No, you can’t just add CSS via the theme’s .info file). In this post, I’ll guide you through the new Libraries API in Drupal 8 so you can nimbly place assets like you’ve always dreamed.

Prerequisites

If you haven’t yet experienced the glory that is the yml file, you’ll want to get familiar. Read this great introduction to YAML then come back to this post.

Creating Libraries

The first step is to create your libraries.yml file at your-theme-name.libraries.yml or your-module-name.libraries.yml.

Here’s an example of how you define a Drupal 8 library.
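
A minimal definition might look like this, where the library name and file paths are illustrative:

mylibrary:
  version: 1.x
  css:
    theme:
      css/mylibrary.css: {}
  js:
    js/mylibrary.js: {}
  dependencies:
    - core/jquery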

A few things to note:

  • The path to CSS and JS files are relative to the theme or module directory that contains the libraries.yml file. We’ll cover that in more depth shortly.
  • The dependencies, in this case jQuery, are any other libraries your library depends on. They will automatically be attached wherever you attach your library and will load before yours.
  • Multiple libraries can be defined in one libraries.yml file, so each library in the file must have a unique name. However, libraries are namespaced as mytheme/mylibrary or mymodule/mylibrary, so a libraries.yml file in your theme and a libraries.yml file in your module can contain libraries with the same name.
  • The CSS files are organized according to the SMACSS categories. This gives some control over the order in which assets are added to your page. The CSS files will be added according to their category, so any CSS file in the theme category will be added last, regardless of whether it comes from the base theme or the active child theme. JavaScript files are not organized by SMACSS category.

Now that you know the library basics, it’s time to up the ante.

Properties

You can add additional properties inside the curly braces that allow you to define where the JavaScript is included, additional ordering of files, browser properties and more. We won’t cover all the possibilities just now, but here are some examples:
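
The per-file options shown here (media, minified, type) are documented library options; the file names are illustrative:

mylibrary:
  css:
    theme:
      css/print.css: { media: print }
  js:
    js/mylibrary.js: { minified: true }
    https://example.com/external.js: { type: external }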

Attaching Libraries

The simplest way to attach libraries globally is through the your-theme-name.info.yml file. Instead of adding stylesheets and scripts ala Drupal 7, you now attach libraries like so:
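
For example, in your-theme-name.info.yml, with an illustrative library name:

libraries:
  - mytheme/mylibrary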

Global is great and all, but perhaps the coolest libraries upgrade in Drupal 8 is the ability to attach libraries via twig templates. For example, if you have a search form block, you can define the css and js for the block, add it to a library, and add that library to the search form block twig template. Those assets specific to the search form block will only be included when the block is rendered.
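
In the search form block's twig template, the attachment is a single line (library name illustrative):

{{ attach_library('mytheme/search-form') }}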

And yes, you can also attach libraries with our ol’ pal PHP.
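
A sketch of doing so from a preprocess function, where the plugin ID check and library name are illustrative:

function mytheme_preprocess_block(&$variables) {
  // Attach the library only when the search form block is rendered.
  if ($variables['plugin_id'] == 'search_form_block') {
    $variables['#attached']['library'][] = 'mytheme/search-form';
  }
}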

Extending and Overriding Libraries

Another boon is the ability to extend and override libraries. These extensions and overrides take place in the your-theme-name.info.yml file.

As you might expect, libraries-extend respects the conditions of the library being extended. Maybe you have a forum module that comes with CSS and JS out-of-the-box. If you want to tweak the styling, you can extend the forum module's library and add your own CSS file.
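
In your-theme-name.info.yml, that might look like this, assuming the hypothetical forum module defines a forum/forum library:

libraries-extend:
  forum/forum:
    - mytheme/forum-styling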

For overrides, you can remove or override a specific file or an entire library.
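
A sketch of each case, following the patterns from the Drupal 8 theming guide (file paths illustrative):

libraries-override:
  # Replace a whole library with your own.
  core/drupal.vertical-tabs: mytheme/vertical-tabs
  # Replace a single file within a library.
  classy/base:
    css:
      component:
        css/components/menu.css: css/my-menu.css
  # Remove a library entirely.
  core/modernizr: false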

Final Considerations

Before we wrap up, I'll send you on your way with a couple final considerations and gotcha's that you need to be aware of.

  • The Libraries API module is still relevant in Drupal 8, and should be used to handle external libraries that don't ship with Drupal core, a module, or a theme. It also allows libraries to be used by multiple modules or sites.
  • If a file is already linked to within a library it won't get added a second time.
  • Module CSS files are grouped before theme CSS files, so a module's css file will always be loaded before a theme's css file.
  • Refer to the Drupal 8 Theming Guide for more info.

Thanks for reading. Now go forth and use your asset placing powers for good, not evil.

Sep 06 2017
Sep 06

Sometimes you need to pull in content or data on an ongoing basis from a third-party product or website. Maybe you want to pull in a list of books from Amazon, or show some products from your Shopify store. You may need all the flexibility of nodes in Drupal, but you don’t want to copy content manually, and you don’t want to be forced to move away from those other systems that are already serving your needs.

Here’s a recipe for synchronizing content from outside websites or products – in our case, Eventbrite – using the Migrate module.

But First, A Few Alternatives

In our specific project, we considered a few alternatives before landing on Migrate. We could have reimplemented Eventbrite's functionality in Drupal. However, we didn’t want to abandon the product (Eventbrite) that was already meeting our client’s needs perfectly. We just needed to pull in the content itself, without having to manage it in multiple places. We also considered a client-side application like Vue.js or React to simply re-present their Eventbrite content on the site in a seamless manner. But with that approach, we would lose the flexibility of storing events as nodes and would need to reinvent many of the features which Drupal gives us for free, like clean URLs, Views, Search, fine-grained caching, and more.

What we really needed was a continuous content synchronization between Eventbrite and Drupal that would leverage Eventbrite for content entry and event management and Drupal for a seamless integration with the rest of their site. But, how to do it?

Enter the Migrate Module

But Migrate is just for moving old sites into new ones, right? The reality is Migrate comes with a plethora of excellent built-in plugins that make importing content a breeze. Moreover, it has all the necessary concepts to run migrations on a schedule without importing anything that’s not new or updated. While it’s often overlooked, Migrate is a perfect tool for synchronization of content as much as it is a perfect tool for one-time migration of content.

In Drupal 7, the Feeds module was often used for these kinds of tasks. Feeds isn’t as far along in Drupal 8, and Migrate is now a much more flexible platform on which to build these kinds of integrations.

Important Concepts

In order to understand how to use Migrate as a content synchronization tool, you’ll first need to understand a few important concepts about how Migrate is built. Migrate module makes liberal use of Drupal 8 plugins, making it incredibly flexible, but also a little hard to understand at first. Especially when coming directly from Drupal 7.

Migrations are about taking arbitrary data from one bucket of content and funneling it into a new Drupal-based bucket of content. In Migrate-speak, the first bucket of data is your data "source." Your Drupal site is then the "destination."

Between those two buckets – your source and your destination – you may need to manipulate or alter the data to make it compatible with your Drupal content types and fields. In Migrate, this is called a "processor." For example, you may need to transform a Unix timestamp from your source data into a formatted date, or make taxonomy terms out of a list of strings. Migrate lets you describe one or more "processing pipelines" for each field of the data you'll be importing.

These are the three key components we'll be working with:

  1. "source" plugins (to fetch the data to import)
  2. "process" plugins (to transform that data into something easier to use)
  3. "destination" plugins (to create our Drupal nodes).

The "Source" Plugin

Migrate already comes with a few source plugins out-of-the-box. They can plug into generic data sources like a legacy SQL database, CSV, XML, or JSON files. However, what we needed for our client was to integrate with a somewhat non-standard JSON-based API. For that, you’ll need to write a custom source plugin.

Q: How? A: SourcePluginBase and MigrateSourceInterface.

When implementing a source plugin, you’ll need to extend from SourcePluginBase and implement all the methods required by MigrateSourceInterface.

SourcePluginBase does much of the heavy lifting for you, but there remains one essential method you must write yourself, and it is by far the most complicated step of this entire effort. You’ll need to implement the initializeIterator() method. This method must return something that implements PHP’s built-in \Iterator interface. In a custom migration, connecting to a custom API, you’ll need to write your own custom class which implements this interface. An iterator is an object which can be used in a foreach in place of a PHP array. In that respect, they’re very much the same. You can write:

foreach ($my_iterator as $key => $value) {
  // $my_iterator might as well be an array, because it behaves the same here.
}

That’s where the similarity ends. You can’t assign values to an iterator, and you can’t arbitrarily look up a key. You can only loop over the iterator from beginning to end.

In the context of the Migrate module, the iterator is what provides each result row, or each piece of content, to be imported into your Drupal site. In the context of our Eventbrite implementation, our iterator is what made requests to the Eventbrite API.

There are five methods which every class that implements \Iterator must have:

  1. current() - This returns the current row of data. Migrate expects this to return an array representing the data you’ll be importing. It should be raw and unprocessed. We can clean it up later.
  2. key() - This returns the current ID of the data. Migrate expects this to be the source ID of the row to be imported.
  3. next() - This advances the iterator one place. This will be called before current() is called. You should prepare your class to return the next row of data the next time current() is called. In the context of the Eventbrite API, this could mean advancing one item in the returned JSON response. Because Eventbrite’s API is paginated, it was in this method that, once we ran out of rows in the current page, we made a new HTTP request for the next page of JSON data and set up our class to return the next row.
  4. rewind() - This resets the Iterator so that it can be looped over anew. This clears out current data and sets up the next call to the current() method to return the first result row.
  5. valid() - This indicates when the iteration is complete, i.e. when there are no more rows to return. This method returns TRUE until you’ve returned every result. When you have nothing left to return after a call to next(), you should return FALSE to tell Migrate that there is nothing left to import.

I’m not going to go into the specifics of each method here; they are highly variable and entirely dependent on the source of your migration (is your third-party API JSON-based or XML-based, etc.?), but a bare-bones skeleton follows below. Plus, if you’re here for Eventbrite, I’ve already done all the hard work for you! I’ve made all the Eventbrite code I wrote public on Github.
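
Here is roughly what that skeleton might look like. This is a sketch, not the actual Eventbrite client: the class name is made up, and fetchPage() is a hypothetical stand-in for whatever HTTP request logic your API requires.

class ApiRowIterator implements \Iterator {

  protected $rows = [];
  protected $index = 0;
  protected $page = 1;

  public function rewind() {
    // Start over from the first page of results.
    $this->page = 1;
    $this->index = 0;
    $this->rows = $this->fetchPage($this->page);
  }

  public function current() {
    return $this->rows[$this->index];
  }

  public function key() {
    // The source ID of the current row.
    return $this->rows[$this->index]['id'];
  }

  public function next() {
    $this->index++;
    if (!isset($this->rows[$this->index])) {
      // We ran out of rows on this page; request the next one.
      $this->rows = $this->fetchPage(++$this->page);
      $this->index = 0;
    }
  }

  public function valid() {
    return isset($this->rows[$this->index]);
  }

  protected function fetchPage($page) {
    // Hypothetical: request one page of JSON from the API and return it
    // as an array of associative arrays; return [] when out of pages.
    return [];
  }

}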

Once you’ve built your iterator, the rest of it should be smooth sailing. You’ll still need to implement the remaining methods for MigrateSourceInterface, each of which is more extensively documented on Drupal.org.

  • fields() - A list of fields available for your source rows. These are usually the top-level keys of the array returned by your Iterator’s current() method
  • getIds() - This returns the uniquely identifying fields and some schema information for your source data. E.g. user and user_id from some arbitrary data source
  • __toString() - This is usually just a human-readable name for your Migration, something like, “My Custom Migration Source”

Once you have all this done, you’re ready to set up a migration YAML file and almost all your custom PHP is already written.

Much of the documentation about migrations that exists today tells you to install the Migrate Plus module at this point. Migrate Plus gives you some nice Drush commands and specifies how you should place and define your migrations. Honestly, I found it completely confusing and, for our use-case, entirely unnecessary. It’s a rabbit hole I wouldn’t go down. Migrate itself comes with everything we need.

To define a migration, i.e. the YAML which tells the Migrate module which plugins to use and how to map your source data into your destination content types, you’ll need to place a file in a custom module under a directory named migration_templates. For me, I named this file eventbrite.yml, but you may name it how you want. Just make sure that the id that you define in YAML matches the filename.

The five top-level keys you must define in this file are:

  1. id: The machine ID for the migration, matching your filename
  2. label: The human-readable name of the Migration, in my case, “Eventbrite”
  3. source: This is where we tell the Migrate module to use our custom source plugin, more on that below
  4. destination: This is the plugin that tells Migrate what kind of entity your content should be mapped into. Usually, this will be entity:node
  5. process: This tells migrate how to map source data values into fields in your destination content. We’ll discuss that below as well

The source key tells the Migrate module which plugin will provide the source data that it needs to migrate or import. In our case, it looked like this:

source:
  plugin: eventbrite

Where the eventbrite string must match the plugin id defined by an annotation on our custom MigrateSourcePlugin. Ours looked like this:

/**
 * @MigrateSource(
 *   id = "eventbrite"
 * )
 */
class EventbriteSource extends SourcePluginBase … omitted ...

The process key is the third and last component of our custom migration. Briefly, you use this section to map your source fields into your destination fields. As a simple example, if your source data has a key like “name,” you might map that to “title” for a node. Of all the Migrate documentation, this section on process plugins is by far the most well-documented and I referenced it extensively.
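
For instance, a process section that maps a source property straight onto the node title, plus a small two-step pipeline built from plugins that ship with Migrate, might look like this (the field and source names are illustrative):

process:
  # A straight mapping: the source "name" becomes the node title.
  title: name
  # A pipeline: trim whitespace, then fall back to a default value.
  field_summary:
    -
      plugin: callback
      callable: trim
      source: summary
    -
      plugin: default_value
      default_value: 'No summary provided'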

The biggest misunderstanding I’ve seen about the process section is how powerful “pipelines” and ProcessPlugins can be. Do not worry about cleaning up and processing your data in your custom iterator. Instead, do it with ProcessPlugins and pipelines.

The documentation page for how to write a ProcessPlugin is incredibly short. Thankfully, ProcessPlugins are just as easy to write. First, create a new class in a file named like: /src/Plugin/migrate/process/YourClassName.php. Your class should extend the ProcessPluginBase class. You only need to implement one method: transform().

The transform() method operates on each value, of each field, on a single result row. Thus, if your source data returns an array of strings for a field named “favorite_flavors” on a chef’s user profile, the transform method will be called once for each string in that array.

The idea is simple: the transform() method takes $value as its first argument, makes whatever changes it needs to, then returns the processed value. E.g., if you wanted to translate every occurrence of the word “umami” to a less pretentious word like “savory,” you would return the string “savory” every time $value was equal to “umami”.
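
Here is what that might look like as a complete plugin. The module namespace and plugin ID are made up for the example:

<?php

namespace Drupal\mymodule\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Replaces "umami" with "savory".
 *
 * @MigrateProcessPlugin(
 *   id = "depretentify"
 * )
 */
class Depretentify extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    return $value === 'umami' ? 'savory' : $value;
  }

}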

By composing different processors and understanding what already comes free with the Migrate module (see the list of built-in processors), complicated migrations become much simpler to reason about as complexity grows.

Continuity

The single biggest differentiating factor between using Migrate for a legacy site migration and using Migrate for content synchronization is that you’ll run your migrations continuously on a regular interval. Usually, something like every 30 minutes or every hour. In order to run your migration continuously, it’s important for your migration to know a few things:

  • What has already been migrated
  • What, if anything, has been updated since the last run
  • What is new

When the Migrate module can answer these questions, it can optimize the migration so it only imports or updates what needs to be changed, i.e., it doesn’t import the same content over and over.

To do this, you need to specify one of two methods for answering these questions. You can either specify track_changes: true under the source in your migration YAML, or you can specify a high_water_property. The former hashes each result row and compares it to a previously computed hash; if they match, Migrate skips that row. The latter examines the property you specify and compares it to the same property from the previous migration; if the incoming high water property is higher, Migrate knows it should import the row. Typically, you might use something like a “changed” or “updated” timestamp on the incoming content as your high water property.

Both methods work fine; sometimes you just might be unable to use one or the other. For example, if there are no available properties on your source data to act as a high water mark, then the track_changes method is your only option. Conversely, you may be unable to use track_changes if there are fields on your source data that change over time (thereby changing the hash of the content) but that should not trigger an update when they change.
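
Both are configured under the source key in your migration YAML. For example (the changed property assumes your source rows actually expose such a timestamp):

# Hash each row and compare it on subsequent runs:
source:
  plugin: eventbrite
  track_changes: true

# Or compare a timestamp from the source data:
source:
  plugin: eventbrite
  high_water_property:
    name: changed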

Cron

The final piece of the puzzle is actually ensuring that your migration runs on a regular basis. To do that, you’ll want to write a little bit of code to run your migration on cron.

I found this tutorial on cron and the Drupal 8 Queue API to be very informative and helpful. I would recommend it if you’d like to learn more about Drupal’s Queue API. Here, we’re just going to go over the minimum required effort to get a migration importing regularly on cron.

First, you’ll need to implement hook_cron in a custom module. Put the following in that function:

/**
 * Implements hook_cron().
 *
 * Schedules a synchronization of my migration.
 */
function mymodule_importer_cron() {
  $queue = \Drupal::queue('mymodule_importer');
 
  // We only ever need to be sure that we get the latest content once. Lining
  // up multiple sync's in a row would be unnecessary and would just be a
  // resource hog. We check the queue depth to prevent that.
  $queue_depth = (integer) $queue->numberOfItems();
  if ($queue_depth === 0) {
    $queue->createItem(TRUE);
  }
}

In the above, we’re loading a queue and adding an item to that queue. Below, we’ll implement a QueueWorker that will run your migration when there is an item in the queue. It’s possible that the migration might take longer than the amount of time you have between cron runs. In that case, items would start piling up and you would never empty the queue. Here, we just make sure we have one item in the queue. There’s no reason to let them pile up.

Next, in a file named like src/Plugin/QueueWorker/MyModuleImporterQueueWorker.php, we’ll write a class that extends QueueWorkerBase:

namespace Drupal\mymodule_importer\Plugin\QueueWorker;
 
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\Component\Plugin\PluginManagerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\migrate\MigrateExecutable;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\MigrateMessage;
 
/**
 * @QueueWorker(
 *   id = "mymodule_importer",
 *   title = @Translation("My Module Cron Importer"),
 *   cron = {
 *     "time" = 30,
 *   },
 * )
 */
class MyModuleImporterQueueWorker extends QueueWorkerBase implements ContainerFactoryPluginInterface {
   … omitted ...
}

Notice that the id value in the annotation matches the name we put in our implementation of hook_cron. That is important. The “time” value is the maximum time that this run of your worker is allowed to take. If it does not complete in time, it will be killed and the item will remain in the queue and will be executed on the next cron run.

Within the class, we’ll inject the migration plugin manager:

  /**
   * The migration plugin manager.
   *
   * @var \Drupal\Component\Plugin\PluginManagerInterface
   */
  protected $migrationManager;

  public function __construct(PluginManagerInterface $migration_manager) {
    $this->migrationManager = $migration_manager;
  }

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static($container->get('plugin.manager.migration'));
  }

  public function processItem($item) {
    $migration = $this->migrationManager->createInstance('your_migration');
    $message = new MigrateMessage('Content Imported');
    $executable = new MigrateExecutable($migration, $message);
    $executable->import();
  }

I’ve left out a lot of code standards for clarity above (don’t judge). The key thing to notice is that 'your_migration' must match the id of the migration in your migration YAML file. The rest of the processItem() method is just a little limbo to get your migration to a point where you can call import() without an error.

With all this written, your migration will be run every time cron is executed.

Conclusion

It took a lot of research to get this working the first time, but we still saved a lot of time by using as much as we could from the Migrate module to implement third-party content synchronization. Since writing this initial implementation, I’ve been able to simply tweak the Iterator and machine names to implement synchronization with another API on another project. Getting everything set up and working took about a day and will probably take less time in the future.

You can see the work in action at Museum of Contemporary Art Denver – just check out their events page.

I hope you’ll let me know if you give this a try, and what you did and didn’t find helpful!

Aug 30 2017
Aug 30

A recent project involved a large number of nodes, each with a menu item. The menu was hierarchical with three levels. Each node page needed a link to the previous and next item.

To generate previous and next links for the current page, I had first looked at loading the menu and traversing it. However, these objects are not easy to navigate, and even the render array is not in order; one would have to re-sort it by weight. Instead, most of what we need is in the database, and ultimately any classes that load menu items get them from the database anyway.

In the simplest case, the previous and next items are simply the previous and next sibling menu items. However, previous could be the parent of the current menu item. If the current menu item is a parent, the previous item could be the last child of the previous sibling. Similar situations exist for the next item. Finally, one also has to account for there not being either a previous or next item. The image below better illustrates this relationship.

The links are generated in a block defined in code. To do this, we extend Drupal’s BlockBase in a PHP file named after the class.

  class BookNavigation extends BlockBase implements ContainerFactoryPluginInterface {
 

This should go in a custom module’s src/Plugin/Block/ directory.

To get this data and be able to traverse it, we start with the MenuActiveTrail class. Remember to include the necessary use statement:

use Drupal\Core\Menu\MenuActiveTrailInterface;

    $active_trail_ids = $this->menuActiveTrail
      ->getActiveTrailIds('MENU-MACHINE-NAME');

This gives us an array of menu item UUIDs, starting with the current page and continuing up to the top-level menu item.

We need to break this up into current item and any parents.

    $current_menu_uuid = array_shift($active_trail_ids);
    $current_menu_id = $this->getMenuId($current_menu_uuid);

    $parent_menu_uuid = array_shift($active_trail_ids);
    if ($parent_menu_uuid != '') {
      $parent_menu_id = $this->getMenuId($parent_menu_uuid);
    }

    $grandparent_menu_uuid = array_shift($active_trail_ids);

While a menu could have more layers, for this purpose we only ever need to consider two levels “up” from the current item.

Using these menu UUIDs we can load all the child items from the database.

    $this->menuStorage = $this->entityTypeManager
      ->getStorage('menu_link_content');
 
    $siblings = $this->menuStorage->getQuery()
      ->condition('menu_name', 'menu-table-of-contents');
    if ($parent_menu_uuid == '') {
      $siblings->condition('parent', NULL, 'IS');
    }
    else {
      $siblings->condition('parent', $parent_menu_uuid);
    }
    $siblings = $siblings->sort('weight', 'ASC')->sort('title', 'ASC')
      ->execute();

This query gets all sibling menu items. It returns entity ids, not UUIDs. However, the parent is identified as a UUID. An extra query gets the entity id for a given UUID:

  protected function getMenuId($menu_uuid) {
    $parts = explode(':', $menu_uuid);
    $entity_id = $this->menuStorage->getQuery()
      ->condition('uuid', $parts[1])
      ->execute();
    return array_shift($entity_id);
  }

The query also has entity ids as the array indexes. The following will simplify things:

  $siblings_ordered = array_values($siblings);

We’ll similarly need all parent menu items, where the grandparent is used in the query.
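
A sketch of that parents query, mirroring the siblings query above (variable names are illustrative):

    $parents = $this->menuStorage->getQuery()
      ->condition('menu_name', 'menu-table-of-contents');
    if ($grandparent_menu_uuid == '') {
      // The parent is a top-level item.
      $parents->condition('parent', NULL, 'IS');
    }
    else {
      $parents->condition('parent', $grandparent_menu_uuid);
    }
    $parents_ordered = array_values(
      $parents->sort('weight', 'ASC')->sort('title', 'ASC')->execute()
    );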

Then to find the previous and next items:

    $sibling_index = array_search($current_menu_id, $siblings_ordered);
    if ($sibling_index !== FALSE) {
      $prev_index = $sibling_index - 1;
      $next_index = $sibling_index + 1;
    }

This is for that simplest case. It gets slightly more complicated when the previous or next item could be a parent or the sibling of the previous or next parent.

    if ($has_children && $prev_index > -1) {
      $prev_sibling_entity = $this->menuStorage
        ->load($siblings_ordered[$prev_index]);
      // If that sibling has children, use its last child instead.
    }

Once you’ve determined the previous and next URL, populate a renderable array.

    if ($prev_url) {
      $prev_url->setOption('attributes', [
        'class' => [
          'pager__link',
          'pager__link--prev',
        ],
      ]);
      $items['prev'] = Link::fromTextAndUrl($prev_title, $prev_url)->toRenderable();
    }
    else {
      $items['prev']['#markup'] = $prev_title;
    }
 
    // Generate next content.
    if ($next_url) {
      $next_url->setOption('attributes', [
        'class' => [
          'pager__link',
          'pager__link--next',
        ],
      ]);
      $items['next'] = Link::fromTextAndUrl($next_title, $next_url)->toRenderable();
    }
    else {
      $items['next']['#markup'] = $next_title;
    }
    $build['nav_links'] = $items;

Finally, to make sure the block is cached properly and cleared when needed, a cache context of 'url' is needed. This ensures the block is cached separately for each page, or url. A cache tag that corresponds to the menu name will ensure these items are cleared from cache whenever the menu is updated. That tag would take the format of 'config:system.menu.MENU-MACHINE-NAME'.

    $build['#cache'] = ['max-age' => -1];
    $build['#cache']['contexts'][] = 'url';
    $build['#cache']['tags'][] = 'config:system.menu.menu-table-of-contents';

While this is a small amount of code, it handles menu systems of varying complexity, and the code is only run once per url after the menu is saved or all cache is cleared.

May 11 2017
May 11

I recently migrated content from a Drupal 7 site to a new Drupal 8 install using core’s Migrate, Migrate Drupal and Migrate Drupal UI modules. A few months after the initial migration, I decided to enable core Content Moderation for use with one of my migrated content types. No bueno.

Just saving any content for which I had enabled Content Moderation resulted in this unhelpful error:

The website encountered an unexpected error. Please try again later.

Clicking over to Recent log messages at /admin/reports/dblog revealed the following exception:

Invalid translation language (und) specified.

It was being thrown from two different places when I attempted to save a node.

The details for each in order were:

Drupal\Core\Entity\EntityStorageException: Invalid translation language (und) specified. in Drupal\Core\Entity\Sql\SqlContentEntityStorage->save() (line 770 of /Applications/MAMP/htdocs/project/core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorage.php).
InvalidArgumentException: Invalid translation language (und) specified. in Drupal\Core\Entity\ContentEntityBase->addTranslation() (line 823 of /Applications/MAMP/htdocs/project/core/lib/Drupal/Core/Entity/ContentEntityBase.php).

Several quick searches didn’t turn up anything exactly applicable, but I did find a pretty useful summary of a similar issue that put me on the right track.

Basically, my content migrated from a Drupal 7 site was set with a legacy langcode of und. If we just load a node and output its language attribute using the Devel module’s dpm() function, we can see what’s amiss:

use Drupal\node\Entity\Node;
 
$node = Node::load(<My NID>);
dpm($node->language()->getId()); // returns 'und', which was causing a problem

It’s the language specification of und that was preventing me from saving the content with Content Moderation enabled. Changing the language over to the site default, EN in my case, just took a minute with the following code.

use Drupal\node\Entity\Node;
 
$node = Node::load(<My NID>);
$langcode = \Drupal::languageManager()->getDefaultLanguage()->getId();
$node->set('langcode', $langcode);
$node->save();
dpm($node->language()->getId()); // now this returns 'en', which is good!

Now you can see that the language attribute is reflecting an appropriate value, which in my case meant I was able to save the node like normal with Content Moderation enabled. Fixed!

All that remained was doing the same operation for every node in the database. For my project there were only a couple of thousand nodes, so I just used the code below (I ran this at devel/php but you could do it in a module or wherever is easiest).

use Drupal\node\Entity\Node;
$langcode = \Drupal::languageManager()->getDefaultLanguage()->getId();
 
$result = db_query("SELECT nid FROM {node} WHERE langcode = 'und' LIMIT 100");
foreach ($result as $record) {
  $node = Node::load($record->nid);
  $node->set('langcode', $langcode);
  $node->save();
}

I limited my query to 100 nodes to avoid a timeout, then just executed the code several times until all my nodes were updated. If you have tons of nodes that need updating, you might consider writing a hook_cron or using batch operations.

Warning: The site I was working on wasn't multilingual. The code above sets every node with a language specification of und to the site default, which may not be what you want for a multilingual site.

That’s it! It seems simpler in retrospect but took me a while to understand what was going on. I also created an issue on Drupal.org so hopefully this bug will get addressed eventually.

May 09 2017
May 09

This is the second part of a series of blog posts about automated testing for Drupal. Its mission is to take you from zero testing experience to confidence in testing your custom Drupal work, from the ground up. Last time, in Testing for the Brave and True: Part Zero, we defined exactly what automated testing is and discussed some of the common vocabulary of testing. That post also introduced the two primary tools used by the Drupal community to test their work: PHPUnit and Behat.

Why Automated Testing Will Save You Time and Treasure

Now that we know a little about what automated testing is, I'm going to make the case that it can be a net positive to your everyday workflow and actually make you a better programmer.

Everybody Tests

If you came to this blog post with the idea that you’ve never done any testing, you’d be wrong. Every developer tests their code. Some developers just throw that work away.

Consider what you're doing every time you clear cache and go refresh your browser. You're testing your work. You've made some change to your code and now you're asserting that your work functions as you expect. Perhaps you put a dpm() or kint() in your new code to inspect some part of your code or a variable, or maybe you're using XDebug (if not, I'd encourage you to start) to step through your code. This process is testing.

While these informal tests can be incredibly valuable, you can't commit them; you can't run them the next day and you cannot run all the tests you've ever written with just one command. Writing automated tests is simply writing code that can do some of that testing for you. It's making those informal tests you already do, explicit and formalized.

Context, context, context

Whenever you write code to do specific things, you make assumptions. Assumptions are the foundation of abstraction and abstraction is the foundation of progress. You can't write code that does anything useful without making assumptions. Especially in Drupal. Entities themselves are an abstraction writ large. But, wrong or hidden assumptions are also the root of most bugs.

Therefore, when we write code, we ought to be very aware of the assumptions we make. We ought to record those assumptions in some way, for future maintainers or simply to help us remember that we made them in the first place. Unfortunately, when we only do informal testing, we bake our wrong assumptions into our code without leaving a record of them. We can't re-assert our assumptions later without digging through code or comments or taking the time to figure out what something was actually supposed to do.

This is the first place where formal tests can be a boon to you, future you, and your successors. The act of writing formal, automated tests by its very nature is recording your assumptions for posterity. When you return to your code an hour, day, week, or year later, all the assumptions you made can be tested again. If you have a strange if/else in your code because of some edge case you discovered when you were doing your initial development, a test can ensure that that code isn't deleted when you're cleaning up or refactoring later (at least without explicitly deleting the test).

In short, you make your assumptions explicit. This reduces the cognitive burden of "getting back up to speed" whenever you need to come back to some piece of code.

Confidence

This is where I first really fell in love with testing. Having formal tests for the code I was working with gave me confidence as I made changes. That can sound strange to someone who's never tested before. It sounded strange to me, too.

The confidence I'm talking about is not the confidence I have in my abilities (Lord knows I could learn a little more about being humble), it's my confidence in the codebase itself and the relative safety I have when I incorporate a change.

If you've ever been in an old, large, legacy codebase, you might recognize that feeling of mild anxiety when you've made a change and there's just no feasible way to know if you've broken something else in some obscure place of "the beast". The only thing you can do is click around and cross your fingers. This is where a well-tested codebase can create real confidence. Having a suite of automated tests means I can isolate my changes and then run all the tests ever written for that codebase and ensure that my changes haven't broken something, somewhere.

Better tests, better code

If you've been interested in the art of programming itself (and I think you must be to be reading this), then you might have heard of the SOLID design principles. Or, at least, things like "write small functions" and "do one thing and one thing well." Maybe you've heard about "dependency injection," "separation of concerns," or "encapsulation." All these words are names for the concepts that, when applied to the way we write code, make the likelihood of our code being robust, flexible, extensible, and maintainable (all good things, right?) go up.

The art and practice of testing itself can help you apply all of these concepts to your code. If you recall the term "unit testing" from the last post in this series, I said, "[unit] tests isolate very small bits of functionality." The process of identifying the one small thing that your code achieves in order to test it, helps you apply the Single Responsibility Principle. Put another way, when your tests become large and unwieldy, they're saying to you, "this code does too much and it should be refactored."

When you're testing code that has dependencies on other code or configuration, like access to the database, another service, or some credentials, it can become difficult to write useful tests. For example, if you're writing code that runs an entity query and you'd like to test how the code works when there are no results, five results or 500 results, you would have a hard time doing so with a real entity query and database connection. This is where "inversion of dependencies" or "dependency injection" come into play. Instead of running an entity query and doing processing on the results all in one function or within a single class, pass the entity query or its results into the function, method or class. This allows you to test the function with fake results, which you can then set up in your test (we'll go over the methods for doing exactly that in a later part of this series).

That inability to test code with implicit dependencies is a good thing™—it forces you to do dependency injection, whereas it's simply a ritual that you have to practice without tests (I should note, the reason inversion of dependencies is a good thing™ is because it makes your code modular and helps ensure it only "does one thing well").

What's next?

I hope I've made a convincing case that writing automated tests for Drupal can save you time and treasure. In the next part of this series, we're going to begin our descent into the art of testing itself. We'll go over writing our first unit test and getting it running on the command line. Until then, feel free to comment or tweet @gabesullice if you've got questions!

May 04 2017
May 04

In Drupal 8, setting your site’s domain in settings.php is no longer possible. In Drupal 7, you could set the base_url in settings.php like:

$base_url = 'http://domain.com';

Have you noticed in Drupal 8 that when you use drush uli it returns a URL that starts with http://default? If you are tired of copying and pasting what comes after http://default/, or of adding the --uri=http://domain.com flag along with drush uli, I have a solution for you!

Meet the drushrc.php file. I prefer to put this one level higher than my Drupal root. So…

  • Project repo
    • webroot (public_html, web, docroot, etc)
    • drush/drushrc.php

Lots can go in the drushrc.php file, but if you simply want to fix the drush uli default issue, it can just have:

<?php
 
$options['uri'] = 'http://domain.com';

If you are using GIT to manage your code base, you could consider a strategy of a drushrc.php file per environment. Example:

Create drush/drushrc.local.php

That file can contain:

<?php
 
$options['uri'] = 'http://domain.dev';

Your main drushrc.php now looks like:

<?php
 
/**
 * If there is a local drushrc file, then include it.
 */
 
$local_drushrc = __DIR__ . "/drushrc.local.php";
if (file_exists($local_drushrc)) {
  include $local_drushrc;
}

Now you can place drush/drushrc.local.php in your .gitignore file.

If you are using a PaaS like Pantheon, you can take this strategy:

Since Pantheon automatically handles setting the $options['uri'] for you, you can simply say: if NOT Pantheon, use my local dev domain.

With the Pantheon approach, your drushrc.php file can look like:

<?php
 
if (!isset($_SERVER['PANTHEON_ENVIRONMENT'])) {
  $options['uri'] = 'http://domain.dev';
}

I believe setting the $options['uri'] has always been possible if using drush aliases, so continue on if you’ve always done that.

Now enjoy the infinite bliss when typing drush uli and having the correct domain returned.

Apr 27 2017
Apr 27

Working with external APIs in Drupal has always been possible. Using PHP’s curl function or drupal_http_request is simple enough, but in Drupal 8 you can build in a lot more flexibility using Drupal::httpClient. Drupal::httpClient leverages Guzzle, a PHP HTTP client. If your project requires more than a single interaction with an external API a few steps can be taken to make those interactions much more reusable.

Make a Drupal Service for your API Connection

In Drupal 8, you can turn reusable functionality into a Service that you can easily reuse in your other custom code. Services can be called in module hooks and added to things like controllers using Dependency Injection. Defining a service in Drupal 8 requires a services.yml file in your custom module. For example, let’s create a module called my_api.

The code from this example is based on a couple of real-world API clients we’ve created recently:

  1. Watson API - IBM’s Watson API allows interacting with powerful utilities like speech to text. Rather than using Drupal’s httpClient in this case, we found a PHP class that already did the heavy lifting for us and we created a Drupal Service to extend that PHP class. In this case, we only needed to use Watson’s text-to-speech functionality to save text to audio files in Drupal, but ended up creating a flexible solution to interact with more than just the text-to-speech endpoint. You can check that code out on Drupal.org or GitHub.
  2. PCO API - Planning Center Online (PCO) is a CRM that is popular with faith-based institutions. PCO has a well documented RESTful API. In this case, I was building an application that needed to access the people stored in Planning Center in a Drupal 8 application. You can check that code out on Drupal.org or GitHub.

my_api.services.yml
services:
  my_api.client:
    class: '\Drupal\my_api\Client\MyClient'
    arguments: ['@http_client', '@key.repository', '@config.factory']

The class key references the PHP class for your service; arguments lists the other services injected into that class.

Now MyClient can be referenced throughout our code base. In a Drupal hook (example hook_cron) it might look something like this:

$client = \Drupal::service('my_api.client');
$client->request();

In a Controller (example MyController.php) it might look like this:

<?php
 
namespace Drupal\my_custom_module\Controller;
 
use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\my_api\MyClient;
 
/**
 * Class MyController.
 *
 * @package Drupal\my_custom_module\Controller
 */
class MyController extends ControllerBase {
 
  /**
   * Drupal\my_api\MyClient definition.
   *
   * @var \Drupal\my_api\MyClient
   */
  protected $myClient;
 
  /**
   * {@inheritdoc}
   */
  public function __construct(MyClient $my_client) {
    $this->myClient = $my_client;
  }
 
  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('my_api.client')
    );
  }
 
  /**
   * Content.
   *
   * @return array
   *   Return array.
   */
  public function content() {
    $this->myClient->request();
    return [];
  }
}

Now we can use a common method on my_api.client like request() to make calls to our external API. The client then serves as a nice wrapper for the httpClient service.

public function request($method, $endpoint, $query, $body) {
  $response = $this->httpClient->{$method}(
    $this->base_uri . $endpoint,
    $this->buildOptions($query, $body)
  );
  return $response;
}

The request method accepts:

  • A method (GET, POST, PATCH, DELETE, etc.)
  • An endpoint (the API path being used)
  • Query (querystring parameters)
  • Body

You can adjust your method parameters to best fit the API you are working with. In this example, the parameters defined covered the needed functionality.

$request = $this->myClient->request('post', 'people/v2/people', [], $body);
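
The buildOptions() helper referenced above isn’t shown in this post. A minimal sketch might look like the following, assuming the constructor has already pulled a token and secret out of the injected config and key services:

protected function buildOptions($query, $body) {
  $options = [];
  // Guzzle's basic auth option; swap for whatever your API expects.
  $options['auth'] = [$this->token, $this->secret];
  if (!empty($query)) {
    $options['query'] = $query;
  }
  if (!empty($body)) {
    $options['body'] = json_encode($body);
  }
  return $options;
}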

Using httpClient directly would look something like:

$response = $this->httpClient->post(
  'http://someapi.com/people/v2/people',
  [
    'auth' => ['token', 'secret'],
    'body' => json_encode($body),
  ]
);

Using httpClient directly instead of through a service could litter hard-coded values throughout your code base and insecurely expose authentication credentials.

Next time your project requires integrating with a 3rd party API, consider turning it into a service to set yourself up well for the future.

Apr 25 2017
Apr 25

TL;DR: Just here for Browsersync setup? Skip to the steps.

I’m always looking for ways to reduce the time between saving a file and seeing changes on the screen.

When I first started working with Drupal, back in the day, I would ftp changes to a server and refresh the page. Once I found Transmit, I could mount a remote directory locally and automatically save files to the server. At the time, my mind was blown by the efficiency of it. In hindsight, it seems so archaic.

Then I started working with development teams. Servers were typically managed with version control. If I wanted to make changes to the server, I had to commit code and push it up. It was time to act like a grown up and develop with a local server.

The benefits of developing locally are too numerous to list here, but the most obvious is not having to push changes to a remote server to see the effects. I could just save a file, switch to the browser and refresh! What was taking minutes was reduced to seconds.

My progression of development practices went something like this.

“Oy, hitting cmd+r is so tedious. What’s this? LiveReload? OH MY GOD! It refreshes the page immediately upon saving! It even injects my CSS without refreshing the page! I’m so super fast right now! Whoa! Less! I can nest all my CSS so it matches the DOM exactly! Well that was a terrible idea. Not Less’s fault but ‘Hello Sass!’. Because Compass! And Vertical Rhythm! And Breakpoint! And Susy! Holy crap, it’s taking 8-12 seconds to compile CSS? Node sass you say?. That’s so much faster! Back down to 3 seconds. Wait a minute, everything I’m using Compass for, PostCSS handles. BOOM! Milliseconds!”

This boom and bust cycle of response times of mine has been going on for years and will continue indefinitely. Being able to promptly see the effects of your changes is incredibly important.

In 1968, Robert Miller published a paper in which he defined 3 thresholds of human-computer response times.

  • .1 sec. To feel like you are directly manipulating the machine
  • .1 - 1 sec. To feel like you are uninhibited by the machine, but long enough to notice a delay
  • > 10 sec. At this point, the user has lost focus and is likely to move onto another task while waiting for the computer.

In short, less than a second between making a file change and seeing the result is ideal. More than that and you’re likely to lose focus and check your email, Slack or Twitter. If it takes longer than 10 sec. to compile your code and refresh your browser, your productivity is screwed.

One of my favorite tools for getting super fast response times is Browsersync. Browsersync is a Node.js application that watches your files for changes then refreshes your browser window or injects new CSS into the page. It’s similar to LiveReload but much more powerful. The big difference is that Browsersync can be run in multiple browsers and on multiple devices on the same LAN at the same time. Each connected browser will automatically refresh when code changes. It also syncs events across browsers. So if you scroll the page in one browser, all your connected browsers scroll. If you click in one browser, all browsers will load the new page. It’s incredibly useful when testing sites across devices.

For simple sites, Browsersync will boot up its own local http server. For more complex applications requiring their own servers, such as Drupal, Browsersync will work as a proxy server. Setup is pretty simple but I’ve run into a few gotchas when using Browsersync with Drupal.

Setting up Browsersync

Local dev environment

First things first, you’ll need a local dev environment. That’s beyond the scope of this post. There are a few great options out there including DrupalVM, Kalabox and Acquia Dev Desktop.

For this tutorial, we’ll assume your local site is being served at http://local.dev

Install Node

Again, out of scope. If you don’t already have node installed, download it and follow the instructions.

Install Browsersync

Browsersync needs to be installed globally.

npm install browser-sync -g

Change directories to your local site files.

cd wherever/you/keep/your/drupal/docroot

Now you can start Browsersync like so:

browser-sync start --proxy "local.dev" --files "**/*.twig, **/*.css, **/*.js"

The above will start a Browsersync server available at the default http://localhost:3000. It will watch for changes to any Twig templates, CSS or JS files. When a change is encountered, it will refresh any browsers with localhost:3000 open. In the case of CSS, it will inject just the changed CSS file into the page and re-render the page rather than reloading it.

Making Drupal and Browsersync play nicely together

There’s a few tweaks we’ll need to do to our Drupal site to take full advantage of our Browsersync setup.

Disable Caching

First we want to make sure page caching is disabled so when we refresh the page, it’s not serving the old cached version.

Follow the steps outlined under “Enable local development settings” in the Disable Drupal 8 caching documentation.

Now when you change a Twig template file, the changes will be reflected on the page without having to rebuild the cache. Keep in mind, if you add a new file, you’ll still need to rebuild the cache.

Avoid loading CSS with @import statements

This gotcha was much less obvious to me. Browsersync will only recognize and refresh CSS files that are loaded with a link tag like so:

<link rel="stylesheet" href="https://atendesigngroup.com/sites/.../style.css" media="all">

It does not know what to do with files loaded via @import tags like this:

<style>
  @import url('/sites/.../style.css');
</style>

This is all well and good, as Drupal uses link tags to import stylesheets unless you have a whole lot of stylesheets, more than 31 to be exact. You see, IE 6 through 9 had a hard limit of 31 individual stylesheets and would ignore any CSS files beyond that. It’s fairly easy to exceed that maximum when CSS aggregation is turned off, as any module or base theme installed on your site can potentially add stylesheets. Drupal has a nice workaround for this by switching to the aforementioned @import statements if it detects more than 31 stylesheets.

We have two ways around this.

1. Turn preprocessing off on specific files

The first involves turning CSS aggregation (a.k.a. CSS preprocessing) on for your local site and manually disabling it for the files you are actually working with.

In settings.local.php, set the following to TRUE:

$config['system.performance']['css']['preprocess'] = TRUE;

Then in your [theme_name].libraries.yml file, turn preprocessing off for any files you are currently working on, like in the following example.

global:
  version: 1.0.x
  css:
    theme:
      build/libraries/global/global.css: { preprocess: false }

This will exclude this file from the aggregated CSS files. This approach ultimately leads to a faster site as your browser loads considerably fewer CSS files. However, it does require more diligence on your part to manage which files are preprocessed. Keep in mind, if you commit the above libraries.yml file as is, the global.css file will not be aggregated on your production environment as well.

2. Use the Link CSS module

The easier alternative is to use the Link CSS module. When enabled, this module will override Drupal’s @import workaround and load all files via the link tag regardless of how many. The only downside to this approach is the potential performance hit of still loading all the unaggregated CSS files which may not be a big deal for your environment.

Add a reload delay if needed

In some cases, you may want to add a delay between when Browsersync detects a change and when it attempts to reload the page. This is as simple as passing a flag to your browsersync command like so.

browser-sync start --proxy "d8.kbox.site" --files "**/*.twig, **/*.css, **/*.js" --reload-delay 1000

The above will wait 1 second after it detects a file change before reloading the page. The only time I’ve ever needed this is when using Kalabox as a local development environment. Your mileage may vary.

In addition to the reload-delay flag, Browsersync has a number of other command line options you may find useful. It’s also worth noting that, if the command line approach doesn’t suit you, Browsersync has an API that you can tie into your Gulp or Grunt tasks. Here at Aten, we tie Browsersync into gulp tasks so a server can be started up by passing a serve flag to our build task, similar to the following: gulp compile --watch --serve
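
A minimal sketch of that kind of task using the Browsersync API (the task name and proxy address are illustrative):

var gulp = require('gulp');
var browserSync = require('browser-sync').create();

// Proxy the local Drupal site and reload on template, CSS and JS changes.
gulp.task('serve', function () {
  browserSync.init({
    proxy: 'local.dev',
    files: ['**/*.twig', '**/*.css', '**/*.js']
  });
});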

Apr 19 2017
Apr 19

Custom styled form elements are a common thing to see in a design. That’s because default form styles vary visually from browser to browser and OS to OS. It makes sense that we’d want these elements styled consistently. Styling them is pretty straightforward, with the exception of select dropdowns which can be more complex. Recently, I ran into an unexpected problem when working on a site that needed a branded admin experience.

Styling Radio and Checkbox Buttons

There’s a simple method of styling radio buttons and checkboxes I’ve been using for a while. I first saw it from the people at Tuts+, and they provided this Pen demoing the technique. Briefly explained, we visually hide the input for our radios/checkboxes and draw a new one using :before and :after pseudo elements on the label element. CSS’ :checked selector allows us to toggle our styles based on if the input is checked or not. This technique relies on appropriately marked up inputs and labels, for example:

<div class="form-element">
  <input type="checkbox" id="click-me">
  <label for="click-me">Click Me</label>
</div>

Clicking the label (containing the fake checkbox styling) will toggle the state of the real checkbox that’s visually hidden.

Drupal’s Admin Interface

One thing I learned while working with some of Drupal’s admin interfaces is that they only supply the input, and not an accompanying label. This seemed especially true in tabled interfaces, where you’d check off rows of content and perform some action on the selected items. Since we’re hiding an input that doesn’t have a label to attach the visuals to, we just end up with a blank space. There were several options we had for how to address this issue.

1. Drop the Custom Styles

The simplest is to just rely on browser defaults for checkboxes and radios. It’s not a great option, but it is an affordable one for tight budgets.

2. Create the Missing Labels

This ended up being my first approach to fixing this, and became more akin to a game of Whack-a-Mole than I anticipated. After going through various preprocess functions, alters, and render functions, I was still encountering inputs that were missing labels. For some, I was never able to track down where the markup was coming from. Manually finding and fixing every missing label might be a viable solution if your website or application has only a handful of places you need to update. However, this is not the most scalable solution, and if your product grows this can quickly become a financial black hole.

3. Create the Missing Labels… with Javascript

Instead of trying to find every place that creates a checkbox or radio on the server side, we could use Javascript to target every checkbox or radio input that is not followed by a label. From there, we just create the label element and insert it after the selected inputs. This is how that might look using jQuery, though it can also be done with Vanilla JS.
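
A minimal sketch of the idea (it assumes each input has an id for the label to point at):

$('input[type="checkbox"], input[type="radio"]').each(function () {
  var $input = $(this);
  // Only create a label if one doesn't already follow the input.
  if (!$input.next('label').length) {
    $('<label/>', { 'for': $input.attr('id') }).insertAfter($input);
  }
});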

This is great, as it solves the problem for every input in one fell swoop. One downside here is the Javascript dependency: should your Javascript not run for any reason, you’re still left with the original problem of missing labels. Another is page rendering: users might be left with a janky experience as Javascript inserts these elements into the DOM.

4. Drop the Custom Styles… for Older Browsers

In the end, this was the solution that won out. Using CSS Feature Queries and CSS’ appearance property, we’re able to provide styled inputs for most modern browsers and then fall back to default styles in browsers that lack the support we need. This gives us our custom styles, without the scaling problem of #2, and the Javascript dependency of #3. The downside to this solution is that all versions of Internet Explorer and Firefox will use their browser defaults.
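
The gist of the approach is a feature query like the following sketch; browsers that don’t understand it simply keep their default rendering:

@supports (-webkit-appearance: none) {
  input[type="checkbox"],
  input[type="radio"] {
    -webkit-appearance: none;
    /* Draw the custom control here. */
  }
}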

Firefox was a surprise to me, as the documentation says it supports appearance; in practice, what I got was a less appealing version of the browser’s default styles. Also surprising: by checking only for -webkit-appearance support, Edge still gets our custom styles applied. This all sat well with me for a working solution. Every team and project has its own constraints, so your mileage may vary.

Apr 19 2017
Apr 19

Quite a bit has changed for the Migrate module in Drupal 8: the primary module is part of core and some of the tools have been split into their own modules. Recently, we migrated a Wordpress site into Drupal 8 and this article will help guide you in that process. If you’re looking for information about Wordpress to Drupal 7 migrations, check out Joel Steidl’s article on that here.

At the time of writing this post, the migration modules are considered "experimental," so be aware of that as well. Migrate’s location in core also means that individual core modules ship their own migration-related code to help out with your Drupal upgrades. We used the Migrate Wordpress module as a starting point in bringing this content to Drupal.

This module will give you a good basis for migration, but it is missing a few things that you might want to consider:

  • It will create all vocabularies and taxonomies based on what is in Wordpress but you will need to add some code to connect the taxonomies with posts.
  • Also, it will not bring in featured images.
  • WP content might be using the "line break to paragraphs" functionality, which you need to account for either in your text format for posts or in the migration.

Taxonomy

There's code existing to pull in Wordpress's terms and vocabularies, but you will need to do some work to put them into the right fields with your posts. For this, I ended up taking a more efficient route by querying the source database in prepareRow():

<?php
 
// place in Posts.php prepareRow()
 
// get terms for this blog post
$tags = $this->select('wp_term_relationships', 'r')
  ->join('wp_term_taxonomy', 't', 't.term_taxonomy_id=r.term_taxonomy_id')
  ->fields('r')
  ->condition('t.taxonomy', 'tags')
  ->condition('object_id', $row->getSourceProperty('id'))->execute();
$tags = $tags->fetchAll();
$tags = array_map(function($tag) {
  return intval($tag['term_taxonomy_id']);
}, $tags);
$row->setSourceProperty('tags', $tags);
 
// get categories for this blog post
$category = $this->select('wp_term_relationships', 'r')
  ->join('wp_term_taxonomy', 't', 't.term_taxonomy_id=r.term_taxonomy_id')
  ->fields('r')
  ->condition('t.taxonomy', 'category')
  ->condition('object_id', $row->getSourceProperty('id'))->execute();
$category = $category->fetchAll();
$category = array_map(function($tag) {
  return intval($tag['term_taxonomy_id']);
}, $category);
$row->setSourceProperty('categories', $category);

And then I updated the migration template with those new values:

# add to the process section
field_tags: tags
field_category: categories

Featured Images

Wordpress stores featured images as attachment posts and stores the relationship in the postmeta table. To bring these in as image fields, we need to make file entities in Drupal which means configuring a new migration.

First, create a migration template called wp_feature_images.yml. Note that I stole some of this from Drupal's core file module:

id: wp_feature_images
label: Wordpress Feature Images
migration_tags:
  - Wordpress
migration_group: wordpress
source:
  plugin: feature_images
destination:
  plugin: entity:file
process:
  filename: filename
  uri: uri
  status:
    plugin: default_value
    default_value: 1
# migration_dependencies:
#   required:
#     - wp_users

And then create a source plugin:

<?php
/**
 * @file
 * Contains \Drupal\migrate_wordpress\Plugin\migrate\source\FeatureImages.
 */
 
namespace Drupal\migrate_wordpress\Plugin\migrate\source;
 
use Drupal\migrate\Row;
use Drupal\migrate\Plugin\migrate\source\SqlBase;
use Drupal\Core\File\FileSystemInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\migrate\Plugin\MigrationInterface;
use Drupal\Core\State\StateInterface;
 
/**
 * Extract feature images from Wordpress database.
 *
 * @MigrateSource(
 *   id = "feature_images"
 * )
 */
class FeatureImages extends SqlBase {
 
  public function __construct(array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration, StateInterface $state, FileSystemInterface $file_system) {
    parent::__construct($configuration, $plugin_id, $plugin_definition, $migration, $state);
    $this->fileSystem = $file_system;
  }
 
  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration = NULL) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $migration,
      $container->get('state'),
      $container->get('file_system')
    );
  }
 
  /**
   * {@inheritdoc}
   */
  public function query() {
    $query = $this
      ->select('wp_postmeta', 'm')
      ->fields('p', ['ID', 'guid']);
    $query->join('wp_posts', 'p', 'p.ID=m.meta_value');
    $query
      ->condition('m.meta_key', '_thumbnail_id', '=')
      ->condition('p.post_type', 'attachment', '=')
      ->condition('p.guid', '', '<>')
      // this prevents some duplicates to get the count closer to even
      ->groupBy('ID, guid');
    return $query;
  }
 
  /**
   * {@inheritdoc}
   */
  public function fields() {
    $fields = array(
      'ID' => $this->t('The file ID.'),
      'guid' => $this->t('The file path'),
    );
    return $fields;
  }
 
  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $url = $row->getSourceProperty('guid');
    $parsed_url = parse_url($url);
    $filename = basename($parsed_url['path']);
    $row->setSourceProperty('filename', $filename);
    $public_path = 'public://' . $parsed_url['path'];
    $row->setSourceProperty('uri', $public_path);
 
    // download the file if it does not exist
    if (!file_exists($public_path)) {
      $public_dirname = dirname($public_path);
 
      // create directories if necessary
      if (!file_exists($public_dirname)) {
        $this->fileSystem->mkdir($public_dirname, 0775, TRUE);
      }
 
      // try to download it
      $copied = @copy($url, $public_path);
      if (!$copied) {
        return FALSE;
      }
    }
    return parent::prepareRow($row);
  }
 
  /**
   * {@inheritdoc}
   */
  public function bundleMigrationRequired() {
    return FALSE;
  }
 
  /**
   * {@inheritdoc}
   */
  public function getIds() {
    return array(
      'ID' => array(
        'type' => 'integer',
        'alias' => 'p',
      ),
    );
  }
 
}

In Migrate, the template defines the source, processing, and destination for the migration, while the source plugin determines which rows are available for import. The source plugin above fetches the feature images for posts and also tries to download each image into Drupal's files directory.

You can add this as a dependency for the wp_posts migration. A word of warning though: if one migration (Migration A) depends on a different migration (Migration B), all of the content from B must be migrated before A can be run. If there are images that cannot be resolved for some reason (maybe leftover DB references after an image or post is deleted), this might stop the migration because the dependency cannot be resolved.

And finally, you will also need to add "wp_feature_images" to your manifest_wordpress.yml before running the migration.
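
The manifest is just a list of migration IDs; the exact IDs besides wp_feature_images depend on your module version, so treat these as illustrative:

- wp_users
- wp_posts
- wp_feature_images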

Converting content

So far we have updated migration source plugins, but there are also process plugins, which can be used to change row values. As mentioned, the WP content often uses the autop filter to create paragraph/line breaks automatically so we need to change those to HTML for Drupal. (You can also just use this functionality in your text format and skip this step if having this on will not cause issues with other content)

First, create a "src/Plugin/migrate/process" directory if one does not exist in the module and add this processor:

<?php
 
namespace Drupal\migrate_wordpress\Plugin\migrate\process;
 
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;
 
/**
 * Apply the automatic paragraph filter to content
 *
 * @MigrateProcessPlugin(
 *   id = "wp_content"
 * )
 */
class WpContent extends ProcessPluginBase {
 
  /**
   * {@inheritdoc}
   *
   * Applies the automatic paragraph filter to the incoming value.
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    return _filter_autop($value);
  }
 
}

Then, update the "process" section of "wp_posts.yml" to include this processor:

'body/value':
    plugin: wp_content
    source: post_content

All of this should put you on the road to getting Wordpress content migrated into a Drupal 8 site, although you’ll probably have to adjust code to your specific circumstances along the way.

Apr 13 2017
Apr 13

It’s that time of year again when the Drupal community of developers, designers, strategists, project managers and more come together for the biggest Drupal event in the world: DrupalCon North America. This year, from April 24-28, we'll be in Baltimore and here’s where you can find us:

The Aten Booth

Be sure to stop by booth 216 in the exhibit hall, we’d love to chat about the successes and challenges you face in your web projects. We’ll also have our sought-after sketchbooks to add to your collection.

Monday, April 24

Nonprofit Summit

We are thrilled to support the newly added Nonprofit Summit at DrupalCon this year. Chris, Aten’s Director of Digital Strategy, and Joel, Aten’s Director of Engineering, work closely together and are sharing tips to create intuitive and compelling experiences for organizations to successfully connect with their users.

Tuesday, April 25

Mastering Drupal 8’s Libraries API
Matt Jager
2:45 p.m. - 3:15 p.m.
Room: 307 - Acquia

Wednesday, April 26

Powering an Interactive SVG and JavaScript Game Engine with Drupal
Peter Weber
2:45 p.m. - 3:15 p.m.
Room: 307 - Acquia

A Type System for Tomorrow
Gabe Sullice
3:45 p.m. - 4:45 p.m.
Room: 318 - New Target

Thursday, April 27

Testing for the Brave and True
Gabe Sullice
10:45 a.m. - 11:45 a.m.
Room: 315 - Symfony

Mar 31 2017
Mar 31

"If you're not testing, you're doing it wrong." I can't remember how many times I've heard those words. Each time, I'd feel a little pang of guilt, a little bit of shame that every day, I wrote code for myself and clients that wasn't tested. I'd be frustrated with the developers who repeated that mantra. Sure, it was easy to say, but hard to live up to. How do I test? What do I test? Should I test? How would I justify the costs?

As a developer who started his career writing custom code for Drupal applications, it was easy to skip testing. Drupal 7 was really quite untestable, at least in any practical sense. Drupal core itself was tested, and big modules like Views, Ctools, and Entity API had test suites too. But finding tests in other custom code or smaller modules was a rarity. Even today, with a vastly more testable code base, documentation for testing Drupal code is terse and hard to come by. It's focused on educating core contributors and module maintainers, not day-to-day developers writing modules to satisfy particular real-world needs.

I hope this series will change that for you.

I will make the case that you should be testing: that you'll save time, that you'll save money, and that you won't need to "justify the cost." This series will start from first principles:

  • What is automated testing?
  • Why is automated testing a net positive?
  • How do I write my first test?
  • How do I write the one after that? (because that's really where it gets hard, isn't it?)

Part Zero

What is automated testing?

I define automated testing as the act of asserting that your code achieves its goals without human interaction.

Types of Automated Tests

There are many types of automated tests. Each serves a specific need, and each operates at a different level of abstraction. Together they form something like a pyramid: your most numerous tests should be at the bottom, and as you go higher up the stack, you should have fewer and fewer tests.

[Figure: Functional Test Pyramid]

At the lowest level are unit tests, and that's what this series will focus on. Unit tests are code that you write to "exercise" your actual custom code. They isolate very small bits of functionality and assert that your code can handle all kinds of inputs correctly. You might write a unit test to assert that an add(x, y) function adds numbers correctly, as in the sketch below.
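Here is a minimal sketch of such a test using PHPUnit (the tool Drupal 8 adopted, covered below). The add() function and the class name are hypothetical:

<?php

use PHPUnit\Framework\TestCase;

// A hypothetical function under test.
function add($x, $y) {
  return $x + $y;
}

// Asserts that add() handles a few representative inputs.
class AddTest extends TestCase {

  public function testAdd() {
    $this->assertSame(5, add(2, 3));
    $this->assertSame(0, add(-2, 2));
  }

}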

[Figure: Functional Test Pyramid, unit test layer]

Above unit tests are integration tests. Integration tests assert that the small bits of functionality you tested with unit tests "integrate" together. Perhaps you wrote a suite of arithmetic functions which you unit tested individually. When those functions come together into a calculator, you might write integration tests to validate that they all work in harmony, as sketched below.
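Continuing the hypothetical calculator example, an integration test might assert that several individually tested functions compose correctly:

<?php

use PHPUnit\Framework\TestCase;

// Hypothetical arithmetic functions, each covered by its own unit tests.
function add($x, $y) {
  return $x + $y;
}

function multiply($x, $y) {
  return $x * $y;
}

// Asserts that the functions work in harmony when combined.
class CalculatorTest extends TestCase {

  public function testCompoundExpression() {
    // (2 + 3) * 4 should equal 20.
    $this->assertSame(20, multiply(add(2, 3), 4));
  }

}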

[Figure: Functional Test Pyramid, integration test layer]

At the highest level are system tests. System tests assert that your application works as a cohesive whole. These "acceptance" tests are usually best expressed as the functionality your client cares about. "Can my potential customer use a calculator to estimate a mortgage payment and then call me for a quote?"

[Figure: Functional Test Pyramid, system test layer]

There are no definite lines of separation between these types of tests; they all fall along a continuum. It's a curve, not a step function.

It's not important to know exactly where your test falls on that curve. What's important to know is that:

  • You can test at different levels of abstraction.
  • You do not need to test everything at every level of abstraction.

Different Tools for Different Tests

Just as there are different types of tests, there are different tools that go along with them. As with all things in software development, there are lots of tooling choices and tradeoffs no matter what you choose. The beauty of using Drupal (or any framework), however, is that some of those choices have already been made for you, either officially or by convention in the community.

At the lowest level is unit testing. The standard adopted by Drupal 8 is PHPUnit, a suite of command line tools and base classes that make writing tests easier. Drupal extends several of the PHPUnit classes with extra features that make it easier to test code written specifically for Drupal. The Drupal class used for unit testing is called UnitTestCase. We're going to take a deep dive into this, and all the Drupal testing classes and tools, later in the series.
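As a quick preview, a Drupal 8 unit test is just a class in your module's tests/src/Unit directory. Here's a bare-bones sketch; my_module is a hypothetical module name:

<?php

namespace Drupal\Tests\my_module\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * A bare-bones unit test skeleton.
 *
 * @group my_module
 */
class ExampleUnitTest extends UnitTestCase {

  public function testSomething() {
    // Assertions against your module's classes go here.
    $this->assertTrue(TRUE);
  }

}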

At the integration test level, Drupal uses a mix of PHPUnit and Simpletest, but it is migrating all of its Simpletest-based code to extensions of PHPUnit that can achieve the same things. In Drupal, the class primarily used for this kind of testing is called KernelTestBase.
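A kernel test boots a minimal Drupal environment containing only the modules you list, which is what makes it suitable for integration testing. A sketch, again with a hypothetical module name:

<?php

namespace Drupal\Tests\my_module\Kernel;

use Drupal\KernelTests\KernelTestBase;

/**
 * A bare-bones kernel (integration) test skeleton.
 *
 * @group my_module
 */
class ExampleKernelTest extends KernelTestBase {

  // Only these modules are installed for the test.
  public static $modules = ['system', 'user'];

  public function testContainerServices() {
    // Real Drupal services are available here.
    $this->assertNotNull($this->container->get('entity_type.manager'));
  }

}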

At the system test level, the lines begin to blur. Drupal calls these “Functional” tests, and there are two classes for them: WebTestBase and BrowserTestBase. Both can do quite a bit and are the standard for testing Drupal core and contributed modules. They work well there because core and contributed code don’t need to test the specifics of a real-world Drupal application and all the configuration and customization that implies.
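For illustration, a functional test built on BrowserTestBase might look like this sketch (module name hypothetical). It installs a module, requests a page with the test browser, and checks the response:

<?php

namespace Drupal\Tests\my_module\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * A bare-bones functional test skeleton.
 *
 * @group my_module
 */
class ExampleFunctionalTest extends BrowserTestBase {

  public static $modules = ['node'];

  public function testFrontPageLoads() {
    // Request the front page and verify a successful response.
    $this->drupalGet('<front>');
    $this->assertSession()->statusCodeEquals(200);
  }

}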

The Drupal community has largely settled on Behat as the standard for testing real-world Drupal applications. Behat works by starting a "headless" browser that can emulate a real user, doing things like clicking links and filling out forms. These kinds of tests let you test the configuration of your Drupal site holistically (your theme, JavaScript, and custom code), which ensures that everything works well together.

I hope this post has given you a sense of what automated testing is and some basic terminology that we can share in the next part of this series: “Why Automated Testing will Save You Time and Treasure.”

Jan 30 2017
Jan 30

Get a crash course in the basics of building a website using Drupal.

In this 3-hour training, we'll dive into the world of Drupal and learn about content types, views, blocks & themes as we build a site together.

This webinar is ideal for those with experience working with content management systems like Drupal, WordPress, Joomla, or Craft.

Brought to you in partnership with

General Assembly

Reserve your spot today

February 8, 2017, 6pm - 9pm MT
Register Now
