May 27 2020

Drupal 7 to 9 Upgrade

Drupal 7, our much-loved CMS that was released in 2011, is nearing the end of its life. No, that's not hyperbole; Drupal 7 is scheduled to reach end-of-life in November 2021. Drupal 8 has been out for a few years, but at the time of this writing, Drupal core usage statistics indicate that only about 350,000 of the more than 1.1 million reporting Drupal core sites are using Drupal 8.x. Over 730,000 of those sites are still using Drupal 7. If your site is one of those 730,000 still on Drupal 7, should you upgrade to Drupal 9? 

Drupal 7 is coming to an end

Whether or not you choose to upgrade to Drupal 9, it's time to acknowledge one very important truth: Drupal 7 is coming to an end. After a decade in service, Drupal 7 will stop receiving official community support in November 2021, and the Drupal Association will stop supporting Drupal 7 on Drupal.org. Automated testing for Drupal 7 will stop being supported via Drupal.org, and Drupal 7 will no longer receive official security support.

Beyond the loss of support for Drupal 7 core, there is less focus on the Drupal 7 version of many contributed modules. Some of them are quite stable and may work well into the future, but others are more neglected. The reality is that once module maintainers have moved their own sites to Drupal 8 or Drupal 9, they may lose interest in spending the time it takes to keep a Drupal 7 version of their code up to date.

Upgrading from Drupal 7 is harder than from Drupal 8

Drupal 8 was released in November 2015. When the Drupal Association announced Drupal 9, they discussed a big change coming to the Drupal ecosystem: Major Drupal version changes would no longer be a substantial replatforming effort, but would instead be a continuation of an iterative development process. In practice, this means that Drupal 9 is built in Drupal 8, using deprecations and optional updated dependencies. The result is that upgrading from Drupal 8 to Drupal 9 is just an iterative change from the final Drupal 8 version. Drupal 9.0 involves the removal of some deprecated code, but introduces no new features; it's a continuation of the fully-tested, stable codebase that is Drupal 8. Basically, Drupal 9.0 is just another release of Drupal 8. 

On the other hand, Drupal 7 has significant differences from Drupal 8 and 9. The jump from Drupal 7 to Drupal 9 can be an enormous undertaking. Third-party libraries replaced huge swaths of custom Drupal code. The procedural code was reworked into object-oriented code. The code changes were massive. Upgrading a Drupal 7 site to Drupal 9 will bring it into the new upgrade paradigm, but there's quite a bit of work to do to get there.  So the question of whether, and how, to make the jump to Drupal 9 is more complicated.

That leaves Drupal 7 sites with a handful of options:

  • Upgrade to Drupal 8 and Drupal 9.
  • Stay on Drupal 7 and contract with a Drupal 7 Extended Support (ES) vendor.
  • Migrate to a different platform entirely.

We’ll focus on the first option in this article, and the others later.

Benefits of Drupal 8 and 9

While Drupal 8 is a big change from Drupal 7, it features many developmental and editorial improvements that pay dividends for users who are willing to take the time to learn how to use them.

Lots of contributed module functionality now in core

One of the biggest upsides of Drupal 8 and Drupal 9 versus Drupal 7 is the fact that many of the things that require contributed modules in Drupal 7 are just baked into core now. This includes things like:

  • Layout Builder provides the kind of customized page layouts that Panels or Display Suite provide in Drupal 7.
  • Blocks have been re-imagined to be fieldable and re-usable, capabilities that require contributed modules like Bean in Drupal 7.
  • You don’t need to install a contributed module and third-party libraries to get a WYSIWYG editor; it’s built into core.
  • Views is in core, and most of the custom lists in core are now fully customizable views.
  • Media handling is not an add-on. It’s an integral feature. To get similar functionality in Drupal 7, you need a half dozen or more complicated contributed Media framework modules, each of which might require quite a bit of additional configuration. You can get a pretty decent media handling experience in Drupal 9 by doing nothing more than enabling the Media and Media Library modules and using the default configuration.
  • Web services are built in, like JSON:API.
  • Customized editorial workflows are now available in core, providing functionality that would have required contributed modules like Workbench Moderation or Workflow.

That’s just to mention a few features; there are many things in core that would require contributed modules in Drupal 7.
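The built-in web services are usable with plain HTTP requests and no custom code. As a sketch, with core's JSON:API module enabled, every entity type gets a collection endpoint automatically (the domain and the `article` content type here are placeholder assumptions):

```shell
# List article nodes as JSON:API resources from a Drupal 8/9 site.
# Replace the domain and bundle name with your own site's values.
curl -H 'Accept: application/vnd.api+json' \
  'https://example.com/jsonapi/node/article'
```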

Maintaining this functionality is simplified by having more of it in core. Managing fewer contributed modules makes it easier to keep them in sync as you update versions and dependencies, and leaves fewer decisions to make when you hit a conflict or something breaks. As Drupal 7 development falls by the wayside, this is even more important: it could take months - or longer - to get updates to Drupal 7 contributed modules, and after end-of-life they won’t be supported at all.

Having these solutions in core means everyone is using the same solution, instead of splintering developer focus in different directions. And having them in core means they’re well-tested and maintained.

Composer gets us off the island

One of the changes to Drupal since the Drupal 7 release is that Drupal 8 and 9 extensively use third party libraries like Symfony for important functionality, instead of relying on custom Drupal-specific code for everything. That move “off the island” has introduced a need to manage Drupal’s dependencies on those libraries. This is handled with yet another tool, a package called Composer.

Each of these new top-level third-party libraries has dependencies on other libraries, which have dependencies on more libraries, creating a confusing spiderweb of dependencies, requirements, and potential conflicts. Managed by hand, this quickly becomes a maintenance nightmare. Composer is a new tool to learn, but it’s a great dependency manager, and taking the time to learn it gives developers a powerful tool for dealing with dependency management.

Composer can do other things. If you add cweagans/composer-patches, it’s also a very useful tool for managing patches from Drupal.org. You can add a patches section to composer.json with links to the patches you want to watch. Composer will automatically apply the patches, and your composer.json file becomes a self-documenting record of the patches in use.
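A minimal sketch of what that looks like in composer.json (the module name and patch URL below are placeholders, not real issue links):

```json
{
    "require": {
        "cweagans/composer-patches": "^1.6",
        "drupal/example_module": "^1.0"
    },
    "extra": {
        "patches": {
            "drupal/example_module": {
                "Short description of the fix": "https://www.drupal.org/files/issues/example-fix.patch"
            }
        }
    }
}
```

When you run `composer install` or `composer update`, the plugin downloads and applies each listed patch automatically.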

You can read more about Composer in another Lullabot article: Drupal 8 Composer Best Practices.

No more Features for configuration management

In Drupal 7, many sites deploy configuration using the Features module. Depending on who you ask, using Features for configuration management could be regarded as a good thing or a bad thing. Many developers maintain that Drupal 8 (and therefore Drupal 9’s) Configuration Management system, which allows database configuration to be exported to YML files, is much easier than the Drupal 7 Features system. As with Composer, it takes time to learn, but it enables developers who understand the system to accomplish more with less effort. 
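In practice, the day-to-day workflow is a pair of Drush commands (a sketch; assumes Drush 9+ and a configured config sync directory):

```shell
# Export the site's active configuration from the database to YML files
drush config:export

# After deploying those YML files, import them on the target environment
drush config:import
```

The exported YML files can be committed to version control, giving you a reviewable diff of every configuration change.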

Secure PHP support

Drupal 7 sites could be running on deprecated versions of PHP, some as old as 5.3. These sites should already have moved to PHP 7, but many are still on outdated, insecure versions. Drupal 7 currently works with PHP 7.3 but has problems with PHP 7.4. As PHP continues to progress and deprecate older versions, you may find that you can no longer keep your Drupal 7 site running on a secure version of PHP. Drupal 8 runs on PHP 7.0+, and Drupal 9 requires a minimum of PHP 7.3, so both provide a better window of compatibility with secure PHP versions.

Resistance to migrating to Drupal 8 and 9

There are some reasons why sites delay making this move:

Lack of Drupal 8 versions of Drupal 7 contributed modules

Early in Drupal 8’s release cycle, one of the big complaints about Drupal 8 was that many Drupal 7 contributed modules no longer worked in D8. It did take time for some contributed modules to be updated to Drupal 8. However, many Drupal 7 contributed modules were no longer needed in Drupal 8, because the functionality they provided is now a part of Drupal 8 core.

If you haven’t checked the state of Drupal contributed modules in the last few years, take a look at what’s now available for Drupal 8. You can check the Drupal 8 Contrib Porting Tracker to find the status of popular Drupal 7 modules and see whether they’ve gotten a Drupal 8 stable release. You may find that modules that were missing early on are now available, or that you no longer need some contributed modules because that functionality is now managed in another way.

More importantly, you don’t have to worry about lack of parity in Drupal 8 contributed modules when Drupal 9 is released; as long as the Drupal 8 module in question isn’t built on deprecated code, everything that works in 8.x should continue to work in Drupal 9. And if a D8 module is built on deprecated code, the maintainer should be aware of it. All the code that is being removed in Drupal 9 has already been deprecated in Drupal 8.8, so there won’t be any surprises for module or site maintainers.
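One way for a maintainer (or a site owner) to check for deprecated code is static analysis; for example, the community-maintained drupal-check tool scans a module for calls to deprecated APIs. A sketch, assuming installation via Composer:

```shell
# Install the deprecation checker (community tool, not part of core)
composer global require mglaman/drupal-check

# Scan a module's code for usages of deprecated Drupal APIs
drupal-check web/modules/contrib/example_module
```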

Maintenance overhead for small teams

With the introduction of Drupal 8 and Drupal 9, the new paradigm in Drupal development is more frequent, smaller releases. This mirrors a larger trend in software development, where iterative development means frameworks make more frequent releases, and consequently, those releases aren’t supported as long. 

This means you need to commit to keeping your site current with the latest releases. If you’re part of a small team managing a large Drupal site, you may simply not have the bandwidth or expertise to keep up with updates. 

There are some tools to make it easier to keep a site current. The Automatic Updates module might be helpful for small sites. That module is a work in progress, and it does not yet support contributed module updates or Composer-based site installs; those are planned for Phase 2. But this is a project to keep an eye on.

You can manage updates yourself using Composer and Drush. Sites of any size can also use Dependabot, a service that creates automatic pull requests with updates.
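A typical manual update pass, sketched with Composer and Drush (the package name assumes a site built on the drupal/core-recommended project template):

```shell
# Update Drupal core and its dependencies to the latest allowed versions
composer update drupal/core-recommended --with-dependencies

# Apply any pending database updates, then rebuild caches
drush updatedb
drush cache:rebuild
```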

And of course, some web hosts and most Drupal vendors will provide update services for a fee and just take care of this for you.

The new way of doing things is harder

The final complaint that has prevented many Drupal 7 sites from upgrading to Drupal 8 and Drupal 9 is that the new way of doing things is harder. Or, if not harder, different. There’s a lot to unpack here. In some cases, this reflects resistance to learning and using new tools. In other cases, it may be that long-time Drupal developers have a hard time learning new paradigms. It may also be that some developers are simply not interested in learning a new stack and no longer want to develop in new versions of Drupal.

Drupal 6 and 7 have a lot of “Drupalisms,” Drupal-specific, custom ways of doing things, so developers who have been deep into Drupal for a long time may feel the number of things to re-learn is overwhelming. Fortunately, the “new” things, such as Composer, Twig, and PHPUnit are used by other PHP projects, so there is a lot that Drupal 7 developers can learn that will be useful if they work on a Symfony or Laravel project, for example.

Developing for Drupal 8 and Drupal 9 is certainly different compared to Drupal 7 and older versions. Some developers may choose this as a turning point to shift gears into other career paths, developing for a different stack, or making a more substantial change. But with the Drupal 7 end-of-life approaching, developers who don’t want to move to Drupal 8 and Drupal 9 must make some move, just as Drupal 7 sites must move to a modern platform.

Security considerations

In today's world, enterprises have a responsibility to protect their website users' personal data - and they face costly liability considerations if they don't. For many organizations, this means website security is a looming and ongoing concern. It's common for enterprise security policies to require that organizations only use services with ongoing security support. Relative to the Drupal 9 upgrade, this means that many enterprises can't continue to maintain Drupal 7 websites after they stop receiving security support.

But what does “no more security support” actually mean?

When Drupal 7 reaches end-of-life, the Drupal community at large will no longer provide “official” security updates or bug fixes. The Drupal Security Team will no longer provide support or Security Advisories for Drupal 7 sites. Automated or manual processes that you currently use to update your sites may no longer work.

There is a bit of nuance to the lack of security support, however. The Drupal 7 Extended Support (ES) program involves partnering with a Drupal Association-vetted vendor and ensuring that the vendor coordinates responsible disclosure of security issues and fixes while publicly sharing the work toward those fixes.

Practically speaking, this means that even if you’re not partnered with an ES vendor, you can still get security patches for your site. However, websites using modules that aren’t actively supported by ES vendors won’t have the benefit of a partner to hunt down and fix issues with those modules, security or otherwise. If you have modules or other dependencies that age out of security updates, such as the PHP version you’re hosting on reaching end-of-life, you may be left with a website with an increasing number of security holes.

Additionally, after November 2021, Drupal 7 core and Drupal 7 releases on all project pages will be flagged as not supported. As a result, third-party scans may flag sites using Drupal 7 as insecure since they’ll no longer get official security support.

No more bug fixes or active development

Alongside security considerations, a lesser concern of the Drupal 7 end-of-life timeline is an official end to community-at-large bug fixes and active development. Drupal development has already shifted its focus to Drupal 8 over the past several years, leaving Drupal 7 bugs lingering. For example, take a look at the Drupal.org issue queue for Drupal 7 core bugs; you’ll see issues that haven’t been updated for weeks or months, versus hours or days for Drupal 8/9 development issues.

Questions to ask when migrating from Drupal 7

So how do you decide which path is right for your organization? Here are some questions to ask.

What are the skills and size of your development team?

The shift from Drupal 7 to Drupal 8 and Drupal 9 involved a shift from Drupal-specific paradigms to incorporating more general object-oriented programming concepts. If your team consists of long-time Drupal developers who haven't done a lot of object-oriented programming, this paradigm shift involves a learning curve that does have an associated cost. For some budget-conscious organizations, this may mean it's more economical to remain on Drupal 7 while developers work on skilling up for Drupal 8/Drupal 9 paradigms.

Another consideration is the size of your development team. If your team is small, you may need to engage an agency for help or explore the other alternatives mentioned above.

What are the plans for the site?

How much active development is being done on the site? Are you planning to add new features, or is the site in maintenance mode? What is your budget and plan to maintain the site; do you have developers devoted to ongoing maintenance, or is it one small priority among many competing priorities? 

If you're planning to add new features, the best option is to migrate to Drupal 8 and Drupal 9. Drupal 9 is under active development, and these modern systems may already include the new features you want to add. If not, working in an ecosystem that's under active development generally reduces development overhead. 

What is the life expectancy of the site?

How many years do you expect the current iteration of the site to continue? Are you planning to use the site for three more years before a major redesign and upgrade? Eight more years? Sites with a shorter lifespan may be good candidates for Drupal 7 ES, while sites with longer life expectancies would benefit from upgrading to a modern platform with a longer lifespan.

What code is the site using?

Do an inventory of your site's code. What contributed modules are you using? What do you have that's custom? Drupal 8 upgrade evaluation is a good place to start. 

Some Drupal 7 contributed modules have Drupal 8 and Drupal 9 versions available, while others no longer apply in a world with different programming paradigms. Still others may now be part of Drupal 9 core. 

If you're using a lot of custom modules and code, migrating to Drupal 8 and Drupal 9 is a bigger project.  You might be able to mitigate some of that by altering the scope of your new site to take more advantage of the new capabilities of core and the available Drupal 8 contributed modules.

What features do you want?

Make a list of the features that are important to your organization. This should include features your site currently has that you couldn't live without, and features you'd like to have but currently don't. Then, do a feature comparison between Drupal 8 and Drupal 9 and any other options you're considering. This may drive your decision to migrate, or you may decide that you can live without some "must-have" features, depending on what's available.

Where to go from here

Bottom line: with the Drupal 7 end-of-life date coming next year, now is the time to scope your site changes. But where do you go from here? The next few articles in this series explore how and when to upgrade from Drupal 7 to Drupal 9 and alternate solutions if upgrading isn’t a good choice for your organization. Stay tuned!

May 21 2020

Whether you're a developer or a designer, everyone has a role to play in making websites more accessible and usable for site visitors. Our design and development teams got together to create this checklist of high-priority tasks you can execute to make your site more accessible. These are some of the issues that affect users of assistive technologies the most, so they should be first on your list when you're forming a plan to improve website accessibility.

Accessibility Checklist

Color Choice - Choose a well-balanced set of complementary colors that meet color contrast standards.

Color Contrast - When combining colors, verify that they have at least the minimum required color contrast.

Link Style - Add at least two differentiators to links for users with visual disabilities.

Buttons - Remember color contrast requirements, states (i.e., hover, focus, etc.), and readability when designing and developing buttons for your site.

Forms - Set up your forms for success by including accessible labels, fields, instructions, errors, and alerts.

Color Choice

  • Create a visual design for your site using a balance of colors that isn't too distracting, while also not being too timid. This helps organize the site for all users, especially for those with disabilities. The effective use of color combinations in your website can establish a visual hierarchy, organize content, and draw distinctions between things such as foreground and background, or static areas and areas that are interactive. 
  • Use the primary color palette for things like CTA's, icons, and any place where highlighting areas visually is important.
  • Save secondary colors to highlight less critical information, such as secondary CTA's or backgrounds.
  • Finally, apply neutral colors to things like text and backgrounds, or to tone things down when there are large areas of color.

Color Contrast

One of the designer's most important tasks is to check color contrasts. Once you learn how to do this, you can easily integrate this task into your design workflow. When you are checking color contrasts, you should:

Web-based

  • Pull the hex value(s) and go to the WebAIM Contrast Checker tool
  • Enter your hex value(s) into the Foreground Color or Background Color field(s)
  • Using the sliders below the Foreground Color or Background Color, change the color values until the Contrast Ratio is at or above these minimum values:
    • For text that's at or over 18.66px and bold, look for a color contrast of at least 3:1
    • For text under 18.66px, look for a color contrast of at least 4.5:1
  • Pull the new hex value(s) and place them into your page

Desktop-based

  • Download the Colour Contrast Analyser tool from The Paciello Group for Windows/macOS
  • Enter your hex value(s) into the Foreground Color or Background Color field(s)
  • Using the sliders below the Foreground Color or Background Color, change the color values until the Contrast Ratio is at or above these minimums:
    • For text that's at or over 18.66px and bold, look for a color contrast of at least 3:1
    • For text under 18.66px, look for a color contrast of at least 4.5:1
  • Pull the new hex value(s) and place them into your design
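The thresholds in the steps above come from the WCAG contrast-ratio formula, which you can also compute directly rather than through a checker tool. Here is a minimal Python sketch (the function names are ours, not from either tool mentioned above):

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 integers."""
    def linearize(channel):
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """Contrast ratio between two colors; ranges from 1:1 up to 21:1."""
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    lighter, darker = max(la, lb), min(la, lb)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background has the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # → 21.0
```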

Other useful tools

  • Colorblinding: This extension simulates what your website looks like to a visitor with a color vision impairment.
  • ColorZilla: Advanced Eyedropper, Color Picker, Gradient Generator and other colorful goodies

Link Style

Make sure links can be distinguished from regular text. When you use links in body content, they should be visually distinct in two different ways. One of these should be color, but the other differentiator should be independent of color to provide a distinction for colorblind users. When links are placed in the header, footer, or sidebar navigation, this differentiation is not required, although it is recommended. Having links underlined by default and removed on hover/focus is the best option, because most users expect that behavior, and it is also the default behavior of browsers. Other options include highlighting, dots, dashes, an outline, or bolded text.

A focus state is required on all links, so be sure to include it when setting the hover state. A visible focus style, such as a solid border or outline, highlights a link when a user tabs to it, which helps keyboard-only users who navigate without a mouse. 

Make sure horizontal and vertical link groups have enough space to enable users to access them easily. Iconography can also help users, giving them another way to distinguish between links and plain text. Users understand content more quickly when paired with a visual cue, such as an icon. Finally, use descriptive text for links instead of general text like "More" or "Click here." Links should have some context to the content they're related to; however, make sure they are kept short and understandable.

When designing links, think about the following states:

  • Default (unvisited)
  • Visited (already visited)
  • Hover (moused over)
  • Focus (focusable elements via the keyboard tab key, i.e., links, buttons, etc.)
  • Active (clicked on, i.e., tabs or navigation)
  • Disabled (not able to be activated by the user)
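A minimal CSS sketch of the underline-plus-color approach with a visible focus style (the selectors and color values here are illustrative, not from any particular design system):

```css
/* Links are distinguished by color AND an underline, not color alone */
a {
  color: #0a6eb4;
  text-decoration: underline;
}

/* Removing the underline on hover/focus provides interaction feedback */
a:hover,
a:focus {
  text-decoration: none;
}

/* A visible outline shows keyboard users which link currently has focus */
a:focus {
  outline: 2px solid currentColor;
  outline-offset: 2px;
}
```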


Buttons

When we talk about buttons, we're referring to regular form buttons and links that are styled as buttons. When developing for accessibility, form buttons should always be used in forms, and links should be used when you need to redirect the user to another page, site, or an anchor within the page.

Buttons should have a clear, solid border color that meets color contrast against the background color, and a background color that meets color contrast against the text. When you hover over a button, there should be a very noticeable change in the background and text color. Inverting the colors is a good option; alternatively, darken the background color and invert the text color.

When designing buttons, think about button sizing for both desktop and touch screen. Minimum touch targets should be comfortable for an adult finger to navigate successfully. The Web Content Accessibility Guidelines (WCAG) specify a minimum size of 44x44 pixels, or larger for users such as children or people with motor difficulties. 

Create button labels that are easy to read. Choose sentence case or title case over uppercase, and make sure the font is big enough for easy readability. Make labels action-oriented, i.e., "Next step," or "Save recipe." Including iconography within your buttons can help users understand actions more quickly. Include button states in all designs. These states provide users with feedback that an action is about to happen. When designing buttons, think about the following states:

  • Default (what a button looks like without any interaction)
  • Hover (on desktop, when a user's cursor is hovering over a button)
  • Active (a user has clicked the button, and it is selected)
  • Focus (when a user clicks on it or reaches it with the keyboard tab key)
  • Disabled (not active)

Think about the overall button hierarchy within the system regarding primary, secondary, and tertiary buttons. This hierarchy lets users understand what the primary and secondary calls to action are on a page. 


Forms

When designing forms, these tips can make them more readable and usable:

Group related fields 

  • Group logical sections with clear headings, i.e., Personal information, Contact information, Billing information. 
  • Groups of fields (i.e. checkboxes, radio buttons, etc.) should be contained within a <fieldset> that includes a <legend> element. The <legend> element holds the title for the grouped fields, which will be displayed on the page.
  • Include ample white space between sections to provide a visual distinction across sections. 
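For example, a grouped set of radio buttons might be marked up like this (the field names and labels are illustrative):

```html
<fieldset>
  <legend>Contact preference</legend>
  <label><input type="radio" name="contact" value="email"> Email</label>
  <label><input type="radio" name="contact" value="phone"> Phone</label>
</fieldset>
```

Screen readers announce the `<legend>` text along with each option, so users always hear which group a control belongs to.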

Single column forms

  • Forms are easiest to scan when form titles and fields are stacked in one column and aligned. This allows the eye to quickly scan down a single column instead of zig-zagging across multiple columns.

Form labels

  • For text fields, it is best practice to place labels above corresponding form fields in a form. Place labels for checkboxes and radio buttons to the right of each control.
  • Use a bolded font to help form labels stand out. A flag on whether the form field is required should be placed right after the label as well. This can be a red asterisk, red "REQUIRED" text, or something similar. Form labels can also contain brief instructions for the particular field; for example, Date (mm/dd/yyyy)
  • In addition to a label, each form field should have descriptive helper text and placeholder text. Left-aligning and stacking form labels over their respective fields improves scannability. Keep the labels short and sweet.
  • Don't use placeholder text as a label, as this text isn't available to screen readers. Placeholder text disappears when the user interacts with the field. This can be annoying for users who forget what the placeholder text said. For sighted users, placeholder text offers an excellent opportunity to give users brief instructions, or show them how to format their information.

Form fields

  • Form fields should have a clear, solid border color that meets color contrast against the background color and a background color that meets color contrast against the text within the field.
  • The width of the form field should match the content it will contain. For instance, a date field would have a much shorter width than a name field that must accommodate long names.

Form field states

  • When designing your form fields, include the various field states. These field states give the user visual cues about where they are within the form, and where they're going next.
  • Field states to include are default, focus, feedback, and disabled.

Form instructions

  • Provide a brief list of form instructions directly above all input forms. A note about required fields is recommended, as well as required formats (i.e., dates).

Form alerts and errors

  • Use form errors and alerts to concisely explain to users how to fix issues that prevent them from completing the form. Follow color contrast requirements with these alerts and errors.
  • Display alerts as a summary at the top of the form, including brief steps on how the user can fix the issues. You can also include links directly to the fields containing errors within the form. Display errors with each problematic form field to make it easier for a user to find specific error details. This may be inline, above, or under the field. 
  • When a field has an error, change the form field's border to another color. Red is recommended because it's universally understandable as an error in a form. In addition to a color change, another differentiator should be added to form fields when they receive errors. This could include an error icon within the field or to the left of the error message.
  • Try to keep the form lengths short. If fields aren't required, consider whether those form fields are truly necessary on the form. If you don't need them, leave them out altogether. The shorter the form, the fewer opportunities for errors.


As designers and developers, we can help make the world more accessible to our users. Half the battle is knowing what needs you should be designing for, and the other half is applying the design during development with the best practices and requirements we've discussed. Let's start today by taking a closer look at our work and finding opportunities to make it more accessible, more usable, and more inclusive for everyone. 

If you would like to ask us any questions about this article, please join us on Slack in the #accessibility channel. We look forward to seeing you there!

Kat Shaw

Kat is passionate about digital accessibility, and happy to continue her work as an advocate with her new colleagues at Lullabot.

Ana Barcelona

Ana is a Senior UX Designer at Lullabot whose creative vision stems from user needs, a sensitivity towards concept, typography, color, and kinetics. 

May 18 2020


This past Friday, the Olivero project reached a major milestone and tagged our first alpha release. You can download it on the Olivero project page. This release marks a point in time where the code is stable(ish!), looks amazing, and is ready for additional testing!

What is Olivero?

Olivero is a new theme that is slated to make it into Drupal core in version 9.1 as the new default theme (replacing Bartik). It’s named after Rachel Olivero, who was a valued community member and accessibility advocate. 

About this release

This release has been a long time coming (read about the origin here) and has countless hours from dedicated volunteers poured into it with love. 

This release is not perfect; in fact, we’ve stated that “perfect is the enemy of good” for our alpha releases! That being said, we’ve done extensive testing with various devices and browsers (yes — this theme supports Internet Explorer 11), and we have done lots of accessibility work, although more still needs to be done!

You get a feature! You get a feature! Everyone gets a feature!

Well… almost. Besides cross browser and accessibility work, we’ve included some common features in this initial release.

  • Dropdown menu support — this is self-explanatory, but until Drupal 9.1, core themes did not support multi-level menus. 
  • Option for “always-on” mobile navigation — This is intended for the use case where the page has more top-level menu items than can fit within the header.
  • Background color options for site branding — You can change the site branding (logo) background to suit your needs. The current options are white, gray, and blue (which is the default). 

As of this release, we’re not including support for dark mode or for changing the theme's colors (via the Color module or anything else). However, this is on the horizon (likely after the initial core inclusion). 

How you can help

We’re at the point now where we can use some real-world testing. Please find things that break; we know they’re there!

For example, I loaded up Lullabot.com with the new theme and discovered various default styling that can be tightened up (e.g., links within headers).


We’re also at the point where we can use additional accessibility testing. Please fire up your screen readers, decrease (or increase) your contrast, and take a look!

As with all free open-source projects, please take a look at the issue queue first to make sure the issue hasn’t already been created.

Tugboat helps save the day!

We also want to give a huge shoutout to the Tugboat team. If you aren’t familiar with Tugboat, it’s an automated service that generates live, working preview sites for every pull request, branch, and so on.

Through the Tugboat service, we worked to set up a Drupal multisite install that has previews with content, without content, and using the minimal profile (note this is currently broken). 

Check out the Tugboat preview here! Note that the “working” status of this Tugboat preview will be in flux, as we commit code, rebuild the preview, etc.

Next steps

We’re off to the races! The next step is incremental alpha releases (either weekly or biweekly). We’re also going to make a list of features that are necessary to tag our first beta and work toward that. 

We hope to create our first Drupal core patch in a month or so. This will give core committers the opportunity to cast an eye over the codebase so it can be included in version 9.1. We’re aiming to get this committed into core in late July-ish.

Thank you, thank you, thank you!

We have so many people helping out, and we wouldn’t be here without them. Here is the full list of committers (16 as of today), and this doesn’t even count the people who are doing testing, etc.!

Mike Herchel


A senior front-end developer, Mike is also a lead of the Drupal 9 core "Olivero" theme initiative, an organizer for Florida DrupalCamp, a maintainer of the Drupal Quicklink module, and an expert hammocker.

Putra Bonaccorsi


Putra Bonaccorsi is a Senior Front-end Developer with a flair for creative uses of CMS and a dedication to finding unique solutions to tough problems.

May 15 2020

Matt and Mike talk with Putra Bonaccorsi and host Mike Herchel about Drupal 9's new front-end theme, and its past, present, and future. 

May 13 2020

As the global pandemic continues to spread — causing widespread sickness and death, restricting in-person human contact, creating additional responsibilities at home or financial hardships, or any of the countless other changes to daily life that have resulted in feelings such as fear, anger, boredom, or uncertainty — this virus has forced some of us to reassess our values and our place in the world. While the majority of us who participate in the Drupal community remain focused squarely on technical issues, others might find now is an especially good time to take a closer look at Drupal's Values and Principles. For those of us privileged enough to have the time and space to consider more philosophical questions, we can ask if Drupal's stated values (still) align with our values, or even consider the role of Drupal in our lives when the pandemic subsides.

This article — the first in a series of articles exploring Drupal's values and principles — considers Drupal's first principle, "impact gives purpose," which is one aspect of the first value, "prioritize impact." On one level, the first principle is merely practical. It concludes by prioritizing the "stakeholders" we should consider: "When faced with trade-offs, prioritize the needs of the people who create content (our largest user base) before the people who build sites (our second largest user base) before the people who develop Drupal (our smallest user base)." In its simplest form, this principle tells us that Drupal ranks the needs of content creators before the needs of the developers.

However, the first principle offers considerably more depth. While acknowledging the practical nature of the Drupal software, it calls on us to aspire to a higher goal: "When contributing to Drupal, the goal is often to optimize the project for personal needs ('scratching our own itch'), but it has to be bigger than that." Thus, Drupal is presented as much more than simply a good product.

The phrase "scratching our own itch" has become a platitude. It's everywhere. The Harvard Business Review called it "one of the most influential aphorisms in entrepreneurship." The phrase is well known among software developers in part because in his influential 1999 book, The Cathedral and the Bazaar, (the highly controversial) Eric S. Raymond wrote, "Every good work of software starts by scratching a developer's personal itch." In the Drupal community, however, we see ourselves as aspiring to much more. 

As the first principle states, "Slowly, focus shifted from writing the perfect code to growing the project, and amplifying Drupal's impact through better marketing, user experience, and more." 

Countless individuals and Drupal subgroups express their desire to impact people. For instance, the Drupal agency Palantir prioritizes impact that is "positive," "lasting," "thoughtful," and "deliberate." Over at ThinkShout, a Drupal agency that works "with courageous organizations that put people and the planet first," the "impact" they aspire to in their first core value "is driven by our sense of connectedness and desire to deliver meaningful, measurable results." Countless individuals and organizations in the Drupal community feel motivated by a sincere desire to positively "impact" other human beings.

Drupal's first principle is especially ambitious in describing the impact of the Drupal community: "Prioritizing impact means that every community member acts in the best interest of the project." It seems unlikely that "every community member" can or should make the Drupal project their top priority. Though idealized, it is a worthy goal. We must also reiterate that people will necessarily begin with their own needs.

Contributions to the Drupal project should not come at personal expense. Imagine telling a single parent, who recently lost their job and wants to build a career with Drupal, to consistently act "in the best interest of the project." Change should come from individuals who have the capacity to help others. Part of why some of us contribute to Drupal is because we imagine another human being finding value in our work. We do not require those who benefit to give back. In this idealized form, we encourage people to participate, but we give with an open hand and no expectation of reciprocation. We contribute because we believe our actions have meaning. As the first principle states, "We derive meaning from our contributions when our work creates more value for others than it does for us."

When we look inward to examine our value systems, we probably do not want to find a heap of clichés, and phrases like "prioritize impact" and "create value for others" might sound rather cliché to some ears. In fact, on various lists of "business buzzwords," the word "impact" takes the top slot. The noun "impact" comes from the Latin impactus, which means "collision" or "striking one thing against another." The cultural and historical context of "impact" doesn't negate its usefulness, but if the real goal is to "derive meaning," it might be helpful to reconsider this principle in more human terms.

As previously noted, much of Drupal's first principle points toward bigger goals that extend beyond the conference room to a human-centered skill that good people work to cultivate: generosity. We seek to help others, both at home and in our careers. The business-friendly language in the first principle, like "maximize the impact the project can have on others," could, for at least some of us, be read as "practice generosity toward others." We seek to use Drupal for Good or even live in (with) Drutopia.

Thanks to Drupal and its community, some of us possess the fortunate capacity to help others. If that describes you, then consider in what ways you have the opportunity to be generous. Toni Morrison — the iconic writer, activist, and college professor who became the first African-American woman to win the Nobel Prize in Literature — used to tell her students:

"When you get these jobs that you have been so brilliantly trained for, remember that your real job is that if you are free, you need to free somebody else. If you have some power, then your job is to empower somebody else. This is not just a grab-bag candy game."

In this case, Morrison's inspirational words apply not just to students, but to countless people in the Drupal community. Many in our community have freedom and power. We have the opportunity to help others. Help other Drupalers. Help kids. Help the homeless. Help anyone in need. Maybe even help Drupal and give to #DrupalCares. If your actions produce positive results, keep going!

Ultimately, action matters more than language. Whether you feel motivated by the desire to make an impact, or you want to practice generosity, don't let up because the world has changed. Take another look at Drupal's Values & Principles and determine for yourself if they motivate you to action. This is not just a grab-bag candy game.

May 06 2020

Drupal 8 to 9 Upgrade

With Drupal 9 just around the corner, there's more and more buzz about preparing for the upgrade. From a project planning perspective, what do organizations need to consider when planning for the Drupal 9 upgrade? Developers may be wondering about the technical details: how, exactly, do you upgrade to Drupal 9? We’ve discussed who should upgrade to Drupal 9 and when to upgrade to Drupal 9. Here’s how to do it.

Drupal 9 Upgrade Project Planning

Plan a release window

Drupal 9 is currently slated for release in June 2020. Drupal 8.9.x is scheduled to reach end-of-life in November 2021, and older versions, such as 8.7.x, are slated to stop receiving security support in June 2020. You first need to plan a release window to upgrade to Drupal 8.9.x, so your site isn't left on an unsupported version. Once on Drupal 8.9, you can perform and release all the preparatory work described below. After that, you’ll be ready to plan a release window for the final upgrade to Drupal 9. 

For more on planning a release window, check out Drupal 8 Release Planning in the Enterprise. Remember to factor in other development work, updates for the rest of your stack, and other ongoing development projects, and give yourself plenty of time to complete the work.

Scope the upgrade project

To scope the upgrade project, you'll need to consider a couple of key factors:

  • Deprecated code that must be updated
  • Other development work that you'll do as part of the upgrade project

We'll dive deeper into how to check for and correct deprecated code and APIs shortly, but first, let's take a look at other development work you might do as part of the upgrade project.

Solicit feature requests from stakeholders

Does your website deliver stakeholder-required features using contributed modules that haven't yet been updated for Drupal 9? Do your key stakeholders want new features to better serve site users, or meet business objectives? 

For many organizations, major Drupal replatforming efforts have provided a cadence for other website development work, including new feature development. If it's been a while since your organization checked in with stakeholders, now might be a good time to do that. 

Regardless of whether or not you plan to deliver new feature development in the Drupal 9 upgrade project, it's a good idea to make sure you won't lose Drupal 8 contributed modules that provide functionality your stakeholders can't live without, unless you've got a new way to deliver that functionality in Drupal 9.

Architecture, content, accessibility audits and more

For sites that are already on Drupal 8, the Drupal 9 upgrade is different from many previous major version updates; Drupal 8 to Drupal 9 does not require a content migration, so there's no real need for a major information architecture audit and overhaul. In this new world, organizations should look at shifting the site redesign and content architecture cadence to an ongoing, iterative model.

How to Prepare for Drupal 9

Upgrade to Drupal 8.8 or Drupal 8.9

If you haven't already updated your Drupal 8 site to the most recent version of Drupal 8.8.x or 8.9.x, that's where you must start. Drupal 8.8 is a big milestone for API compatibility; it's the first release with an API that's fully compatible with Drupal 9. Practically speaking, this means that contributed modules released prior to 8.8 may not be compatible with Drupal 9.

Beyond API compatibility, Drupal 8.8 and 8.9 introduce further bugfixes, as well as database optimizations, to prepare for Drupal 9. If you upgrade your website and all contributed modules and themes to versions that are compatible with 8.9, those parts of your site should be ready for Drupal 9. 

Platform requirements 

One change between Drupal 8 and Drupal 9 is that Drupal 9 requires a minimum of PHP 7.3; Drupal 8 recommends but does not require 7.3. There are also new minimum requirements for MySQL, MariaDB, and other databases. And if you use Drush, your Drush version must be Drush 10. If you need to update any of these, you should be able to do so while still on Drupal 8. There may be other changes to the Drupal 9 requirements in the future, so double-check the environment requirements.
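If you manage your site with Composer, one way to surface PHP-version mismatches early is to pin the platform PHP version in composer.json. The fragment below is a sketch, not taken from any particular project:

```json
{
  "config": {
    "platform": {
      "php": "7.3.0"
    }
  }
}
```

With this in place, Composer resolves dependencies as though PHP 7.3.0 were installed, so a package that needs a different PHP version fails at update time rather than at deploy time.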

Audit for conflicting dependencies

Composer manages third-party dependency updates and will update Drupal dependencies when you do Composer updates. However, if anything else in your stack, such as contributed modules or custom code, has conflicting dependencies, you could run into issues after you update. For this reason, you should check your code for any third-party dependency that conflicts with the core dependencies. 

For example, Drupal 9 core requires Symfony 4.4, while Drupal 8 worked with Symfony 3.4. If you have contributed modules or custom code that depends on Symfony 3.4, you'll need to resolve those conflicts before you update to Drupal 9. If your code works with either version, you can update your composer.json to indicate that either version works. For instance, the following code in your module’s composer.json indicates that your code will work with either the 3.4 or 4.4 version of Symfony Console. This makes it compatible with both Drupal 8 and Drupal 9, and with any other libraries that require either of these Symfony versions.

{
  "require": {
    "symfony/console": "~3.4.0 || ^4.4"
  }
}

If you have code or contributed modules that require incompatible versions of third party libraries and won’t work with the ones used in Drupal 9, you’ll have to find some way to remove those dependencies. That may mean rewriting custom code, helping your contributed modules rewrite their code, or finding alternative solutions that don’t have these problems.
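Composer itself can help with this audit. Both commands below are standard Composer subcommands; the package names are just examples:

```shell
# Which installed packages require symfony/console, and with what constraints?
composer why symfony/console

# What currently prevents requiring a Drupal 9 compatible core?
composer why-not drupal/core ^9.0
```

composer why (an alias of composer depends) walks the dependency graph for you, and composer why-not (an alias of composer prohibits) lists the constraints that would block a proposed version.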

Check for deprecated code

Sites that are already on Drupal 8 can check for deprecated code using a few different tools:

  • IDEs or code editors that understand `@deprecated` annotations;
  • Running Drupal Check, PHPStan Drupal, or Drupal Quality Checker from the command line or as part of a continuous integration system to check for deprecations and bugs;
  • Installing the Drupal 8 branch of the Upgrade Status module to get Drupal Check functionality, plus additional scanning;
  • Configuring your test suite to fail when it tries to execute a method that calls a deprecated code path.

See Hawkeye Tenderwolf’s article How to Enforce Drupal Coding Standards via Git for more ideas. That article explains how Lullabot uses GrumPHP and Drupal Quality Checker to monitor code on some of our client and internal sites.  

Many organizations already have solutions to check for deprecated code built into their workflow. Some organizations do this as part of testing, while others do it as part of a CI workflow. In the modern software development world, these tools are key components of developing and maintaining complex codebases.

While you can do this check in any version of Drupal 8, you’ll need to do a final pass once you upgrade any older Drupal 8 version to Drupal 8.8, because new deprecations have been identified in every release up to Drupal 8.8.
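As a crude illustration of what these tools automate, the sketch below greps a throwaway, made-up custom module for one well-known deprecated function, drupal_set_message(), which was removed in Drupal 9. This is a toy example only; Drupal Check and Upgrade Status understand the full deprecation list and PHP semantics, which a grep cannot.

```shell
# Toy sketch: write a demo custom module file, then grep it for a
# known-deprecated call. Paths and module name are made up for the demo.
mkdir -p /tmp/d9-audit/example_module
cat > /tmp/d9-audit/example_module/example_module.module <<'EOF'
<?php
function example_module_notify() {
  drupal_set_message(t('Saved.'));
}
EOF

# A real audit would run Drupal Check here; this grep only catches one
# deprecated function by name.
grep -rn 'drupal_set_message' /tmp/d9-audit
```

The grep prints the file, line number, and offending call, which is the same shape of report (if far less thorough) that the dedicated tools produce.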

Refactor, update and remove deprecated code

If you find that your site contains deprecated code, there are a few avenues to fix it prior to upgrading to Drupal 9: you can refactor it by hand, guided by the deprecation notices, or use an automated tool such as Drupal Rector to update common deprecated calls.
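As a concrete illustration of what such a fix looks like, here is one widely cited deprecation. drupal_set_message() was deprecated in Drupal 8.5 and removed in Drupal 9; the messenger service is the real replacement API, but the snippet itself is a schematic sketch rather than code from any particular module. (It assumes a bootstrapped Drupal site, so it isn't runnable standalone.)

```php
<?php

// Before: deprecated in Drupal 8.5.0 and removed entirely in Drupal 9.
drupal_set_message(t('Settings saved.'));

// After: use the messenger service, available since Drupal 8.5.
// In classes, prefer injecting Drupal\Core\Messenger\MessengerInterface
// over calling the static \Drupal wrapper.
\Drupal::messenger()->addMessage(t('Settings saved.'));
```

Drupal Check flags the first call and reports the text of its @deprecated annotation, which usually names the replacement directly.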

Flag modules as Drupal 9 compatible

Once you’ve removed deprecated code from your custom modules, flag them as being compatible with both Drupal 8 and Drupal 9 by adding the following line to your module’s info.yml file.

core_version_requirement: ^8 || ^9

What about contributed modules?

If you're using contributed modules that rely on deprecated code, work with the module maintainers and offer help when possible to ensure that updates happen. You can check the drupal.org issue queues for Drupal 9 compatibility reports, or consult the Drupal 9 Deprecation Status page.

You should update all contributed modules to a Drupal 9-compatible version while your site is still on Drupal 8. Do this before attempting an upgrade to Drupal 9!

Update to Drupal 9

One interesting aspect of the Drupal 9 upgrade is that you should be able to do all the preparatory work while you’re still on Drupal 8.8+. Find and remove deprecated code, update all your contributed modules to D9-compatible versions, etc. Once that is done, updating to Drupal 9 is simple:

  1. Update the core codebase to Drupal 9.
  2. Run update.php.
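On a Composer-managed project using the drupal/core-recommended package, those two steps might look roughly like the following; the package and Drush commands are standard, but treat the exact constraints as a sketch to adapt to your own project:

```shell
# Step 1: update the core codebase to Drupal 9
# (assumes a drupal/core-recommended based project).
composer require drupal/core-recommended:^9.0 --update-with-dependencies

# Step 2: run the database updates (the Drush equivalent of update.php),
# then rebuild caches.
drush updatedb
drush cache:rebuild
```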

Drupal 9.x+

The Drupal Association has announced its intent to provide minor release updates every six months. Assuming Drupal 9.0 releases successfully in June 2020, the Drupal 9.1 update is planned for December 2020, with 9.2 to follow in June 2021.

To make Drupal 9.0 as stable as possible, no new features are planned for Drupal 9.0. The minor updates every six months may introduce new features and code deprecations, similar to the Drupal 8 release cycle. With this planned release cycle, there is no benefit to waiting for Drupal 9.x releases to upgrade to Drupal 9; Drupal 9.0 should be as stable and mature as Drupal 8.9.

Other resources

Apr 22 2020

Drupal 8 to 9 Upgrade

As the release of Drupal 9 approaches, organizations are starting to think about when to upgrade. Rapid adoption of a new Drupal version isn't automatic; historically, it has taken years for some major Drupal versions to gain traction. With a relatively short window between the Drupal 9 release and Drupal 8's end-of-life, however, organizations must move more quickly to adopt Drupal 9 or make other arrangements.

No penalty for early Drupal 9 adopters

A common strategy for many technology release cycles is to avoid the initial version of a major software release. Some organizations wait until one or more point releases after a new version, while others prefer to wait months or even years after a major version release for things like bug fixes, additional features, and helpful resources created by early adopters. In the Drupal world, this delay is often exacerbated by waiting for contributed modules to be compatible with the new version.

The nice thing about Drupal 9 is that there is no penalty for early adopters, so there's no reason to wait for a later version. The initial Drupal 9 release introduces zero new features; Drupal 9.0 core code matches Drupal 8.9 core. The only differences between Drupal 9.0 and Drupal 8.8 or 8.9 are the removal of deprecated code and required upgrades to third-party dependencies.

The primary consideration is whether or not your favorite contributed modules have declared that they are Drupal 9 compatible. With past upgrades, waiting for contributed modules to be ready for the new Drupal version caused months or even years of delays. But the Drupal 9 upgrade path for contributed modules is relatively easy, so they should be able to adapt quickly. Many modules are already compatible, and others will need minimal changes.

When your code is ready

One of the core components of the Drupal 9 upgrade is the removal of deprecated code in Drupal 8. However, this means that when you're planning your release window, you'll need to schedule some time for the pre-work of auditing and refactoring deprecated code. If you've already been doing this, you may not need to pad your schedule to compensate for this work. We'll dive deeper into how to get your code ready in a future article.

In addition to scheduling time to address code deprecations, you'll also need to give yourself time to address any third-party dependencies that require newer versions in Drupal 9. When you're looking at when to upgrade to Drupal 9, you should do it after you've had a chance to resolve any third-party dependency updates that conflict with other things in your stack. If you've got a contrib module or custom code that requires an older version of a third-party dependency, but Drupal 9 calls for a newer version of that dependency, you'll need to make a plan and address this conflict before you upgrade to Drupal 9.

Consider other website work

Many organizations have traditionally used major Drupal version migrations as a time to plan overall website redesign projects, information architecture work, and other web development projects. Because the upgrade to Drupal 9 is more like a minor release than a major one, there's no need for a deep dive into information architecture; there's no migration! That means your organization needs to establish a new strategy for these projects; an upcoming article on web development strategy for Drupal 9 will offer more insights.

If business logic dictates that your organization plan other web development projects for this year, make sure you give yourself time to complete the Drupal 9 upgrade before Drupal 8 reaches end-of-life in November 2021. 

Take the availability of preferred partners and development teams into account

If you're planning to work with vendor partners, make sure you factor their availability into your project plan. With an upgrade window of slightly over a year between the release of Drupal 9 and the end-of-life of Drupal 8, some vendor partners may have limited availability, especially if yours is a larger project. Planning ahead helps to ensure you can work with your preferred partners; otherwise, you might add the stress of working with a new partner into the mix.

At the same time, don't forget about internal initiatives such as serving multiple stakeholders. For example, doing new feature development for content editors while simultaneously maintaining an up-to-date platform consistent with your organization's security policies can mean a dance to prioritize development resources to meet everyone's priorities and deadlines. While this complicates the release planning process, it's essential to consider these factors when determining the timing of upgrading to Drupal 9.

We dipped our toes into these considerations in Drupal 8 Release Planning in the Enterprise, and hope to release an updated version of this article soon for Drupal 9 release planning.

Missing the Drupal 9 upgrade window

To summarize, you should upgrade to Drupal 9 sooner rather than later. But what if your site can't upgrade to Drupal 9 before Drupal 8 reaches end-of-life? Unlike Drupal 7, Drupal 8 does not have an extended support program. The upgrade from Drupal 8 to Drupal 9 is such a minor replatforming effort compared to prior versions that the decision was made not to offer one. 

Support will continue through November 2021 for sites upgraded to 8.9.x, but support for that version ends when that Drupal 8 end-of-life date arrives. Older Drupal 8.x versions will cease getting support before that date; 8.7.x stops getting security support as of June 3, 2020, and security support ends for Drupal 8.8.x on December 2, 2020.

Long-term, your organization needs a plan to upgrade to Drupal 9, or to consider other options. A future article in this series offers more information about what that plan might look like.

Thanks to the Lullabot team for contributing to this article and to Dachary Carey for drafting it.

Karen Stevenson


Karen is one of Drupal's great pioneers, co-creating the Content Construction Kit (CCK) which has become Field UI, part of Drupal core.

Apr 15 2020

Drupal 8 to 9 Upgrade

This article is the first in a series discussing Who, What, Why, and How Drupal 8 sites can upgrade to the upcoming Drupal 9 release. A future series will discuss upgrading Drupal 7 sites.

With Drupal 9 scheduled for release in summer 2020, and with both Drupal 7 and Drupal 8 scheduled for end-of-life (EOL) in November 2021, it’s time to think about whether to upgrade Drupal sites to the new version. Upgrading to newer versions in the past was a significant replatforming effort that required a substantial investment and a non-trivial release window. The Drupal 8 to Drupal 9 upgrade is different, though; this is the first major version upgrade that’s reputed to be as simple as a minor point release. Can it really be that simple? Who should upgrade to Drupal 9?

The Easy Path: Upgrading from Drupal 8

Organizations that are already on Drupal 8 are several steps ahead in upgrading to Drupal 9. One of the biggest benefits of upgrading to Drupal 8 is that the platform and core code of Drupal 8 form the basis for Drupal 9. 

Drupal 9.0 doesn’t introduce any new features or new code, so sites that are on the final Drupal 8 point release are essentially ready to upgrade to Drupal 9.0. No big lift; no major replatforming effort; no content migration; just a final audit to make sure the site doesn’t rely on any deprecated code or outdated Composer dependencies. 

Sites that have kept up-to-date with Drupal 8’s incremental updates (see Andrew Berry’s article Drupal 8 Release Planning in the Enterprise) should be ready to go when it comes to core code. Many sites are already using automated tools or workflows to keep them up-to-date on code deprecations for contributed and custom modules. Ideally, you have been continuously replacing older versions of contributed modules with versions that have removed deprecated code, removing deprecated code in your custom code, and dealing with any Composer dependency conflicts. If so, the upgrade effort for your site should be relatively simple. The same is true if you rely on widely-used and well-supported contributed modules and have little custom code.

If you have custom code and use less widely-used contributed modules, but you’ve been paying attention to code deprecations in your custom code and the readiness of your contributed modules, you’re probably in a good position to upgrade. If you have strong test coverage and aren’t relying on any deprecated third-party dependencies, you’re in even better shape. You shouldn’t see substantial changes from Drupal 8 to Drupal 9.0, so even custom code is likely to work without issue as long as it doesn’t rely on deprecated functions or methods that are removed. 

The caveat is that if your custom code or contributed modules rely on older versions of Composer dependencies that are deprecated in Drupal 9 in favor of newer versions, you may need to do some refactoring to make sure that code works with the new third-party dependencies.

Can you stay on Drupal 8 past its EOL?

There should be no reason for anyone on Drupal 8 not to upgrade to Drupal 9.  There will be a small window of time until November 2021, during which the last Drupal 8 release will be supported with security updates. That allows time to make the necessary changes to move to Drupal 9. But after that, you’ll need to make the switch.

When Drupal 6 reached its end of life, there was a Long Term Support (LTS) program, which made it possible to stay on Drupal 6 past its EOL. There are plans to provide an LTS program for Drupal 7; however, there will be no Long Term Support program for Drupal 8 because the upgrade path from Drupal 8 to Drupal 9 is much easier.

If you don’t make the move, you’ll be on your own to deal with security updates and other maintenance and bug fixes for your Drupal 8 code. And that would likely be more expensive and time-consuming than just doing the upgrade.

Prepare to upgrade or start considering alternatives

With the Drupal 9 upgrade being relatively uncomplicated for sites that are already on Drupal 8, it's easy to recommend that those sites should upgrade. The main question is when, and what other options do you have? Later articles in this series will delve into more detail about how to prepare for the upgrade.

Thanks to the Lullabot team for contributing to this article and to Dachary Carey for drafting it.


Apr 15 2020

Suzanne is a co-founder of Evolving Web. She's in charge of design, user experience, and development work. She also provides in-depth Drupal training to clients and thought leadership to the Drupal community.

Suzanne is an elected member of the Drupal Association Board and frequently makes presentations at professional events about development best practices and user experience. She loves traveling around helping teams learn Drupal, and making friends and exploring new cities in the process. 

Apr 08 2020

The team of editors who populate content on Newspaper, Inc. has asked the development team to add another image field on the articles they write, one that caters to Pinterest sharing. The project manager writes up the ticket, and the development team gets to work. They add the field to the Article content type. They demo it later that week, and everyone seems happy.

Then some complaints start coming in. A few of the editors can't find the field on the articles they are writing. The development team checks to make sure the field is still there and that everyone has the right permissions, but everything checks out. They ask for a demo of the problem.

One editor attempts to create content of the type News Item. The field, of course, is not there. "That's not an article," the development team explains. 

"What? Of course this is an article!" says the editor, a little frustrated. The development team can't detect the lack of an uppercase letter in the word "article." So they go on to pedantically explain what Drupal content types are, what a real Article is in terms of development, and all the while, the editor just gets more and more frustrated. This organization could benefit from a shared, ubiquitous language.

What is a ubiquitous language?

The term "ubiquitous language" comes from the world of Domain-Driven Design (DDD), specifically from the book by Eric Evans. Still, you don't have to fully commit to DDD to benefit from some of its recommendations.

A ubiquitous language is a vocabulary shared by everyone involved in a project, from domain experts to stakeholders, to project managers, to developers. This language should be used in all documentation, tickets, conversations, meeting invites, post-it notes on your computer, chalkboards that only appear in your dreams — everywhere. The goal is to reduce the ambiguity of communication.

Ideally, each term that surfaces corresponds to something concrete in the code. When people talk about an event, there is a content type named Event in your Drupal installation, for example. 

But don't stop at just content types. The name of a View should also reflect how people actually talk about it. Or a block. If your marketing team consistently calls the email capture block a "lead generation form," think twice before naming it "newsletter sign up" in your code. Your module names should also reflect the vocabulary used by the organization and not just the whims of the development team.

Driven by stakeholders or domain experts

The development of a ubiquitous language is an ongoing conversation, but the terms used should be driven by stakeholders and domain experts, not developers. If your editors think of everything they post on the website as an Article, then there should probably be a single content type named Article. A taxonomy field can handle the different contexts, with the terms themselves drawn from a subset of the ubiquitous language, such as News Item, Press Release, or Announcement. (Or, leverage some of the techniques listed below to unify the concept in code.) But this depends on the organization.

In some places, editors and teams might be used to referring to different types of content by different names, even though they might be similar. In that case, it might make sense to have separate content types for News Item and Announcement. That may not be ideal from a technical perspective, but if it eases communication and reduces ambiguity, it's worth it.

When you have a true ubiquitous language, stakeholders and end-users can talk with developers about things in the code with some confidence. Conversations can delve deeper. Problems, needs, and solutions will surface with greater clarity.

Benefits of having a ubiquitous language

First, clearer communication and less ambiguity, which make misunderstandings and mismatched expectations less likely.

Second, a useful shorthand can develop. Once the ubiquitous language becomes more ingrained, and good habits are established, an enormous amount of information and context is communicated when someone uses the word "article."

Third, people involved become more aware of potential issues when they come up, creating a beneficial feedback loop. Developers can push back against terms that are ambiguous or inconsistent. Stakeholders have an extra leg to stand on to insist that developers use terms that they can understand or ask questions about words that seem too awkward to use in everyday conversation. This leads to more back and forth communication that strengthens the shared language and makes it more relevant for everyone. 

Naming things is one of the hardest problems in programming, and a ubiquitous language lets developers offload some of that cognitive effort.

If your editors are continually using the word Lede to refer to the initial summary of an article that appears in lists of articles, it surfaces an opportunity to have a specific field for this concept. In turn, this leads to greater clarity in code and is an opportunity to make the whole system better and more robust. 

We didn't name things right. Or the names evolved. What now?

Things are rarely right the first time, and in today's development world, iteration is king. So what can you do if the code doesn't reflect the language you are using, and you're afraid the concrete has been drying for too long?

  1. Change the label. While machine names in Drupal can be harder to change, labels are easy. This isn't ideal, because machine names can be used throughout custom code, but at least there will be something concrete on the website, and in the code, that conforms to the ubiquitous language. 
  2. Use Decorator functions and classes. If you have several content types that should map to a single concept in your ubiquitous language, like the Article in the example that opened this article, consider using a wrapper class. A wrapper class called Article can encapsulate various content types and add functions for specific business logic. It is this wrapper class that is used throughout your code and that maps to the vocabulary of the ubiquitous language.
  3. A mapping document in your documentation that everyone understands. This helps translate concepts from the ubiquitous language to the code. In practice, this could become just another document that seldom gets updated (the threat hanging over all documentation), and so should be used as a last resort. But it could be helpful in certain situations. Always try to map directly to the code, if possible.
  4. Migration. In some cases, migrating several content types into a single type, so your Drupal installation better matches the ubiquitous language might be worth the time and effort. If you are preparing to make a big development push and need better foundations, or you're migrating from another CMS or to a newer Drupal version, the opportunity might present itself.
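As a rough sketch of option 2, a hypothetical wrapper class could unify several content types behind the single term editors actually use. The class, method, bundle, and field names below are illustrative assumptions, not Drupal APIs:

```php
// Illustrative wrapper that maps several node bundles to the single
// "Article" concept in the ubiquitous language. All names are hypothetical.
final class Article {

  // Node bundles that editors collectively call "articles."
  private const BUNDLES = ['article', 'news_item', 'press_release'];

  private $node;

  private function __construct($node) {
    $this->node = $node;
  }

  // Fails loudly when a node is not an "article" in the shared vocabulary.
  public static function fromNode($node): self {
    if (!in_array($node->bundle(), self::BUNDLES, TRUE)) {
      throw new \InvalidArgumentException('Not an article: ' . $node->bundle());
    }
    return new self($node);
  }

  // Business logic is expressed in the terms editors use, e.g., the "lede."
  public function lede(): string {
    return $this->node->get('field_lede')->value ?? '';
  }

}
```

Code that consumes articles can then type-hint Article and speak the shared language, regardless of which underlying content type it wraps.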

Conclusion

A ubiquitous language used everywhere throughout your organization, with discipline and intention, has the power to help make your communication easier and your code more maintainable. By mapping the shared language directly to concepts in the code, and in this specific case, Drupal content types, requirements can be made more explicit, and discussions between stakeholders and developers can be more profitable.

Beyond Domain-Driven Design by Eric Evans, referenced above, you can also check out Domain-Driven Design Distilled by Vaughn Vernon, which is exactly what its title suggests. It contains a good summary of ubiquitous languages.

Apr 02 2020

Drupal 8 has allowed and fostered the use of new design patterns through both Drupal core itself and through some well-known contributed modules. Those patterns are generally oriented to dealing with Drupal the framework, as expected. But, what about your business logic?

Sticking to using just core APIs and concepts is enough to get past the coding standards stage in a codebase. Still, you can take it a bit further and have cleaner business logic that better reflects what's going on in your application. That should also help to avoid what is known as the Big Ball of Mud.

With that goal in mind, this article introduces a few artifacts from Domain-Driven Design (DDD) into Drupal projects: Application Services and Request/Response Objects. 

Anatomy of an Application Service (AS)

For the sake of simplicity, application services are defined as the classes responsible for controlling the execution flow of our application. They coordinate entities, repositories, other services of your domain, and other infrastructure services.

Another, more intuitive way, is to say that an AS is essentially a use case of the application. If there's a piece of logic that takes some input that is not coming from your domain (e.g., a UUID), does some processing, and persists some data in the database, in one or more entities, that's a candidate to be an Application Service.

DISCLAIMER: Note that in DDD, there's the concept of Domain Services, which are similar to but not quite the same as AS. To keep this post simple and actionable as a single, small step to simplify a Drupal codebase, know that they exist, and are worth reading about as well.

For example, Drupal's BanIpManager class has the following methods:

  • isBanned
  • findAll
  • banIp
  • unbanIp
  • findById

If this were modeled as custom logic instead of as framework logic, you'd end up creating two different Application Services:

  • BanIpService
  • UnbanIpService

And they both would probably use, via Dependency Injection, a BannedIpRepository. 
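A minimal sketch of that split might look like the following. The repository interface here is an assumption for illustration, not the ban module's actual API:

```php
// Hypothetical repository contract shared by both services.
interface BannedIpRepositoryInterface {
  public function save(string $ip): void;
  public function delete(string $ip): void;
  public function exists(string $ip): bool;
}

// One use case per class: banning an IP address.
final class BanIpService {

  private $repository;

  public function __construct(BannedIpRepositoryInterface $repository) {
    $this->repository = $repository;
  }

  public function execute(string $ip): void {
    // Idempotent: banning an already-banned IP is a no-op.
    if (!$this->repository->exists($ip)) {
      $this->repository->save($ip);
    }
  }

}
```

UnbanIpService would mirror this shape, calling delete() instead.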

So how do you create an AS? Generally speaking, an AS is a normal class with a constructor and a single public method. Other methods can be added for internal logic, but the public API will be only one method, like this:

class CreateNewClientWithSubgroups {

  protected $dep1;

  public function __construct(Dep1Interface $dep1) {
    $this->dep1 = $dep1;
  }

  public function execute(CreateNewClientWithSubgroupsRequest $request): CreateNewClientWithSubgroupsResponse {
    // Do necessary checks before attempting any db changes.
    // Throw exceptions as needed.
    // Perform changes and return a response object.
    return new CreateNewClientWithSubgroupsResponse($data1, $data2, $data3);
  }

}

The constructor method doesn't need an explanation and is used to pass the dependencies of our service. The execute one gets a bit more interesting. Notice that it has only one argument, a CreateNewClientWithSubgroupsRequest object, and it returns a CreateNewClientWithSubgroupsResponse object. These will be covered shortly, but first, look at the benefits of this simple change:

  • For starters, we're way closer to the "S" (Single responsibility) of SOLID than if we had placed this service as a method in a larger class.
  • As a consequence, our service is likely to have fewer dependencies, making it simpler to test.
  • With the logic encapsulated, it's easier to understand what's going on during that specific operation that takes place in our domain.
  • From a structural point of view, and specifically for maintaining codebases over a long period, this makes the codebase more expressive: a developer can open an Orders/ApplicationServices/ directory and see at a glance all the different use cases that can happen in the system for orders.
  • It's easier to reuse logic from other clients. Imagine a feature that comes out of an editorial requirement. In most Drupal projects, you'll see the logic for the operation mixed into the Form class, as part of the submit handler. Next week, a CI job happens to need that feature too, so another developer has to create a Drush command for it. Chances are that code will be copied and pasted into the command file instead of reused. But if the logic is modeled as an AS in the first place, the same service, already in place and tested, can be called from the command because it's no longer coupled with the web UI (Form API).
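The reuse point can be sketched like this. Both entry points delegate to the same service; the names are illustrative, and the service takes plain strings here for brevity (request/response objects are covered in the next section):

```php
// Hypothetical Application Service with a single use case.
final class PublishArticleService {

  public function execute(string $title): string {
    // Imagine validation and persistence here; return an identifier.
    return 'published:' . $title;
  }

}

// Web UI entry point: the form submit handler only translates input
// and delegates to the service.
function publishArticleFormSubmit(array $values, PublishArticleService $service): string {
  return $service->execute($values['title']);
}

// CLI entry point: a Drush command callback reuses the very same
// service instead of copy-pasting the logic.
function publishArticleDrushCommand(string $title, PublishArticleService $service): string {
  return $service->execute($title);
}
```

Either caller exercises the identical, already-tested code path.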

Request and Response Objects

Back in the AS, as mentioned, there's only one public method called execute. One significant benefit of that is consistency. Every service or utility (or developer!) making use of AS knows it has the same entry point and is more aware of where to find details regarding what the AS is doing and how it's doing it. Such consistency is particularly useful to make the service work with a Command Bus.*

A more important part of the execute method signature is its arguments. Generally speaking, you want this method to accept only a request object and return a response object. Both are essentially immutable Data-Transfer Objects (DTOs), which by definition are only used to pass data around. They're generally named after the service itself, with a Request or Response suffix, respectively. A request object would look like this (doc-blocks omitted):

class CreateNewClientWithSubgroupsRequest
{

    private $clientName;
    private $groupName;

    public function __construct(string $clientName, string $groupName)
    {
        $this->clientName = $clientName;
        $this->groupName = $groupName;
    }

    public function getClientName(): string
    {
        return $this->clientName;
    }

    // Other getters...
}

Similarly, the response object would contain the data relevant to the service. Following the example, in this case, it could return the IDs of the subgroups created. Here are a few of the immediate benefits of this:

  • No more array madness. If you've been around long enough in the Drupal world, you might fancy this. If you need to expand your service to use a new parameter, update the DTO signature.
  • For complex scenarios where the request parameters are many or the instantiation changes depending on the context, you get encapsulated logic for those scenarios. With objects, you can resort to having multiple static factory methods on the class, and even declare the constructor as private to make sure developers using it look for the appropriate method. For even more complex cases, a factory class to instantiate request objects can be a good choice as well.
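One possible shape for the response object, here carrying the new client ID and subgroup IDs instead of the generic $data1..$data3 placeholders used earlier, could be:

```php
// Immutable DTO returned by the service; it only carries data.
class CreateNewClientWithSubgroupsResponse
{

    private $clientId;
    private $subgroupIds;

    public function __construct(int $clientId, array $subgroupIds)
    {
        $this->clientId = $clientId;
        $this->subgroupIds = $subgroupIds;
    }

    public function getClientId(): int
    {
        return $this->clientId;
    }

    public function getSubgroupIds(): array
    {
        return $this->subgroupIds;
    }
}
```

With no setters, callers can read the result but never mutate it after the service returns.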

The most powerful property of this approach comes not only from the clarity of purpose-specific artifacts but also from the impact it can have on the overall system. Since these DTOs are typed, they can be easily serialized and stored in a database. This makes for a consistent way to track requests for actions that need to happen within your application but are not required to run at the exact moment they're requested. The MenuTreeParameters class in Drupal 8 core is a good example of a DTO.

It's worth noting that, generally, it might not be a good idea to have static factory methods to instantiate your classes. However, in this scenario, that is perfectly fine because, as mentioned above, we're just dealing with a DTO, which is purely for storage, retrieval, serialization, and deserialization. In short, we'd be using a Named Constructor, not to be confused with service instantiation via static factory methods.
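A minimal sketch of a named constructor on a request DTO, using a hypothetical BanIpRequest, might look like this:

```php
// DTO with a private constructor: callers must use the
// intention-revealing named constructors below.
final class BanIpRequest
{

    private $ip;

    private function __construct(string $ip)
    {
        $this->ip = $ip;
    }

    // Validates on creation, so the rest of the code can trust the DTO.
    public static function fromString(string $ip): self
    {
        if (filter_var($ip, FILTER_VALIDATE_IP) === FALSE) {
            throw new \InvalidArgumentException('Not a valid IP: ' . $ip);
        }
        return new self($ip);
    }

    // A second, context-specific way to build the same DTO.
    public static function fromServerParams(array $server): self
    {
        return self::fromString($server['REMOTE_ADDR']);
    }

    public function getIp(): string
    {
        return $this->ip;
    }
}
```

Because each factory method validates its input, a constructed BanIpRequest is always in a valid state.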

Closing and Trying Things Out

Two of the main tactical artifacts of DDD, Application Services and Request/Response objects, have been covered. With these tools, you can start to simplify the code of your Drupal projects and shape them in a way that will bring about not only a more expressive codebase but also scalability and performance improvements, if you choose to go that way. 

Architecturally, modeling logic this way is one step toward decoupling it from the Drupal framework. While that might seem of little value if Drupal is the only framework you've worked with, it has a lot of potential benefits: you can eventually extract parts of the application into a completely separate application that communicates with the main one via an API, message queues, etc. That allows you to experiment with new frameworks in low-risk areas of your business, or segregate certain logic into separate services maintained by different teams, without having to tear the whole thing apart or start from scratch.

If you're curious about how this would look in your project, try it! Find some business logic that meets any of the following criteria (bonus points if it meets all four):

  • Is in the submit handler of a Form API class
  • Is in one of the ever-present Entity API hooks
  • Is left alone in a Drush command file
  • Is in a .module file, as a standalone function

After you find that piece of logic, create a separate Service class for it. Give it only one public method, which receives a request object and returns a response object. That's it. You're done!

DDD consists of a broader range of concepts and artifacts, which can and should be combined following certain rules. Only a few of them are mentioned in this article, but hopefully, they have sparked some interest in you. If that's the case, you might enjoy some of the existing literature around this topic, with books like Domain-Driven Design in PHP (recommended if you're just starting), Domain-Driven Design Distilled, Domain-Driven Design, or Implementing Domain-Driven Design.

* If you're unfamiliar with this pattern, a Command Bus is a way to separate infrastructure actions that are meant to happen alongside the changes the service is performing on the application but are not really part of the domain in which the service is interested. Such actions can be logging of certain events, shutdown processes, opening and closing database transactions, etc.


Mar 20 2020

Fostering Inclusion in Tech

In the previous article of this series, we talked about how fostering diversity, equity, and inclusion (DEI) in an organization is no easy feat. However, there are steps you can take to help get you on your way. When it comes to the hiring process specifically, it's important to hire in the spirit of openness, transparency, and accountability, and to have a shared vision of what constitutes success for the new position. By offering a welcoming application process, organizations attract excellent candidates who participate and collaborate well with the existing mission, vision, and values. As we continue learning how to do this successfully, we've rounded up some tips that might be useful to your organization.

Evaluate the Job Title

Look at current job-seeking tools like Indeed, Glassdoor, and Idealist to make sure the current job title, description, and range of responsibilities are appropriate and reasonable for that role. Modifying the title or level of the position to match generally accepted standards (i.e., the difference between "Senior Product Manager" vs. "Product Manager" vs. "Project Manager" vs. "Product Associate") may make the difference in who applies. 

Be Thoughtful with Language

Terms like "rock stars, ninjas, unicorns" do not suffice as descriptive language. Identify the bulleted list of actual skills required, as well as the desired background or experience, and consider the implications of the language. Hiring for a "unicorn" or a "collaborative team player" will receive different responses: the first from unicorns, the second from team players. Matthew Tift, James Sansbury, and Matt Westgate discuss "The Imaginary Band of Rock Stars at Lullabot" on the Hacking Culture podcast.

Cut out jargon to focus on the required skills and listed responsibilities of the job. If these are not yet clear, re-evaluate the role and its job description, and list out how a person will succeed in the role. 

Ruby Sinreich (http://lotusmedia.org), a web developer, technologist, and strategist who has worked in progressive advocacy organizations and online communities for over two decades, suggests the following tools for minimizing bias within the text (from the Drupal Diversity and Inclusion group):

Identify and Make Any Assumptions Explicit

List all relevant aspects of the position to attract the correct type of applicants and make the implicit assumptions of who can work in this role transparent.

Sample questions to address in the description:

  • Is travel included or required in this job?
  • Is there a need to lift heavy objects or crawl under desk spaces?
  • Is this a remote job or an on-site job?
  • Is the position salaried, contract, temp-to-hire?

Include all non-negotiable aspects of the work up front, and be explicit about what constitutes success. For example, a recent job description for a Tugboat Enterprise Account Executive position provided a coherent, attainable measure of success:

Like anything, we understand it takes a bit of time to ramp up to a new gig. At the end of 6 months, Lullabot will have spent roughly $70,000 in wages for the position, and we'd be looking to come in a little above break-even with this investment. Our minimum expectation is to hit a Monthly Recurring Revenue goal of $20,000 of new business by the end of six months.

Other questions to ask when measuring success include: Am I enjoying the work? Is the market opportunity substantial? Am I having fun? 

Publish the Pay Range

Include a pay range and whether or not the role is salaried, temp-to-hire, short-term contract, or a long-term contract position. When you provide a salary range, studies show that this level of transparency increases job applications by 30%. After all, no one wants to go through a lengthy hiring process only to find out the role isn't a financial fit.

Consider being clear about salary range, requirements, and perhaps, bands inside the role, and you'll come to a quicker agreement with the final candidate who has understood salary expectations from the beginning.

Clearly List Benefits

For many, health, vision, dental, retirement matching, flex-time, parental leave, paid time off, holidays, and add-ons like fitness or technology budgets make a job significantly more attractive. At times, benefits might even be a determining factor. Display them in the "Work" or "Careers" section; for example, our annual Events and Education budget is listed publicly on our website, among other benefits we offer.

Encourage People from Marginalized and Underrepresented Groups to Apply

Consider adding language that encourages applicants who identify as being from an underrepresented community to apply. Going beyond the standard "equal opportunity" language will make your job description appeal far more to diverse groups of people.

Comply with Federal, State, and Local Guidelines

Make sure the organization complies with any guidelines regarding harassment and discrimination. Here is some sample language about how hiring committees might consider candidates (from Green America's hiring statement):

All qualified applicants will receive consideration for employment without discrimination regarding actual or perceived

  • race, 
  • color, 
  • religion, 
  • national origin, 
  • sex (including pregnancy, childbirth, related medical conditions, breastfeeding, or reproductive health disorders),
  • age (18 years of age or older),
  • marital status (including domestic partnership and parenthood),
  • personal appearance,
  • sexual orientation,
  • gender identity or expression,
  • family responsibilities,
  • genetic information,
  • disability,
  • matriculation, 
  • political affiliation,
  • citizenship status,
  • credit information, or
  • any other characteristic protected by federal, state, or local laws.

Harassment on the basis of a protected characteristic is included as a form of discrimination and is strictly prohibited.

Focus on the Organization's Culture: Mission, Vision, and Values

Culture is one of the most significant determinants of whether or not the candidate will continue through with the process of applying. How do you attract high-quality teammates? The current organizational mission, vision, and publicly-stated values make a difference. What does the company, team, or project stand for? Say it loud and proud, and make sure the applicant understands organizational values. A blog post, "About" page, or video linked inside the job application will make values clear.

Circulate the Listing to Diverse Audiences

Change up and expand the networks where job listings get circulated. For example, Historically Black Colleges and Universities (HBCUs), remote job boards, community groups that focus on a specific area, industry, or desired applicant pool, and many Slack channels have job postings. Consider sharing the job post with these networks first, and then expanding it to general job boards. Some examples include:

Identify Scoring in Advance

Have a sheet that lists out the evaluation system used when evaluating applicants. If possible, include this in the job description to surface candidates who will be able to speak to the desired points and provide transparency in how they will be scored. In parallel, this procedure works when evaluating RFP respondents; for example, here's a sample questionnaire (in the footer is the scoring mechanism) to evaluate a website redesign. 

Same Interviewers, Same Questions

To make a fair assessment, have all interviewers ask the same questions of all the finalists. Use the predetermined points system when interviewers compare notes. Evaluate against the organization's stated responsibilities, and cross-check against mission, vision, and values.

Consider Implementing the Rooney Rule

The Rooney Rule is a National Football League policy that requires league teams to interview ethnic-minority candidates for head coaching and senior football operation jobs. Consider making an effort to interview at least one woman or other underrepresented candidate for the role, to mimic the NFL's results: after instituting the Rooney Rule (definition from the NFL) in 2002, the overall percentage of African-American coaches had increased from 6% to 22% by the start of the 2006 season.

Offer Alternate Ways of Interviewing

If being successful in a particular role requires a whiteboard walkthrough, 20-minute brainstorming exercise, video or written component, teleconference demonstration, or another method, it is appropriate and understandable to ask for this during the interview process. For example, if the role requires teleconferencing, allow for one of the interviews for the finalists to be held on the teleconferencing software needed. However, don't make these the only mechanisms for evaluation. 

Consider offering multiple ways to answer questions to help the team make the best decision. It's also appropriate to ask for an existing portfolio or demonstration of existing products or tools that are relevant to the job. For example, if you're hiring a designer, asking for a walkthrough of the three design projects the candidate is most proud of is appropriate. 

For further reading, there's another in-depth review of the hiring process on MoveOn CTO Ann Lewis's blog, "How We Hire Tech Folks." Thanks to James Sansbury, Marc Drummond, and Andrew Berry, for reviewing and providing thoughtful comments and feedback.

Mar 20 2020

Mike and Matt talk with organizers of DrupalCon Europe about the organization of the conference, COVID-19, and differences between it and DrupalCon North America.

Mar 18 2020

One of the founders of Lullabot and former CEO, Jeff Robbins, used to joke that Lullabot has "built-in disaster recovery" because the employees are accustomed to working from just about anywhere. Lullabot, one of the first Drupal consulting companies, started in 2006 after Matt Westgate and Jeff Robbins met on Drupal.org. Drupal has been at the heart of Lullabot's work for more than 14 years, and the core of what Jeff suggested could apply similarly to the Drupal community.

As each of us negotiates a world where COVID-19 dominates the headlines and our everyday interactions, this article considers how some of the lessons that the Drupal community—perhaps an idealized Drupal community—has learned might shape our understanding of these times that feel so extraordinary. Drupal does not have a monopoly on any of these concepts, but in stressful times, similes and metaphors can help us interrogate our underlying assumptions and the communities that we have each constructed.

You Don't Have to Do Anything

Free software communities thrive when people contribute in the ways that feel comfortable to them rather than out of guilt. People support the Drupal community in a wide variety of ways, and we encourage people who choose to contribute to the project to have fun and enjoy the process of contributing. Sometimes this means the best choice is to take a step back and not contribute at all. The Drupal community is huge, with nearly 5,800 contributors to Drupal core alone, and it's okay for people to pause once in a while—or altogether—and let others step forward.

As COVID-19 spreads through the world, and the world works together to slow the progress, sometimes the best option is for us to stay home. This recommendation goes against the natural human urge to fix things, but we can bring to mind the fact that, with Drupal and viruses alike, we simply can't fix everything. No one of us can "fix" the more than 95,000 open issues in Drupal core any more than we can "fix" the very real devastation caused by COVID-19. You can contribute, or you can do nothing, and the world will continue without you. There is no reason to feel guilty about taking a break and pausing to examine what is important to your life.

Honor Your Family

Historically, the Drupal community has supported people who have needed to take a break and focus on themselves or their families. From daily interactions to the highly-visible gestures of support, such as when Aaron Winborn needed it, members of the Drupal community have offered countless acts of kindness.

In a recent example from just weeks ago, before most of us had ever heard of COVID-19, our friend, colleague, and long-time Drupal contributor, Jerad Bitner, needed help after his wife received a diagnosis of Stage 4 Brain Cancer. Jerad and his family have received assistance from people in all areas of their lives, and it was especially heartening to see so many people from the Drupal community among the impressive list of supporters.

While the Drupal community may seem like it exists and organizes itself primarily on the web, in a "socially distant" manner, it can present itself in very human and sincere ways when our members need assistance. Take the time to focus on yourself and your family during this period of uncertainty and take comfort in the fact that the Drupal community has a remarkable capacity to support its members in times of need.

Get Off the Drupal Island

Especially since Drupal 8, the Drupal community has learned about the benefits of drawing from other communities. When we partner with others "off the island," we can save ourselves a lot of work.

Likewise, we can take what we have learned to help others. For those of us with the good fortune to have a job working for one of the many Drupal agencies with "built-in disaster recovery," we have a unique chance to help others. We can use our technical knowledge of online collaboration tools, microphones, cameras, and more to act as resources to those with less technical experience.

We don't have to go far to get off the "island." All around the world, meditation centers, yoga studios, churches, synagogues, and other places people seek during stressful times are scrambling to transform their services models and move them online. We in the Drupal community have an opportunity to volunteer our skills and knowledge to support organizations like these, both non-profits and for-profits. These are not business opportunities, but rather opportunities to help our neighbors. We can share our recommendations about open-source options for collaborating online, such as OBS Studio and Jitsi Meet. Perhaps our station in life allows us to donate money to organizations that need help, such as the food shelves that provide food and groceries to kids in areas where schools have closed.

Or we can put our Drupal skills directly to use. For instance, you might feel especially appreciative of your local public media organization for bringing you impartial news at this time. Many public radio and television stations use Drupal. Without needing access to their entire codebase or infrastructure, you could ask them whether any contributed modules they use need new features, have bugs that need fixing, or offer other ways to help that match your skill set.

Because of the prevalence of Drupal among non-profits, non-governmental, and community organizations, there are many opportunities to contribute directly to local organizations doing good. Now that seemingly every in-person conference, vacation, user group, and other regular meeting on the schedule has been canceled, we might be looking for activities to fill those hours. The chances are high that organizations and businesses in our communities, the ones important to our daily lives, are struggling to find a way forward, and they might welcome unsolicited offers to help.

We don't need rock stars

In both the Drupal community and our local communities, our capacity to bring about change can feel limited. Anyone who has contributed to Drupal likely knows that the process of getting things done in the community can sometimes take a lot of time, effort, and discussion. We progress one patch at a time. Often it would be much simpler just to make a change to fit one specific use case, but in the Drupal community, we have learned that we need to work together and create consensus around ideas. We realize that we are stronger together and that sharing code feels so much better than hoarding code. Getting code into Drupal is rarely about maximizing revenue, but rather contributing to something bigger.

During a pandemic, the same mindset applies. Social distancing might feel challenging, but it's an act of compassion that benefits others. We help in the ways that feel genuine, not forced. We don't need rock stars and hoarders. We need just enough people to work toward more manageable, short-term goals. Thus, joining a group rather than going it alone can help make your otherwise small contributions feel more significant.

Do Your Homework

Through experience working on Drupal sites, we realize that many of the problems we face are already solved. We don't assume every problem is a bug in Drupal core. We don't assume that the problem with our Drupal site is unique. We encourage the person with a question to "do your homework." We look for others who have encountered similar problems and learn from those who are kind enough to share their solutions.

The argument that we live in exceptional times, while accurate in the short term, does not reflect a broader view of history. Throughout human history, people have been affected by violence, war, injustice, widespread fear, and, yes, disease. Our seemingly exceptional problems, which cause real suffering, are variations on similar historical problems. The 1918 flu pandemic, for instance, killed 50 million people. Understanding and connecting to past events can help reduce the sense of exceptionalism that we all feel. In history, we find people who overcame fear and redirected their focus from helping themselves to helping others. As we become more socially distant in this current reality, we can connect to people online and in the past who have encountered problems like ours. We can also see how problems in the past always come to an end, even if they later reappear.

Your Code Won't Last Forever

The Drupal codebase and community, like everything else in the world, changes constantly. Our prized contributions get replaced. With software, it can be easier to accept the fact of change. We have learned that the point in time when we know everything about Drupal will never arrive. For as long as it can take to get a patch into Drupal core, it can simultaneously feel like Drupal moves at a breakneck speed. The list of completed and in-process strategic initiatives just for Drupal 8 is long, and Drupal 9 will arrive before we know it. We have learned to accept the fact that we need to learn continually, all of our contributions to Drupal will eventually be replaced, and change in the Drupal community is inevitable.

Similarly, the cozy worlds that some of us had grown to inhabit now feel threatened. We live in a society that rarely admits the inevitability of sickness and death, and yet both are guaranteed in life. The world, like Drupal, is always changing, and after our initial reactions begin to subside, we can choose how we respond to these ever-changing circumstances. We will each find our way to negotiate these always-changing realities. Some days will be sunny, and others will not.

Ask For Help

The current state of reality might feel overwhelming, but in the Drupal community, our response is to encourage people to ask for help. The Drupal software can feel like a complex, unknowable beast. We have learned to find others with more knowledge in a particular area than we do. We practice acts of kindness when we first look for answers by ourselves before asking others for help. Sometimes we work really hard on a problem and do everything we can before we "bother" another community member. In the Drupal community, we regularly practice a version of "social distancing" out of respect for the other people in our community. But at some point, we must ask for help, and significant relief can follow when the recipient of our question seems happy to offer assistance.

As we find our way through this new (and temporary) reality, we have many options: do nothing, offer help, connect with friends and family, connect our experiences with historical events, dig into the Drupal codebase, ask for help when necessary. None of these responses is incorrect. We can imagine the ways that Drupal can help. Even better, we can stop merely imagining better worlds and embrace this reality by finding activities, words, and thoughts that reduce our struggles and the struggles of the people around us. When you notice that something you are doing is not helpful, consider shifting your efforts. The Drupal software will continue to evolve, and we can, too.

Mar 11 2020

We've worked on countless websites that have social media sharing functionality. You know, those little links that let you easily post to Facebook, Twitter, or some other social network?

These widgets work by requiring a developer to embed a script tag on their site. Like this:
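A representative embed, using Twitter's widget script as the example (each network's snippet differs slightly, so treat this as illustrative):

```html
<!-- Representative share-button embed (Twitter's version).
     The script tag pulls third-party JavaScript into your page. -->
<a href="https://twitter.com/share" class="twitter-share-button">Tweet</a>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
```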

By embedding JavaScript from a third-party source, you've allowed that provider to modify the content of your HTML page. Even putting aside concerns about what could go wrong (XSS attacks, unexpected manipulation of the page, JavaScript execution errors), what happens by design is reason enough to avoid most approaches to share links.

When you embed a share widget on your site, you've added tracking by that social network. Now social networks can associate each visitor’s profile with the content that is on your page. Social networks, and Facebook in particular, use that to build an advertising profile based on your content.

A hypothetical example: your site provides medical or self-help advice, and the share widget on the page loads JavaScript from Facebook. Like many people, your visitors are always logged into Facebook, even when they don't have it open. When the JavaScript loads, it knows the user's profile—that's how a widget shows when you've already liked or retweeted something. The JavaScript can then check the URL of the page, which Facebook can index. Facebook can associate the content of the page with the user's profile. And finally, Facebook can now show advertisements targeting the medical condition of visitors to your page. All of that happens just from the site visitor looking at your content. It requires no interaction with the Facebook share widget; the mere act of loading the widget is enough to associate the visitor with the content of your site.

Combined Share Widgets Can Be Even Worse

An alternative to the direct widgets provided by social networks is widgets created by other providers that wrap around social media links. Examples include AddThis, ShareThis, AddToAny, Shareaholic, and many others. However, this further compounds the problem. Not only are Facebook and Twitter tracking your visit, but so is the provider of the sharing widget.

For example, the privacy policy of AddThis (which is owned by Oracle) states:

Publishers provide us with AddThis Data so that we can build Segments and Profiles to facilitate personalized interest-based advertising for you by Oracle and our Oracle Marketing & Data Cloud customers and partners. By installing the AddThis Toolbar, Toolbar Users provide consent for us to use their AddThis Data for interest-based advertising.

Using a centralized share provider only introduces another aggregator and broker of people's interests. Not all services are equally bad, but be sure to carefully read the terms of service when using any of these providers. Note that in most cases, using one of these widgets will also load the SDKs for each enabled social network to count engagement such as likes, retweets, etc.

Alternatives and Suggestions

The absolute best thing an organization concerned with privacy can do is not include any share links at all. That avoids any direct connection between your visitors and data aggregators. However, for many clients, designers, and visitors, having some share capabilities is expected. What can developers do to meet the requirements while handling user data responsibly?

The answer is pretty simple. Use links. Each social network has a simple URL that you can use to prepopulate a sharing form with the URL of your content. At its simplest, these links look something like this:
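For example (the page URL is a placeholder; each network documents its own share endpoint):

```html
<!-- Plain share links that prepopulate each network's share form. -->
<a href="https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fexample.com%2Farticle"
   target="_blank" rel="noopener noreferrer">Share on Facebook</a>
<a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fexample.com%2Farticle"
   target="_blank" rel="noopener noreferrer">Share on Twitter</a>
```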

No JavaScript. Just HTML.

Be sure to include a rel="noopener noreferrer" attribute to prevent the third-party site from reaching back into your page via the opener window. And using target="_blank" opens a new tab, so the user doesn't immediately leave your page.

This provides a happy middle ground where sharing is still available to users, but it is impossible for social networks to track users who are simply visiting the page. Once the user clicks/taps a share link, they consent to use that social network (and thus to be tracked and profiled).

Copy/paste example services that don’t include any JavaScript can help with generating these links; see the following sites for examples:

As of this writing, you can also check out the share links here on Lullabot.com, which uses a combination of these direct-share URLs with lightweight JavaScript to open in a sized new window.
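A sketch of that pattern (the helper name and endpoints here are illustrative, not Lullabot's actual code):

```javascript
// Illustrative helper: build a prepopulated share URL for a network.
// A click handler can pass the result to window.open() with a
// width/height feature string to show a small share popup, while the
// plain <a href> keeps working for users without JavaScript.
function buildShareUrl(network, pageUrl) {
  const endpoints = {
    facebook: 'https://www.facebook.com/sharer/sharer.php?u=',
    twitter: 'https://twitter.com/intent/tweet?url=',
  };
  return endpoints[network] + encodeURIComponent(pageUrl);
}

console.log(buildShareUrl('twitter', 'https://example.com/article'));
```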

Although privacy is starting to become a focus for the general public, many users still may not realize that their browser is logged into social networks all the time. Websites big and small then facilitate the tracking of those users by loading JavaScript from these social networks, resulting in extensive profiling based on the viewed content, which is used to create targeted advertising.

Share links are often privacy trojan horses. As the builders of the web, we should take care to account for the privacy of our site visitors. So the next project you're on, advocate for a non-tracking solution.

Mar 06 2020

Fostering Inclusion in Tech

Working with multifaceted and diverse teams to solve complex issues is a part of everyday life at Lullabot. Therefore, becoming stronger, more empathetic communicators who foster diversity, equity, and inclusion (DEI) across the organization is something we’re striving for continuously. That said, DEI is a tough nut to crack, and we’re a work in progress. Like many organizations, we’re constantly asking ourselves, “How do we better foster a sense of inclusion and allow for different types of people, with varied abilities and skills, to work together to solve problems for the future?”

A group that has the benefit of an inclusive environment will:

  • be more agile and culturally competent, and
  • be able to work with a variety of viewpoints, carefully considered, toward building a more thoughtful, and hopefully better, end product or service.

We're learning how to improve internal communication and hold space for each other as we dive into these types of conversations. This series is a compilation of some tips we’ve collected through our continuing work, and we encourage you to share your own. 

Make a Plan to Foster Inclusion

While we consider ways to build up teammates and their sense of belonging in the job, we also desire each individual’s highest and best level of participation in a shared mission. Our team, like other knowledge workers, is becoming increasingly aware of the potential and power of technology to expand ideas, increase access to opportunities, and level the playing field.

Be Explicit

If an organization has not proactively stated a public policy, others may have already crafted a narrative about its mission and values without management’s participation. Be explicit about where the company stands. Make clear what’s important to the company, both internally and externally. We share our Mission and Core Values on our website and encourage clients and job applicants to review them, and we are continuously finding ways to infuse these into our workflows and culture to ensure our team is living up to these ideas.

Start Now and Build into the Schedule

Who represents, guides, leads, and makes decisions for the company? Take a picture of the board or staff—who’s in the photo? Evaluate who’s who and identify whose voices the organization has chosen to elevate, increase, and honor.

  • Are voices missing? If so, why?
  • What opportunities exist for all staff to increase their skills and advance?
  • What opportunities exist for people traditionally excluded from work?
  • In which ways may new people participate as staff, interns, apprentices, consultants, or vendors? 
  • For any of the above, what’s the tracking mechanism? Free and low-cost tools exist to help you collect, analyze, report on, and share data to help with decision-making.
  • Is there a transparent way to demonstrate how staff may advance, either within their career track or if they'd like to switch to a different path such as management or sales?
  • Whose voice is missing? Who is not in the room? Representation matters.

Incorporate Inclusion as a Guiding Value, and Put It into Practice

While it’s great to have ideas, implementing them is some of the most difficult and demanding work. It’s best to start where you are and incrementally improve.

Consider determining three tasks to implement in the next quarter to increase the types of voices you currently have represented, and evaluate every 8-12 weeks how you’re doing, what’s working, what’s not working, and where you want to invest strategically. The Drupal Diversity and Inclusion group offers weekly accountability, support, and connections on initiatives across the Drupal community. Internally, our newly-formed Inclusion and Equity working group is discussing governance, goals, and how to move forward with our efforts toward greater inclusion.

Respect Autonomy

The best work does not always happen in an assigned cubicle, hot desk, or office. An individual might be better suited to doing focused work in the morning, taking a siesta break, then picking up again after dinner and working until midnight. Some of our teammates need to make day-by-day, unscheduled arrangements for childcare, eldercare, medical appointments, and other issues, and much unpaid labor falls on stay-at-home workers. When a new team member requests to be on-call for a specific block of time, how does a company make it work? A flexible schedule (learn about my colleague Sean Lange’s routine while working from home) allows our teammates to participate more equitably in the workforce and bring their best ideas to us when they’re ready to do so.

Respect staff’s autonomy and ability to choose and set their hours, and encourage a culture of high expectations as well as high performance. Clearly define deliverables and the practicalities of team needs (such as deadlines for when certain projects need to launch), but allow the team to determine the best way for them to deliver, rather than forcing people to adhere to an inflexible work arrangement where they clock in and immediately tune out.

Practically speaking, remote work is trending upward: 26 million employed persons worked at home on an average day in 2018 (Bureau of Labor Statistics), and increasing numbers of positions offer telework, telecommuting, and work-from-home options. Generation Z, predicted to become 36% of the global workforce in 2020, is comfortable using technology to make conferences, meetings, and training sessions seamless regardless of location.

We’re a 100% distributed company, where all workers are remote, and there is no centralized office. As everyone works across multiple time zones, we’ve continually experimented with practices that foster community and connection. Read more about being a distributed company. Based on client desires and existing systems, we use Zoom, Uberconference, Google Meet, Webex, and Slack, among others, for conferences.

Support Mental Health and Wellness

The bulk of staff time is spent in the workplace or doing work-related activities. Burnout, stress, depression, anxiety, and mental health disorders directly impact teammates, colleagues, and clients. The American Institute of Stress survey shows 40% of workers reporting that their job is “very or extremely stressful” and 80% of workers report feeling stress on the job (with almost half saying they “need help in learning how to manage stress”). Lost productivity resulting from depression and anxiety is estimated to cost the global economy US$ 1 trillion each year, according to the World Health Organization.

Over the last year, we have made mental health a priority: learn more about Supporting Mental Health at Lullabot. Three best practices to support reducing stress and increasing mental health at the workplace include:

1) Avoid overscheduling your team: offer flexible work arrangements. For example, we work a 30-hour billable week and have 10 additional hours for contributing back to the company and community.

2) Create an open and relaxed work environment with access to management and ongoing feedback loops. For example, we use the Know Your Team tool (podcast) to organize constructive one-on-ones. Other activities include scheduled coffee talks for drop-in support and advice, a weekly team call, small group calls on Fridays, a monthly town hall with leadership, multiple Slack channels for conversations ranging from #being-human to #cats to #parenting, and a back-end “Daily Report” for internal news and reports.

3) Increase education. For example, offer access to mental health and general health and wellness topics, and provide training and development opportunities. With continuing education and professional skill-building, teams have documentable ways to increase productivity and overall experience.

Mental health awareness means implementing actions small and large across the organization that include:

  • Scheduling breaks in long meetings.
  • Obtaining psychotherapy, counseling, grief support, and similar add-ons to the health package.
  • Scheduling monthly or quarterly gatherings to discuss or practice mental health wellness.
  • Offering paid time off.

Invest in Personal and Professional Development, Training, and Education

“What happens if we train them, and they leave? What happens if we don’t, and they stay?”

While salary and benefits remain the base of any job, investment in an employee’s unique talents also pays off. Consider investing in professional development, training, certification, and other educational hours through an annual allocation or a pooled budget for staff-directed or individually-planned training. As part of our benefits package, we each receive an education and event budget annually. This may take the form of an education budget used for conferences, seminars, training, and continuing education. Determine a process for staff to propose options and receive feedback or vetting, perhaps as part of their employee review process, as they build up a multi-year plan to improve their abilities. 

Required To-Do List, Start Here

Make software, website, and digital products accessible

In web development, making digital properties as accessible as possible is both required and best practice (check the a11y project). Presentations by Helena McCabe, a Technical Account Manager at Lullabot, give excellent tips on how to enable accessibility. By starting with an emphasis on accessibility for the digital property, additional issues around inclusion, culture, the role of technology, and overall trends in society may begin to surface. Our white paper on How Accessibility Protects your Business and your Bottom Line offers examples of how to make your products accessible and why it matters.

Practice transparency

Think of transparency as a way to build the team’s muscles, and to start working with fortitude, grace, and strength when grappling with heavier and more complex issues. For example, we practice open-book management (OBM), a financial practice that allows all employees to understand the current revenue, expenditures, and KPIs of the company (learn more at CEO Matt Westgate’s 2019 Lullabot Team Retreat post). By creating the flexibility and capacity to have tough discussions, everyone may use a shared language and understanding of the company’s direction.

Promote a sense of psychological safety

The open-source movement continues to build on information freely shared, vetted, and evaluated across multiple use cases. The belief in sharing knowledge is in the DNA of companies like ours. In our case, Drupal is the community and platform on which many of the staff have built their careers.

Psychological safety is the bedrock for knowledge-sharing: it’s "a condition in which you feel (1) included, (2) safe to learn, (3) safe to contribute, and (4) safe to challenge the status quo—all without fear of being embarrassed, marginalized or punished in some way." (Timothy R Clark, 2019). And, it’s something to which Lullabot aspires: here’s a link to Matt Westgate’s lightning talk on psychological safety and DevOps.

With accessibility, transparency, and safety in mind, we'll share more tips to begin or advance discussions around: hiring inclusively, engaging with staff, focusing on culture, and easier fixes. As we continue to work on this internally, we offer these ideas in the spirit of sharing and continuous improvement. Do you have ideas? Please drop a comment. We'd love to hear your thoughts and suggestions.

Mar 05 2020

Matt and Mike talk to two organizers of Drupal4Gov, as well as the project manager for Lullabot's Georgia.gov replatform about all things Drupal in the government.

Mar 04 2020

Sending a Drupal Site into Retirement

The previous article in this series explained how to send a Drupal site into retirement using HTTrack—one solution for a Drupal site that isn't updated very often. While that approach works pretty well for any version of Drupal, another option is using the Static Generator module to generate a static site instead. However, this module only works for Drupal 7, as it requires the installation of some modules on the site and uses Drupal to generate the results.

The Static Generator module relies on the XML sitemap module to create a manifest. The links in the XML sitemap serve as the list of pages that should be transformed into static pages. After generating the initial static pages, the Cache Expiration module keeps track of changed pages to be regenerated to keep the static site current. This combination of Static Generator, XML sitemap, and Cache Expiration is a good solution when the desire is to regenerate the static site again in the future, after making periodic updates.

There are many module dependencies, so quite a list of modules was downloaded and installed. Once installed, the high-level process is:

  • Create and configure the XML sitemap and confirm it contains the right list of pages.
  • Configure Cache expiration to use the Static Generator and expire the right caches when content changes.
  • Go to  /admin/config/system/static and queue all static items for regeneration.
  • Click a Publish button to generate the static site.

Install Static Generator

The modules are downloaded and enabled using Drush. Enabling additional modules, like xmlsitemap_taxonomy, may be needed depending on the makeup of the site.

drush dl static expire xmlsitemap

drush en static_file static_views static_xmlsitemap static_node static

drush en expire

drush en xmlsitemap_menu xmlsitemap_node xmlsitemap

Configure XMLSitemap

On /admin/config/search/xmlsitemap, make sure the site map is accurately generated and represents all pages that should appear in the static site. Click on the link to the sitemap to see what it contains.

  • Add all content types whose paths should be public.
  • Add menus and navigation needed to allow users to get to the appropriate parts of the site.
  • Make sure Views pages are available in the map.

A lot of custom XML sitemap paths may be required for dynamic views pages. If so, generate XML sitemap links in the code where the database is queried for all values that might exist as a path argument, then create a custom link for each path variation.

Code to add custom XML sitemap links looks like this (this is Drupal 7 code):



/**
 * Add a views path to xmlsitemap.
 *
 * @param string $path
 *   The path to add.
 * @param float $priority
 *   The decimal priority of this link, defaults to 0.5.
 */
function MYMODULE_add_xmlsitemap_link($path, $priority = 0.5) {
  drupal_load('module', 'xmlsitemap');

  // Create a unique namespace for these links.
  $namespace = 'MYMODULE';
  $path = drupal_get_normal_path($path, LANGUAGE_NONE);

  // See if link already exists.
  $current = db_query("SELECT id FROM {xmlsitemap} WHERE type = :namespace AND loc = :loc", array(
    ':namespace' => $namespace,
    ':loc' => $path,
  ))->fetchField();
  if ($current) {
    return;
  }

  // Find the highest existing id for this namespace.
  $id = db_query("SELECT max(id) FROM {xmlsitemap} WHERE type = :namespace", array(
    ':namespace' => $namespace,
  ))->fetchField();

  // Create a new xmlsitemap link.
  $link = array(
    'type' => $namespace,
    'id' => (int) $id + 1,
    'loc' => $path,
    'priority' => $priority,
    'changefreq' => '86400', // 1 day = 24 h * 60 m * 60 s
    'language' => LANGUAGE_NONE
  );

  xmlsitemap_link_save($link);
}
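The helper above might then be called wherever the database is queried for every value a views path argument can take. A hypothetical usage sketch (the hook, table, and path pattern are illustrative, not from the original site):

```php
/**
 * Implements hook_cron() (hypothetical usage sketch).
 *
 * Registers a custom sitemap link for every taxonomy term that can
 * appear as an argument on a dynamic views page such as /browse/%tid.
 */
function MYMODULE_cron() {
  $tids = db_query("SELECT tid FROM {taxonomy_term_data}")->fetchCol();
  foreach ($tids as $tid) {
    MYMODULE_add_xmlsitemap_link('browse/' . $tid);
  }
}
```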

Configure Cache Expiration

On /admin/config/system/expire, set up cache expiration options. Make sure that all the right caches will expire when content is added, edited, or deleted. For instance, the home page should expire any time nodes are added, changed, or deleted, since the changed nodes alter the view of the latest content that appears there.

Generate the Static Site

Once configured, a Publish Site button appears on every page as a shortcut. But the first time through, it’s better to visit /admin/config/system/static to configure static site options and generate the static site. During the initial setup, some pages were created automatically and others were not. Once all other modules are configured and the XML sitemap looks right, clear all the links and regenerate the static site.

The location where the static site is created can be controlled, but the default location is at the path, /static/normal, in the same repository as the original site. That location and other settings are configured on the Settings tab.

Generate the static site and ensure all the pages are accounted for and work correctly. This is an iterative process due to the discovery of missing links from the XML sitemap and elsewhere. Circle through the process of updating the sitemap and then regenerate the static site as many times as necessary.

The process of generating the static site runs in batches. It might also run only on cron depending on what options are chosen in settings. Uncheck the cron option when generating the initial static site and later use cron just to pick up changes. Otherwise, running cron multiple times to generate the initial collection of static pages is required.

For a 3,500-page site, it takes about seven minutes to generate the static pages. Later updates should be faster, since only changed pages need to be regenerated.

When making changes later, they need to be reflected in the XML sitemap before they will be picked up by Static Generator. If XML sitemap updates on cron, run cron first to update the sitemap, then update the static pages.

After generating the static site and sending it to GitHub, it was clear that the Static Generator module transforms a page like /about into the static file /about.html, then depends on an included .htaccess file that uses mod_rewrite to redirect requests to the right place. But GitHub Pages won’t run mod_rewrite. That makes Static Generator a poor solution for a site hosted on GitHub Pages, although it should work fine on hosts where mod_rewrite is available.
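The rewrite in question works roughly like this (an illustrative Apache rule, not the module's exact .htaccess contents):

```apache
# Serve /about.html when /about is requested (illustrative).
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html [L]
```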

Comparing HTTrack and Static Generator Options

Here’s a comparison of a couple of methods explored when creating a static site: 

  • HTTrack works on any version of Drupal; Static Generator works only on Drupal 7.
  • HTTrack doesn’t require setup beyond the standard preparation needed for any static solution. Static Generator took some time to configure, especially since XML sitemap and Cache Expiration weren’t already installed and configured.
  • HTTrack can take quite a while to run, a half-hour to an hour, possibly longer. Static Generator is much faster—seven minutes for the initial pass over the whole site.
  • The Static Generator solution makes the most sense if there is a need to keep updating the site and regenerating the static pages. That situation justifies the up-front work required to configure it. HTTrack is easier to set up for a one-and-done situation.
  • The file pattern of /about/about.html created by our custom HTTrack arguments works fine for managing internal links on GitHub Pages. The file pattern of /about.html created by Static Generator will not correctly manage internal links on GitHub Pages. The second pattern will only work on a host that has mod_rewrite installed and the appropriate rules configured in .htaccess.

Both HTTrack and the Static Generator module can make excellent solutions. To view an example of a site generated with HTTrack, go to https://everbloom.us.

Mar 03 2020


Olivero is a new theme that aims to be the new default front-end theme for Drupal 9. The theme's inception took place in a hotel lobby during DrupalCon Seattle and has now grown into a full Drupal core initiative (read about its inception here).

From Design to Proof of Concept

The Drupal 9 theme initiative started with stakeholder meetings and the design process (learn more here). Once the designs were close to being final, I started working on translating the designs into markup, styles, and JavaScript within a static proof of concept.

While I worked on the proof of concept, Putra Bonaccorsi was laying the theming groundwork by creating boilerplate code for the theme and setting up transpiling of the CSS and JavaScript.

Proof of concept

The process of creating a proof of concept has been invaluable. The overarching goals are to validate major DOM architectural decisions and get sign-off from the Drupal core accessibility team on major decisions before moving into templating. Additionally, the proof of concept has validated and influenced design decisions across multiple devices and viewports.

You can view the proof of concept at https://olivero-poc.netlify.com, but note that progress is rapid and it's changing by the hour!

Bumps in the Road

When the Olivero team was creating the initial schedule, the plan was to get the theme into Drupal 9.1, because the first version of Drupal 9 (9.0) was going to be the same as the last version of Drupal 8 — but with deprecated code removed.

However, during the Driesnote at DrupalCon Amsterdam, Drupal project lead Dries Buytaert stated that he wanted to get the theme into the initial release of Drupal 9.0. This pushed up the timeline significantly!

Balancing between onboarding and mentoring new developers versus rapidly closing issues has proven to be delicate. Many contributors want to help with the initiative; however, because they are volunteers (as are the core team), they are not on a timetable for closing issues.

Because of the tight timeline, I’ve been leaning toward the latter (rapidly fixing issues).

Florida DrupalCamp Sprint

We decided that the initiative needed a shot in the arm, so we put on a mini code-sprint within the annual contribution sprint at Florida DrupalCamp last month. Because Putra couldn’t make it down for the actual conference, Florida DrupalCamp sponsored her to fly in for the sprint.

During the two-day sprint, we accomplished the following:

  • Cleaned up the PostCSS build process
  • Integrated a default database export with indicative content into the Tugboat build process.
  • Copied some of the latest scripts/JS and CSS from the PoC repo into the Drupal theme.
  • Exported block configs for the theme's initial install for the following core blocks:
    • Primary Menu
    • User Account Menu
    • Powered By Drupal
    • Content
  • The Primary Menu block is themed and configured to expose the drop-down menu by default.
  • The Secondary menu/User Account block is themed and configured.
  • The "Powered by" block is themed and configured.
  • The "Get Started" page has been created and will need to be revisited.
  • Latest preview is viewable on Tugboat: https://8-x-1-x-dev-2t4d1epwkj8tgwxduizmixhzevqwzi8w.tugboat.qa

Olivero's focus states were heavily worked on at the Florida DrupalCamp sprint.

If you install the theme now, you'll be able to see the different regions and blocks configured; however, please note that there is still more theme development to be done before the beta release.

Current Status

Work on the theme includes the proof of concept and the actual theming.

Proof of Concept

We’re working on styling that will enable site owners to choose an “always-on” mobile theme in the event that the primary navigation has more items than the space can manage.

We’re also knocking out various accessibility issues—especially focus states in Windows high contrast mode, which are trickier than expected.

Drupal Theme

The Drupal theme looks close to the designs! Work continues on the search integration into the header, in addition to standard theming.

What’s Next?

We hope to pull in the final proof of concept markup, styling, and JavaScript into the theme toward the end of next week (around March 13th, 2020). At that point, work on the proof of concept will cease, and new styling fixes will go into the theme.

There’s still so much to do! We need:

  • Support for Drupal core features such as:
    • Book module
    • Forum module
    • Embedded media
    • Various Views rows styles (grid, etc.)
  • More accessibility
  • Internationalization
  • Tests
  • Coding standards
  • And more!

Standing on the Shoulders of Giants

I also want to note that the rapid pace of development would not have been possible without the contributions of Claro (Drupal 9’s new administrative theme) and Umami (the theme for the Out of the Box initiative).

These themes blazed the way by including support for technologies in core such as web fonts, PostCSS, and the overall core theme architecture.

Completion

Olivero was initially slated for inclusion in core in Drupal 9.1. That’s still the most likely scenario. That said, there’s a possibility that Drupal may shift the 9.0 beta deadline to the end of April. If that’s the case, there is a possibility to submit a core patch beforehand.

To make that deadline, we need to submit the patch a minimum of a few weeks ahead of time to give core committers time to review (and even that might not be enough time).

We’re currently working on [META] Add new default Olivero frontend theme to Drupal 9 core to define the minimum beta requirements to submit to core. Expect this issue to be more fleshed out within the coming days.

After Completion

After the theme is in core, we still would love to add additional features such as support for accessible color schemes, dark mode, etc. However, the first step is finishing up the minimal viable product for inclusion in core.

Join Us!

The Olivero team meets on Drupal Slack every Monday at 3 pm UTC (10 am ET) in the #d9-theme channel on drupal.slack.com. We post the agendas in the Olivero issue queue beforehand.

We need people to pick up issues and run with them, but keep in mind that for the next week or two, the primary styling is still in progress within the proof of concept on Github.

Upcoming Events and Sprinting

I will be attending Drupal Dev Days in Ghent, Belgium, April 6-10, 2020, and will be sprinting the entire time. We hope to have the code ready for Drupal core inclusion by that time.

Putra and I (along with the majority of the Lullabot team) will be attending DrupalCon Minneapolis in May 2020. We will be heavily sprinting on Olivero during this time (especially on Friday).

Feb 26 2020

Sending a Drupal Site into Retirement

Maintaining a fully functional Drupal 7 site and keeping it updated with security updates year-round takes a lot of work and time. Some sites are only active during certain times of the year, so continuously upgrading to new Drupal versions doesn't always make sense. If a site is updated infrequently, it's often an ideal candidate for a static site. 

To serve static pages, GitHub Pages is a good, free option, especially when already using GitHub. GitHub Pages deploys Jekyll sites, but Jekyll is perfectly happy to serve up static HTML, which doesn't require any actions other than creating functional HTML pages to get a solution working. Using this fishing tournament website as the basis for this article, here’s how to retire a Drupal site using HTTrack. 

Inactivate the Site

To get started, create a local copy of the original Drupal site and prepare it to go static using ideas from Sending A Drupal Site into Retirement.

Create GitHub Page

Next, create a project on GitHub for the static site and set it up to use GitHub Pages. Just follow the instructions to create a simple Hello World repository to be sure it’s working. It’s a matter of choosing the option to use GitHub Pages in the settings and identifying the GitHub Pages branch to use. The GitHub pages options are way down at the bottom of the settings page. There's an option to select a GitHub theme, but if there's one provided in the static pages, it will override anything chosen. So, really, any theme will do.

A committed index.html file echoes back "Hello World" and the new page becomes viewable at the GitHub Pages URL. The URL pattern is http://REPO_OWNER.github.io/REPO_NAME; the GitHub Pages information block in the repository settings will display the actual URL for the project. 

Create Static Pages with HTTrack

Now that there's a place for the static site, it's time to generate the static site pages into the new repository. Wget could spider the site, but the preferred solution here uses HTTrack to create static pages. This is a tool that starts on a given page, generally the home page, then follows every link to create a static HTML representation of each page that it finds. This is only sufficient if every page on the site is reachable from the home page through navigation or other links. HTTrack won't know anything about unlinked pages, although there are ways to customize the instructions to identify additional URLs to spider. 

Since this solution doesn’t rely on Drupal at all, it's possible to use it for a site built with any version of Drupal, or even sites built with other CMSes. It self-discovers site pages, so there's no need to provide any manifest of pages to create. HTTrack has to touch every page and retrieve all the assets on each page, so it can be slow to run, especially when running it over the Internet. It's best to run it on a local copy of the site.

It's now time to review all the link elements in the head of the pages and make sure they are all intentional. When using the Pathauto module, the head elements added by Drupal 7, such as <link rel="shortlink" href="https://www.lullabot.com/articles/sending-drupal-site-retirement-using-h.../node/9999" />, should be removed. They point to URLs that don't need to be replicated in the static site, and HTTrack will try to create all those additional pages when it encounters those links.

When using the Metatags module, it's possible to configure it to remove those tags. Alternatively, a bit of code like the following (borrowed from the Metatags module, and appropriate for a Drupal 7 site) can be used in a custom module to strip the tags out:


/**
 * Implements hook_html_head_alter().
 *
 * Hide links added by core that we don't want in the static site.
 */
function MYMODULE_html_head_alter(&$elements) {
  $core_tags = array(
    'generator',
    'shortlink',
    'shortcut icon',
  );
  foreach ($elements as $name => &$element) {
    foreach ($core_tags as $tag) {
      if (!empty($element['#attributes']['rel']) && $element['#attributes']['rel'] == $tag) {
        unset($elements[$name]);
      }
      elseif (!empty($element['#attributes']['name']) && strtolower($element['#attributes']['name']) == $tag) {
        unset($elements[$name]);
      }
    }
  }
}

The easiest way to install HTTrack on a Mac is with Homebrew:

brew install httrack

Based on the documentation and some experimentation, the following command turned out to be the ideal way to use HTTrack. After moving into the local GitHub Pages repo, execute the following command, where LOCALSITE is the path to the local site copy being spidered, and DESTINATION is the path to the directory where the static pages should go:

httrack http://LOCALSITE -O DESTINATION -N "%h%p/%n/index%[page].%t" -WqQ%v --robots=0 --footer ''

The -N flag in the command rewrites the pages of the site, including pager pages, into the pattern /results/index.html. Without the -N flag, the page at /results would have been transformed into a file called results.html. The pattern takes advantage of the GitHub Pages server configuration, which automatically serves the generated file /results/index.html for internal links that point to /results.
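As a rough sketch of what that filename pattern does, the following mimics the URL-to-file mapping for illustration (this is not HTTrack code):

```javascript
// Illustrative sketch of the -N "%h%p/%n/index%[page].%t" mapping;
// not HTTrack's actual implementation.
function staticPath(url) {
  const u = new URL(url);
  const page = u.searchParams.get('page') || '';
  return `${u.pathname.replace(/\/$/, '')}/index${page}.html`;
}

console.log(staticPath('http://LOCALSITE/results'));        // -> /results/index.html
console.log(staticPath('http://LOCALSITE/results?page=2')); // -> /results/index2.html
```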

The --footer '' option omits the comments that HTTrack automatically adds to each page, which look like the following. It gets rid of the first comment, but nothing appears to get rid of the second one. Eliminating the first one, which includes a date, prevents a Git repository in which every page appears to change every time HTTrack runs. It also obscures the URL of the original site, which might otherwise be confusing since it's a local environment.

<!-- Mirrored from everbloom-7.lndo.site/fisherman/aaron-davitt by HTTrack Website Copier/3.x [XR&CO'2014], Sun, 05 Jan 2020 10:35:55 GMT -->

<!-- Added by HTTrack --><meta http-equiv="content-type" content="text/html;charset=utf-8" /><!-- /Added by HTTrack -->

The pattern also deals with paged views results. It tells HTTrack to find a value in the query string called "page" and insert that value, if it exists, into the URL pattern in the spot marked by [page]. Paged views create links like /about/index2.html and /about/index3.html for each page of the view. Without specifying this, the pager links would be created as meaningless hash values of the query string. This way, the pager links are user-friendly and similar (but not quite identical) to the original link URLs.

Shortly after the process starts, it will stop and ask how far to go in following links. '*' is the response to that question.

The progress is viewable as it runs, showing which sections of the site it is navigating into. The %v flag in the command tells it to use verbose output.

HTTrack runs on a local version of the site and creates about 3,500 files, including pages for every event and result and every page of the paged views. HTTrack is too slow to use across the network on the live site URL, so it makes sense to do this on a local copy. The first attempt took nearly two hours because so many unnecessary files were created, such as an extra /node/9999.html file for every node in addition to the desired file at the aliased path. After a while, it was apparent they came from the shortlink in the header pointing to the system URL. Removing the short links cut the spidering time by more than half. Invalid links and images in the body of some older content that HTTrack attempted to follow (creating 404 pages at each of those destinations) also contributed to the slowness. Cleaning up all of those invalid links caused the time to spider the site to drop to less than a half-hour.

The files created by HTTrack are then committed to the appropriate branch of the repository, and in a few minutes, the results appear at http://karens.github.io/everbloom.

Incoming links to /results now work, but internal links still look like this in the HTML:

/results/index.html

A quick command line fix to clean that up is to run this, from the top of the directory that contains the static files:

find . -name "*.html" -type f -print0 | xargs -0 perl -i -pe "s/\/index.html/\//g"

That will change all the internal links in those 3,500 pages from /results/index.html to /results/, resulting in a static site that closely mirrors the file structure and URL pattern of the original site.

One more change is to fix index.html at the root of the site. When HTTrack generates the site, it creates an index.html page that redirects to another page, /index/index.html. To clean things up a bit and remove the redirect, I copy /index/index.html to /index.html. The relative links in that file now need to be fixed to work in the new location, so I do a find and replace on the source of that file to remove ../ in the paths in that page to change URLs like ../sites/default/files/image.jpg to sites/default/files/image.jpg.

Once this is working successfully, the final step was to have the old domain name redirect to the new GitHub Pages site. GitHub provides instructions about how to do that.

Updating the Site

Making future changes requires updating the local site and then regenerating the static pages using the method above. Since Drupal is not publicly available, there's no need to update or maintain it, nor worry about security updates, as long as it works well enough to regenerate the static site when necessary. When making changes locally, regenerate the static pages using HTTrack and push up the changes. 

The next article in this series will investigate whether or not there is a faster way of creating a static site.

Feb 19 2020

One of the biggest benefits of an open-source community like Drupal is the ability to collaborate with fantastic people that you wouldn’t otherwise have the opportunity to work with. However, when you have an idea that you think would be a good initiative for a Drupal core release (such as Drupal 9) you might find yourself thinking: "How do I even begin? How can I advocate for my idea?” We all find ourselves asking these questions as we navigate the complex journey of turning an idea into a core initiative.

During DrupalCon Seattle, a handful of people had a casual conversation in a hotel lobby. This conversation turned into an official Drupal core strategic initiative to create a new default front-end theme for Drupal 9. Here's the story of how that happened, the steps we took, and the work we did before opening the project to the community.  

The Beginning: Is "Your Idea" Already in the Works?

On the last day of DrupalCon Seattle, Mike Herchel (Senior Developer at Lullabot), Lauri Eskola (Drupal core committer and front-end framework manager), Angie Byron (Drupal core committer and product manager), and I had a conversation about what exactly distinguishes a good CMS theme. Naturally, that led to the discussion of the current status of Drupal 9.

Mike and I were surprised to find out that there was no initiative in place for a Drupal 9 default theme. Having been in the community for quite a while, we knew that Bartik was created for Drupal 7 and has long served as the default theme, but it’s nearly ten years old. By 2019, the design had begun to look dated and no longer spoke to Drupal's strengths.

We began envisioning what kind of first impression a clean and modern default theme would have on users when Lauri mentioned something along the lines of, “Why don't you get involved since Drupal 9 is just around the corner and is expected to be released around mid-2020?” We were excited by the idea and that we already had buy-in from a key figure within the community. On our way to the airport the following morning, Mike and I continued chatting about ways this project could start.

Setting Goals: Identify Why This Initiative Matters

Before announcing to the world that you have an idea that can be shipped into Drupal core, stop and ask yourself what your goals are for the project. Mike and I started by writing down some of the pain points and challenges of the current status quo. As Dries pointed out in his keynote, experts love Drupal. However, Drupal as a CMS still has a negative connotation among beginners for its outdated interfaces and user experiences. Therefore, prioritizing the beginner experience through potential initiatives like the new default theme, guided tours (aka Out of the Box initiative), and WYSIWYG page building would give Drupal a much-needed new look and feel that users expect from a modern CMS.

Here are the three goals that we identified:

  • Update to modern design: Design a theme that feels modern and ages well for the next 5 to 10 years.
  • Add functionality that supports new features: Include support for commonly used Drupal functionality, such as second-level navigation, embedded media, layout builder, and more.
  • Create a WCAG AA compliant theme: Work closely with accessibility experts and the Drupal Accessibility team to ensure that the theme passes Drupal’s stringent accessibility standards with flying colors.

Drupal Core Ideas Queue

Setting these goals helped us stay focused on what we needed to do and got us prepared to open an “idea issue” for the redesign and development of a theme that could ship with the release of Drupal 9. The ideas queue section of Drupal.org let us propose ideas for Drupal core and take them through validation and planning phases.

Here’s a link to the issue that we submitted to the Drupal ideas project: https://www.drupal.org/project/ideas/issues/3064880

Forming the Band: Putting Together Your Initial Team

With any big or small initiative, you can't do the work all by yourself. You need a team that can help bring in new perspectives and fill in the areas that are outside of your discipline. Once we knew our idea was valid and sought-after by the community, Mike engaged Lullabot designers Jen Witkowski and Jared Ponchot to lead the design effort for the new Drupal 9 default theme. Kat Shaw and Matthew Tift also joined for assistance with accessibility and project management.

Identifying Stakeholders

Part of this team's responsibility was to identify design stakeholders who could help us refine the design. We iterated on the design multiple times internally before presenting it to the community to avoid bikeshedding. Doing this helped speed up the proposal process, which was one of the key contributing points to us getting traction and building excitement for this core initiative.

The following people were chosen as stakeholders:

Document and Design

As the discovery process started to take shape, we continuously documented all of the discussions we had regarding the project. Documentation isn’t as fun or exciting as writing code, but it's one of the contributing factors to keeping us on track and getting to our goal of releasing a proposal to the community. 

Meanwhile, we worked with our stakeholders to identify adjectives that would help guide the visual design. We created a sliding-scale exercise where stakeholders could add points across several tone spectrums, a common practice that the design team at Lullabot likes to conduct on client projects. Some of these were one adjective versus another (“formal” not “casual”), while others highlighted the importance of a balance (“approachable” and “official”).

Voice and Tone of the Theme

Below are keywords that were identified to serve as the voice and tone of the new theme:

  • Formal
  • Light and bright (vs. dark & impactful)
  • Contemporary
  • Approachable and official
  • Novel (with some constraint)
  • Cool
  • High contrast with some restraint
  • Light (not heavy)

Design Principles

The following principles were established through research and collaboration, and are useful for guiding future additions and feedback for changes.

  • Accessible: WCAG AA conformance is a top priority for the design. The layout, colors, and functionality should provide an accessible theme that can be enjoyed by everyone.
  • Simple: Avoid unnecessary visual elements, colors, effects, and complexity.
  • Modern: Take advantage of the capabilities and strengths of modern browsers and interaction patterns.
  • Focused: Embrace high contrast, saturated color, and negative space to draw the eye to what’s most important.
  • Flexible: Provide options and overrides to account for varied needs and preferences.

The Meeting / Feedback Loop

Although this initiative is not a client project nor one that we work on daily, we established a routine of meeting every week to discuss what needed to be done to present a design to the stakeholders. Once we established the design principles and the voice and tone, we used zoom mocks to explore style using the adjectives and design principles as our guide. We presented these to the stakeholders, who chose a design with which to move forward.

We continued to iterate on the chosen design direction based on the feedback from the stakeholders. The design process continued with the addition of internal accessibility testing, which highlighted several contrast deficiencies that we subsequently fixed.

Proof of Concept

Throughout the process, we built a prototype in static HTML, CSS, and JavaScript. The intention was to validate the new features and help answer potential UI/UX issues that might arise during the design process. We also used it as an opportunity to vet the use of the CSS grid and ensure the front-end architecture could be accessible, as well as work with Internet Explorer 11 (and other core supported browsers). This proof of concept is not yet fully accessible, although it will be eventually. The next step is to get full sign-off from the Drupal accessibility team, which will hopefully alleviate last-minute time crunches when submitting the patch to Drupal core.

The following are key activities we’re focusing on:

  • Investigating the use of the header on scroll interaction on mobile and tablet devices.
  • Validating the use of the CSS grid in legacy browsers such as Internet Explorer 11 and identifying whether or not we’ll need to account for progressive enhancement features.
  • Verifying that the markup is semantic and meets the accessibility requirements.

Community Announcement: The Formal Processes on Submitting Your Idea to the Community

Once the design was in a good place, we drafted a proposal to the community and sought feedback for the work that had been done (see link - Designs for new front-end theme for Drupal 9). The announcement issue included several processes that we took to get to where we are today. The response from the community was overwhelmingly positive, and we were thrilled to see the excitement.

What's next?

The Drupal 9 theme initiative is currently underway. If you're interested in contributing to the new Olivero frontend theme effort, please check out "How to contribute to Olivero" and get involved with the team.

Building Olivero was the first time some of us have contributed to a Drupal core initiative, and admittedly, it can be scary and a little overwhelming. Sometimes you don't feel like you have enough years of experience or enough in-depth specific knowledge. But no matter what your background or experience level is, chances are there’s something you can do to help within the open-source community. In our case, we happened to be in the right place and know the right people. However, having a well-thought-out proposal, identifying key stakeholders, and having a phenomenal team involved can give legitimacy to your idea. I hope hearing the journey of how we got here provides some helpful takeaways and inspires you to jump-start your idea and advocate for getting your initiatives into Drupal core.

A huge special thank you to everyone who has contributed to the Olivero project so far! We wouldn’t be where we are without your support. 

Feb 07 2020

Tome is a suite of Drupal modules that can turn your site into secure, fast, static HTML. 

Long story short, you can use Drupal in the same way you would use other static site generators like Jekyll or Hugo - everything lives in one repository, and Drupal only runs on your local machine.

The creator, Sam Mortenson, tells us everything we need to know.

Feb 05 2020

During a recent project, the challenge of providing reusable, interactive web components to allow content editors to build pages presented itself. These components were to be created and developed by different teams, and available on the main Drupal site and a set of static pages, each of which had specific requirements and were already working in production.

Several decisions factored into finding the right solution to this challenge. This article explains what those decisions were and how the solution was implemented.

This project required that the main Drupal site and a number of other static sites must be able to reuse the same pieces of content to build their pages. The Drupal site had a content architecture structured as pages containing a leadspace, or header, plus several other fragments, called bands, which could be created by content editors. Multiple visual layouts were available for the bands - an image and a paragraph, a headline with three cards linking to other pages, etc. These layouts are defined within Drupal as display modes and are used to render each band with a specific layout.

It was important to find a way to provide reusable interactive pieces of content to be rendered as bands on the Drupal site, and make them available “as is” to other sites as well. Enter the widgets.

In this context, a widget is a reusable, self-contained piece of software providing a specific feature for any website. From a technical perspective, a widget is a JavaScript library intended to render within the HTML of a page that presents custom functionality and is 100% reusable without relying on external dependencies.

After a thorough audit of the latest approaches to this problem, the decision was to implement an architecture inspired by the micro frontends technique.

Micro Frontends

Nowadays, one of the most popular ways to implement backend pieces is the microservices technique: a derivative of service-oriented architectures that defines any complex application as a collection of loosely coupled services. A service is a self-contained component that provides specific functionality to the application and communicates with the rest of the application pieces through a well-defined API.

When the microservices architecture translates to front-end development, a web page can be composed using a set of already-built components that are ready to be rendered. The components are also self-contained, decoupled, and reusable in the same way several microservices build an application on the back end. This approach allows several teams to focus their efforts on a specific set of components, developed in parallel, and not be dependent on other teams. Additionally, components are independent of each other, which allows a content editor to build a page by just selecting components and putting them in the desired place.

There’s more information on micro frontends in this article.

For this particular project, widgets were implemented as JavaScript libraries, following the micro-frontends approach. The widget source code, available as a single file, was loaded as usual by including a <script> tag. It provided an entry point, a function, to allow the widget to render inside a container from the HTML of the page (i.e., a <div> tag) once the DOM finished loading and said function executed.
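In HTML terms, embedding such a widget on a host page might look like the following sketch (the file path, function name, and element ID are all hypothetical):

```html
<!-- Container the widget renders into -->
<div id="example-widget-root"></div>

<!-- Widget bundle; defines window.renderExampleWidget -->
<script src="/widgets/example-widget/js/main.js"></script>
<script>
  // Call the widget's entry point once the DOM has finished loading.
  document.addEventListener('DOMContentLoaded', function () {
    window.renderExampleWidget('example-widget-root');
  });
</script>
```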

To transform this theoretical approach into a real live implementation, multiple options were considered for building a widget as a JavaScript single file application, like vanilla JavaScript, or one of the numerous JavaScript frameworks available today. After some research, the Create React App was chosen as a base to build our widgets for several reasons:

  • React is a widely-used JavaScript framework, easy to use, and it has been around for some time so we can trust its reliability.
  • There are plenty of tools that make React development easier and quicker while providing the same starting point for all teams developing a new widget.
  • Create React App is compatible with Webpack, which we needed to efficiently pack widgets as a single JavaScript file with the ability to include assets, like fonts or image files.
  • It makes it trivial to have a local environment for the widget: running npm start is all that’s required to have a local server where the widget is loaded.

There are some downsides when using React, though. If every widget is a full React application, the number of bytes the browser needs to download to render it is higher than if using other tools. This is because every widget includes a number of dependencies.

Due to how the project works and how editors were used to building their content, loading multiple widgets onto the same page was taken into account. As a result, some of the widget dependencies would likely duplicate. For example, lodash, a very common package when building React apps, may be loaded by more than one widget on the same page. Avoiding this duplication was not easy because it required loading multiple script tags on the browser, one per shared dependency, plus the widget source JS file. The expectation was to place only one or two widgets at most on any given page, so the risk of duplication was worth taking.

Rendering a Widget within a Page

Since the widgets are React applications compiled as an independent, single JavaScript file, it was important to have some control over the loading process so the widgets could render in a specific position on the page once the DOM was ready. To achieve this, the widget file was required to define an entry point function on the window object, as described in this section from the aforementioned micro-frontends article. The function received the DOM element ID of the container tag where the widget should render. It looked something like the following:

import React from 'react';
import ReactDOM from 'react-dom';

/**
 * Renders the widget.
 *
 * @param {string} instanceId
 *   The HTML element ID where the React app will be rendered.
 * @param {Function} cb
 *   An optional callback executed after the widget has been rendered.
 */
function render(instanceId, cb) {
  const element = document.getElementById(instanceId);
  if (element) {
    ReactDOM.render(
      <YourReactAppHere>,
      element,
      () => { if (cb) { cb(element); } }
    );
  }
}

window.renderExampleWidget = render;

To make the entry point function easy to identify, the naming convention was to use the word render plus the machine name of the widget in CamelCase. For example, if the widget’s machine name (and its repository) was example-widget, the render function would be named renderExampleWidget. The widget’s machine name was an arbitrary attribute defined by the team working on the widget, but it became more relevant later.
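The convention can be sketched as a small helper (hypothetical; in practice the teams simply followed the convention by hand):

```javascript
// Derive the entry-point name from a widget machine name, per the
// convention above: "example-widget" -> "renderExampleWidget".
function entryPointName(machineName) {
  const camel = machineName
    .split('-')
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join('');
  return `render${camel}`;
}

console.log(entryPointName('example-widget')); // -> renderExampleWidget
```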

Finally, all widgets needed to implement a consistent compile process with two steps:

npm install
PUBLIC_URL='<some url>' npm run build

The first step installed all dependencies, and the second one generated a directory named build which contained the production-ready release of the widget, with the following structure:

build
 ├── js
 │   └── main.js
 └── media
     ├── some-picture.png
     └── some-font.ttf

The main.js library had the responsibility of loading any asset from the build/media directory. Using relative URLs prefixed by an environment variable named PUBLIC_URL was required to access these files from the main.js library, which looked something like this:

<img src={`${PUBLIC_URL}/media/picture.png`} />

This way, the PUBLIC_URL variable could remain empty for local development, and the assets were loaded. Once uploaded to the registry, the production build of the widget knew where it could locate the assets on the registry. The Create React App documentation contains more information about the PUBLIC_URL parameter.

The lack of a styles file in the build directory structure described above was probably noticeable. The reason is that CSS is global, and there is no way to encapsulate a style rule so it applies only to a specific section of a page. This means that allowing the React application to load a CSS file could potentially break styles on page sections outside of the widget scope. The decision made to prevent this situation was to leave these files out and allow only CSS-in-JS tools to handle widget styles.
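As a minimal illustration of the CSS-in-JS idea (a hand-rolled sketch; real widgets would use an established library):

```javascript
// Minimal CSS-in-JS flavor: style rules live in JavaScript objects and are
// serialized to inline styles, so they cannot leak outside the widget.
const styles = {
  wrapper: { padding: '1rem', fontFamily: 'sans-serif' },
};

function inlineStyle(style) {
  return Object.entries(style)
    .map(([prop, value]) => `${prop.replace(/[A-Z]/g, (c) => '-' + c.toLowerCase())}: ${value}`)
    .join('; ');
}

console.log(inlineStyle(styles.wrapper)); // -> padding: 1rem; font-family: sans-serif
```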

The widget registry is a centralized place from which production builds of widgets can be served and rendered into any website. The main goals of the registry were to maintain a list of published widgets as a web service, so it could be programmatically read by sites, and to provide a server from which the widgets could be directly injected. The sites loading widgets would not need to download widget files or keep them up to date.

The registry itself is a set of moving pieces that automate the management of widgets, formed by the following tools:

  • A Github repository where the registry code lives. The most relevant piece on the repository is a JSON file containing a listing of available widgets as an array of objects, each one of the objects representing a different widget. These objects contain information like human and machine names, the git repository where the widget code lives, and the latest version published on the registry.

  {
    "repositoryUrl": "https://github.com/Lullabot/example-widget",
    "machineName": "example-widget",
    "title": "Example Widget",
    "version": "v1.0.0"
  }

  • Some scripts to verify that the format of the JSON file matches a definition implemented using JSON Schema. This check runs on every commit to ensure the registry JSON file is never malformed. There are also a couple of other scripts that compile all registered widgets and upload the production builds to the widget server.
  • A CI tool, properly integrated with the repository. In this case, Travis took the responsibility of running the scripts mentioned above when a new pull request was created against the main branch. Once everything looked good, and the code merged into the main branch, the CI tool iterated over the list of registered widgets and downloaded a GitHub release from each widget’s repository whose name matched the value of the version field on the JSON object. At this point, the tool attempted to compile the widget and, when everything finished successfully, all compiled widgets were ready to be uploaded to a public server.
  • A public server where the widget’s production builds are available, along with the registry JSON file. For this project, IBM Cloud Object Storage was used, but any asset server or cloud provider can do the trick.
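
The shape check performed by those scripts can be sketched in plain JavaScript (the real project used JSON Schema; the field names follow the example entry above):

```javascript
// Sketch: verify a registry entry has the expected string fields and a
// GitHub-release-style version tag (e.g. "v1.0.0").
function isValidWidgetEntry(entry) {
  const requiredFields = ['repositoryUrl', 'machineName', 'title', 'version'];
  const hasAllFields = requiredFields.every(
    (field) => typeof entry[field] === 'string' && entry[field].length > 0
  );
  return hasAllFields && /^v\d+\.\d+\.\d+$/.test(entry.version);
}
```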

Once the deployment process was complete, and the registry JSON file and the production builds for all registered widgets were available on the server, it was time to render a widget directly from the registry.

The first step to inject a widget was to create a container within the HTML where the widget was rendered. In this example, a <div> tag was used with a unique value for the id attribute:

<div id="widget-comes-here"></div>

Next, the widget JavaScript application needed to be included in the HTML of the page. The library could be included in either the <head> or <body> tag.

<script src="//<widget registry domain>/<widget-machine-name>/js/main.js"></script>

Finally, the widget’s entry point function was called once the DOM was ready. Something like the following can be done (assuming the entry point function set by the widget library is named renderExampleWidget, but it could be any other name):

document.addEventListener("DOMContentLoaded", function() { 
  window.renderExampleWidget('widget-comes-here');
});

Putting everything together, this is how a simple HTML page looked when rendering a widget:

<html>
  <head>
    <script src="//<widget registry domain>/<widget-machine-name>/js/main.js"></script>
    <script>
      document.addEventListener("DOMContentLoaded", function() { 
        window.renderExampleWidget('widget-comes-here');
      });
    </script>
  </head>
  <body>
    <div id="widget-comes-here"></div>
  </body>
</html>

Drupal Integration

Once the widget registry was up and running and some of the widgets were registered, the micro-frontends needed to be integrated in order to work properly with Drupal. To achieve this, a custom module was created that includes the following features:

  • A custom entity type, named Widget Type, to represent the widget definitions available on the registry JSON file. This way, the site was able to identify which widgets were ready to be used, and which versions were the latest published for each of them.
  • A cron job to update local Widget Type entities with the latest information available from the registry JSON file.
  • A Drupal JavaScript behavior that took care of rendering all widgets on a page.
  • Some configuration to locate the URL of the registry, among others.

Widgets could be referenced from an entity reference field that was added to the band entity, only visible when a specific layout for the band is selected. This field allowed for creating a new widget instance from a specific widget type. Once the page was saved, Drupal rendered the band, attaching the widget’s JavaScript file as an external library with the behavior mentioned earlier. Then, the behavior executed the entry-point function for all widgets present on the page, and each one of them rendered within its parent band.
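
The core of that rendering behavior can be sketched as a pure function: given the widget instances present on the page (exposed, for example, through drupalSettings — an assumption for this sketch) and the global scope, determine which entry points are loaded and ready to be called:

```javascript
// Sketch: filter widget instances down to those whose entry point function
// has already been loaded into the global scope, so the behavior can call
// them safely. The instance shape is an illustrative assumption.
function callableWidgets(instances, globalScope) {
  return instances.filter(
    ({ entryPoint }) => typeof globalScope[entryPoint] === 'function'
  );
}
```

On a real page, the behavior would then call each returned entry point with its container id.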

It’s worth mentioning that the decision was made to load the widget source files directly from the registry rather than downloading them, in order to do the following:

  • Prevent the need to maintain the files locally
  • Improve performance, as the site does not need to serve those files
  • Serve a widget’s latest build immediately when a new version hits the registry. This is achieved because the new code is served under the same URL unless there is a major version change. Because of this logic, there is no need to clear caches on the Drupal site to have it render the new version; the client browser notices that the file's ETag header has changed and downloads the new widget library if needed.

Something to note about this last point is that there isn’t any CDN in front of the widget registry at the moment of writing this article. If one is put in place in the future, there would probably be some time until the new widget version is present on the CDN after deployment, so the new code will not render immediately. But, for now, it does!
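
The "same URL unless there is a major version change" rule can be sketched as follows. The exact path scheme is an illustrative assumption (the article's example URLs omit the version segment); the point is that minor and patch releases keep the same path:

```javascript
// Sketch: derive the publish path for a widget build. Minor and patch
// releases map to the same path, so consumers pick up new code without
// changes, while a new major version produces a new path.
function publishPath(machineName, versionTag) {
  const major = versionTag.replace(/^v/, '').split('.')[0];
  return `/${machineName}/v${major}`;
}
```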

In summary

The described architecture is a simplification of the widget system built for the project, but the article illustrates how the micro-frontends approach was implemented to allow the content editors of a Drupal site to reuse components on multiple pages and sites, how a centralized service was created to allow these components to be available to both Drupal and non-Drupal sites, and how they were integrated into Drupal specifically.

There are some additional topics to discuss, such as passing field values from Drupal to a widget, multi-language support based on Drupal’s current language, allowing external CSS styles that do not interfere with the rest of the page, or a CLI tool to manage the registry JSON file; those may be the basis for future articles.

Jan 14 2020

How can more maintainable custom code in Drupal be written? Refactor it to follow SOLID software design principles. As long as SOLID purity isn't pursued into an endless rabbit hole, SOLID principles can improve project maintainability. When a project has low complexity, it is worthwhile to respect these principles because they're simple to implement. When a project is complex, it is worthwhile to respect these principles to make the project more maintainable.

Wrapped entities are custom classes around existing entities that will allow custom Drupal code to be more SOLID. This is achieved in three steps:

  1. Hide the entities from all custom code.
  2. Expose meaningful interaction methods in the wrapped entity.
  3. Implement the wrapped entities with the SOLID principles in mind.

Finally, many of the common operational challenges when implementing this pattern are solved when using the Typed Entity module.

SOLID Principles

More information can be found about the SOLID principles in the Wikipedia article, but here’s a quick summary:

Single responsibility principle: A class should only have a single responsibility, that is, only changes to one part of the software's specification should be able to affect the specification of the class.

Open–closed principle: Software entities should be open for extension, but closed for modification.

Liskov substitution principle: Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program. This is often referred to as design by contract.

Interface segregation principle: Many client-specific interfaces are better than one general-purpose interface.

Dependency inversion principle: One should "depend upon abstractions, [not] concretions."

Here, the focus will be on the wrapped entity pattern, in which PHP classes wrap Drupal entities to encode a project's business logic and make the code more SOLID. This is the logic that makes a project unique in the world and requires custom code to be written in the first place because it’s not completely abstracted by libraries and contributed modules.

First, understand the theory, and then look at the actual code.

Building a Library Catalog

To help exemplify this pattern, imagine being tasked with building the catalog for a local library. Some of these examples may be solved using a particular contrib module. However, this exercise focuses on custom code.

Entities Are Now Private Objects

Business logic in Drupal revolves around entities and the relationships between them. Often, the business logic is scattered in myriad hooks across different modules. The maintainability of custom code can be improved by containing all this custom logic in a single place. Then, the myriad of hooks can make simple calls to this central place.

The business logic is often contained around entities. However, entities are very complex objects in Drupal 8. They’re already associated with many features; they can be moderated, rendered, checked for custom access rules, part of lists, extended via modules, and more. How can the complexity of the business logic be managed when starting from such a complex object? The answer is by hiding the entity from the custom code behind a facade. These objects are called wrapped entities.

In this scenario, only the facade can interact directly with the entity at any given time. The entity itself becomes a hidden object that only the facade communicates with. If $entity->field_foo appears anywhere in the code, that means that the wrapped entity needs a new method (or object) that describes what's being done.

Wrapped Entities

In the future, it may be possible to specify a custom class for entities of a given bundle. In that scenario, Node::load(12) will yield an object of type \Drupal\physical_media\Entity\Book, or a \Drupal\my_module\Entity\Building. Nowadays, it always returns a \Drupal\node\Entity\Node. While this is a step forward, it’s better to hide the implementation details of entities. Adding the business logic to entity classes increases the API surface and is bad for maintenance. This approach also creates a chance of naming collisions between custom methods and methods added to entity classes in the future.

A wrapped entity is a class that can access the underlying entity (also known as "the model") and exposes some public methods based on it. The wrapper is not returned from a Node::load call, but created on demand. Something like \Drupal::service(RepositoryManager::class)->wrap($entity) will return the appropriate object: Book, Movie, etc. In general, an object implementing WrappedEntityInterface.

A wrapped entity combines some well-known OOP design patterns.

In this catalog example, the class could be (code abridged for readability):

    interface BookInterface extends
      LoanableInterface,
      WrappedEntityInterface,
      PhysicalMediaInterface,
      FindableInterface {}

    class Book extends WrappedEntityBase implements BookInterface {

      private const FIELD_NAME_ISBN = 'field_isbn';

      // From LoanableInterface.
      public function loan(Account $account, DateTimeInterval $interval): LoanableInterface { /* … */ }
      // From PhysicalMediaInterface.
      public function isbn(): string {
        return $this->entity->get(static::FIELD_NAME_ISBN)->value;
      }
      // From FindableInterface.
      public function getLocation(): Location { /* … */ }

    }

More complete and detailed code will follow. For now, this snippet shows that using this pattern can turn a general-purpose object (the entity) into a more focused one. This is the S in SOLID. When considering the semantics of each method written, there are commonalities that group methods together. Those groups become interfaces, and all of a sudden, it's apparent that interface segregation has been implemented. SOLID. With those granular interfaces, things like the following can be done:

array_map(
  function (LoanableInterface $loanable) { /* … */ },
  $loanable_repository->findOverdue(new \DateTime('now'))
);

Instead of:

array_map(
  function ($loanable) {
    assert($loanable instanceof Book || $loanable instanceof Movie);
    /* … */
  },
  $loanable_repository->findOverdue(new \DateTime('now'))
);

Coding to the object's capabilities and not to the object class is an example of dependency inversion. SOLID.

Entity Repositories

Wrapped entity repositories are services that retrieve and persist wrapped entities. In this case, they're also used as factories to create the wrapped entity objects, even if that is not very SOLID. This design decision was made to avoid yet another service when working with wrapped entities. The aim is to improve DX.

While a wrapped entity Movie deals with the logic of a particular node, the MovieRepository deals with logic around movies that applies to more than one movie instance (like MovieRepository::findByCategory), and the mapping of Movie to database storage.
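
A hypothetical sketch of such a repository method could look like the following. The bundle, field name, and query details are illustrative assumptions, and the wrapping reuses the RepositoryManager service shown earlier rather than any confirmed repository API:

```php
<?php

namespace Drupal\my_module\TypedEntityRepositories;

use Drupal\node\Entity\Node;
use Drupal\typed_entity\RepositoryManager;
use Drupal\typed_entity\TypedRepositories\TypedEntityRepositoryBase;

class MovieRepository extends TypedEntityRepositoryBase {

  /**
   * Finds all movies in a given category, as wrapped entities.
   *
   * @return \Drupal\my_module\WrappedEntities\Movie[]
   *   The wrapped movie entities.
   */
  public function findByCategory(string $category): array {
    // Field and bundle names are assumptions for this sketch.
    $ids = \Drupal::entityQuery('node')
      ->condition('type', 'movie')
      ->condition('field_category', $category)
      ->execute();
    $manager = \Drupal::service(RepositoryManager::class);
    return array_map(
      static function (Node $node) use ($manager) {
        return $manager->wrap($node);
      },
      Node::loadMultiple($ids)
    );
  }

}
```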

Further Refinements

Sometimes, it's preferred to not have only one type per bundle. It would be reasonable for books tagged with the Fantasy category to upcast to \Drupal\physical_media\Entity\FantasyBook. This technique allows for staying open for extension (add that new fantasy refinement) while staying closed for change (Book stays the same). This takes O into account. SOLID. Since FantasyBook is just a subtype of Book, this new extension can be used anywhere a book can be used. This is called the Liskov substitution principle, our last letter. SOLID.

Of course, the same applies to heterogeneous repositories like LoanableRepository::findOverdue.

Working On The Code

This section shows several code samples on how to implement the library catalog. To better illustrate these principles, the feature requirements have been simplified. While reading, try to imagine complex requirements and how they fit into this pattern.

Typed Entity Module

I ported the Typed Entity module to Drupal 8 to better support this article, and it has helped a lot with my Drupal 7 projects. I decided to do a full rewrite of the code because I have refined my understanding of the problem during the past four years.

The code samples leverage the base classes in Typed Entity and its infrastructure. The full code samples can be accessed in this repository. This repo contains the latest code and corrections.

Hands-On

The first focus will be to create a wrapped entity for a book. This facade will live under \Drupal\physical_media\WrappedEntities\Book. The class looks like this (for now):

namespace Drupal\physical_media\WrappedEntities;

use Drupal\physical_media\Location;
use Drupal\typed_entity\WrappedEntities\WrappedEntityBase;

class Book extends WrappedEntityBase implements LoanableInterface, PhysicalMediaInterface, FindableInterface {

  const FIELD_NAME_ISBN = 'field_isbn';
  const FIELD_NAME_LOCATION = 'field_physical_location';

  // From PhysicalMediaInterface.
  public function isbn(): string {
    return $this->getEntity()->get(static::FIELD_NAME_ISBN)->value;
  }

  // From FindableInterface.
  public function getLocation(): Location {
    $location = $this->getEntity()->get(static::FIELD_NAME_LOCATION)->value;
    return new Location(
      $location['building'],
      $location['floor'],
      $location['aisle'],
      $location['section']
    );
  }

}

Then, the repository will be registered in the service container. The physical_media.services.yml contains:

services:
  physical_media.typed_entity.repository.book:
    # If you don't have custom logic for your repository you can use the base
    # class and save yourself from writing another empty class.
    # class: Drupal\typed_entity\TypedRepositories\TypedEntityRepositoryBase
    class: Drupal\physical_media\TypedEntityRepositories\BookRepository
    parent: Drupal\typed_entity\TypedRepositories\TypedEntityRepositoryBase
    public: true
    tags:
      -
        name: typed_entity_repository
        entity_type_id: node
        bundle: book
        wrapper_class: Drupal\physical_media\WrappedEntities\Book

Important bits are:

  • Specify the parent key to inherit additional service configuration from the contrib module.
  • If there's no reason to have a custom repository, the base class under the class key can be used.
  • Add the service tag with all the required properties:
    • name: this should always be typed_entity_repository.
    • entity_type_id: the entity type ID of the entities that will be wrapped. In this case, books are nodes.
    • bundle: the bundle. The bundle can be omitted only if the entity type has no bundles, like the user entity.
    • wrapper_class: the class that contains the business logic for the entity. This is the default class to use if no other variant is specified. Variants will be covered later.

Once these are done, the wrapped entity can be integrated into the hook system (or wherever it's needed). An integration example that restricts access to books based on their physical location could be:

<?php
use Drupal\Core\Access\AccessResult;
use Drupal\Core\Session\AccountInterface;
use Drupal\node\NodeInterface;
use Drupal\physical_media\WrappedEntities\FindableInterface;
use Drupal\typed_entity\RepositoryManager;

/**
 * Implements hook_node_access().
 */
function physical_media_node_access(NodeInterface $node, $op, AccountInterface $account) {
  if ($node->getType() !== 'book') {
    return;
  }
  $book = \Drupal::service(RepositoryManager::class)->wrap($node);
  assert($book instanceof FindableInterface);
  $location = $book->getLocation();
  if ($location->getBuilding() === 'area51') {
    return AccessResult::forbidden('Nothing to see.');
  }
  return AccessResult::neutral();
}

Better yet, Book could be refactored so that the leaking business logic (the check for specific buildings) moves into it, implementing a reasonable interface for it (favor existing ones), like Book … implements \Drupal\Core\Access\AccessibleInterface.

…
use Drupal\Core\Access\AccessibleInterface;
use Drupal\Core\Access\AccessResult;

class Book … implements …, AccessibleInterface {
  …
  // From AccessibleInterface.
  public function access($operation, AccountInterface $account = NULL, $return_as_object = FALSE) {
    $location = $this->getLocation();
    if ($location->getBuilding() === 'area51') {
      return AccessResult::forbidden('Nothing to see.');
    }
    return AccessResult::neutral();
  }

}

While evolving the hook into:

function physical_media_node_access($node, $op, $account) {
  return $node->getType() === 'book' ?
    \Drupal::service(RepositoryManager::class)
      ->wrap($node)
      ->access($op, $account, TRUE)
    : AccessResult::neutral();
}

The code can still be improved to remove the check on 'book'. Books are checked for access because of knowledge about the business logic. That leak can be avoided by trying to code to object capabilities instead (D in SOLID).

function physical_media_node_access($node, $op, $account) {
  try {
    $wrapped_node = \Drupal::service(RepositoryManager::class)->wrap($node);
  }
  catch (RepositoryNotFoundException $exception) {
    return AccessResult::neutral();
  }
  return $wrapped_node instanceof AccessibleInterface
    ? $wrapped_node->access($op, $account, TRUE)
    : AccessResult::neutral();
}

After that, the hook can remain the same when access control is implemented in movies. Not having to trace the whole codebase for potential changes when new features are added or existing ones are changed is a big win for maintainability.

Handling Complexity

As code grows in complexity, making it maintainable becomes more difficult. This is a natural consequence of complexity, so it's best to be pragmatic and not take the SOLID principles as a hard rule. They exist to serve a purpose, not the other way around.

There are some ways to contain complexity. One is to avoid passing other entities into the wrapper methods by using wrapped entities instead. This creates more discipline in making the business logic explicit. This example gets the wrapped user entity as the author of an article.

public function owner(): ?WrappedEntityInterface {
  $owner_key = $this->getEntity()->getEntityType()->getKey('owner');
  if (!$owner_key) {
    return NULL;
  }
  $owner = $this->getEntity()->{$owner_key}->entity;
  if (!$owner instanceof EntityInterface) {
    return NULL;
  }
  $manager = \Drupal::service(RepositoryManager::class);
  assert($manager instanceof RepositoryManager);
  return $manager->wrap($owner);
}

Another way to contain complexity is by splitting wrapped entities into sub-classes. If there are several methods that don’t apply to some books, variants can be beneficial. Variants are wrappers for a given bundle that are specific to a subgroup.

The Typed Entity Examples submodule contains an example of how to create a variant. In that example, the repository for articles hosts variant conditions. Calls to RepositoryManager::wrap($node) with article nodes now yield Article or BakingArticle depending on whether or not the node is tagged with the term 'baking'. The contrib module comes with a configurable condition to create a variant based on the content of a field. That is the most common use case, but any condition can be written. If a different wrapper is preferred for Wednesday articles, that (odd) condition can be written by implementing VariantConditionInterface.

Summary

Decades-old principles were highlighted as still being relevant to today’s Drupal projects. SOLID principles can guide the way custom code is written and result in much more maintainable software.

Given that entities are one of the central points for custom business logic, the wrapped entities pattern was covered. This facade enables making the business logic explicit in a single place while hiding the implementation details of the underlying entity.

Finally, the Typed Entity module as a means of standardization across projects when implementing this pattern was explored. This tool can only do so much because ultimately, the project’s idiosyncrasies cannot be generalized; each project is different. However, it is a good tool to help promote more maintainable patterns.

Jan 10 2020

Senior Drupal Engineer at MTech LLC. In his spare time, he is a frequent core contributor, core contribution mentor, project application reviewer, and general nice guy. He's from the Midwest in the US but now lives in Nicaragua where he splits his time between his family, clients and running a largely Nicaraguan-based Drupal services business.

Jan 02 2020

Site owners and administrators often want to send emails to users telling them about content creation or changes. This sounds basic, but there are a lot of questions. What exactly needs to be accomplished? Some examples could include:

  • Send notifications to authors when their content is published.
  • Send notifications to authors when users comment on it.
  • Send notifications to administrators when content is created or updated.
  • Send notifications to site users when content is commented on.
  • Mail the content of a node to site users when content is created or updated.
  • And the list goes on…

The first step in assessing solutions is to identify the specific need by asking the following questions:

Who needs to be notified? 

  • So many options! It could be an editor, the author, all site users, all site users of a given role, a specific list of users, anyone who commented on the node, or anyone who subscribed to the node.

When should notifications be created?

  • A little simpler, but it could be when the node is created, when it is published, or when it is commented on. A message might be initiated every time the action happens, or postponed and accumulated into a digest of activity that goes out once a day or once a week or once a month.

When should notifications be sent?

  • This could be immediate, sent to a queue and processed on cron, or scheduled for a specific time.

What should deliver the notification?

  • Is it both possible and feasible for the web server to be responsible for delivering the notification? Does a separate server need to deliver the mail, or perhaps a third party mail delivery service needs to be used? 

How should recipients be notified?

What should recipients receive?

  • Notifications could be just messages saying that the node has been created, changed, or published. It might include a summary of the node content, or the entire content of the node could be sent in the email. This could also be a digest of multiple changes over time.

How much system flexibility is required?

  • This could encompass anything from a very simple system, like a fixed list of users or roles who are always notified, all the way to complicated systems that allow users to select which content types they want to be notified about, maybe even allowing them to subscribe and unsubscribe from specific nodes.

Clarifying the answers to these questions will help define the solution you need and which module(s) might be options for your situation. There are dozens of modules that have some sort of ability to send emails to users. While I did not review all of them, below are reviews of a few of the most widely-used solutions.

Note: There are also several ways to use the Rules module (possibly in conjunction with one of these solutions), but I did not investigate Rules-based solutions in this collection of ideas. 

Admin Content Notification

The Admin Content Notification module is designed to notify administrators when new content is created, but it can also be used to notify any group of users, administrators or not. The configuration allows you to choose either a hard-coded list of email recipients or send notifications to all users that have one or more specified roles. Since you can send notifications to users by role, you could create a new ‘email’ role and assign that role to anyone who should be notified.

Some of the capabilities of this module include:

  • Selecting content types that should generate notifications.
  • Choosing whether to send notifications only on node creation or also on updates, and whether to notify about unpublished nodes or only published nodes.
  • Selecting roles that have permissions to generate notifications.
  • Selecting either a hard-coded list of email addresses OR a list of roles that should receive notifications.
  • Creating a notification message to send when content is created or changed.
  • Adding the node content to the message by using tokens.

The settings for this module are in an odd location, in a tab on the content management page.

This module is extremely easy to set up; just download and enable the module and fill out the configuration settings. Because of its simplicity, it has little flexibility. All content types and situations use the same message template, and there is no way for users to opt in or out of the notifications. There is no integration with changes in workflow states, only with the published or unpublished status of a node. This module provides no capability to send notifications when comments are added or changed. If this capability matches what you need, this is a very simple and easy solution.

Workbench Email

Workbench Email is part of the Workbench collection, but it also works just with core Content Moderation. This module adds the ability to send notifications based on changes in workflow states. 

You begin by creating any number of email templates. In each template, you identify which content types it applies to, who should get emailed, and what the email should contain. The template uses tokens, so you can include tokens to display the body and other custom fields on the node in the message.

Once you’ve created templates, you edit workflow transitions and attach the templates to appropriate transitions. The screenshot below is a bit confusing. The place where it says Use the following email templates is actually below the list of available templates, not above the list. In this example, there is only one template available, called Notification, which has been selected for this transition.

Documentation for the Drupal 8 version does not exist and the screenshots on the project page don’t match the way the site pages look. There is, however, good integration with core Workflow and Content Moderation, and there is a certain amount of flexibility provided in that you can create different messages and send messages to different people for different transitions. There is a documented bug when uninstalling this module, so test it on a site where you can recover your original database until you decide if you want to keep it! This module provides no capability to send notifications when comments are added or changed.

Comment Notify

This module fills a gap in many of the other solutions: a way to get notified about comments on a post. It’s a lightweight solution to allow users to be notified about comments on content they authored or commented on. Configuration is at admin/config/people/comment_notify:

The module has some limitations:

  • Only the author can automatically receive notices about all comments on a thread.
  • You won’t receive notices about other comments unless you add a comment first.
  • You can only unsubscribe from comments by adding another comment and changing your selection.
  • There is only one template and it applies to all content types.
  • You can’t automatically subscribe groups of users to notifications. Each user manages their own subscriptions.

With the above restrictions, the module is an easy way to get the most obvious functionality: be notified about comments on content you created, and be notified about replies to any comments you made. This module could be combined with solutions that only provide notifications about changes to nodes for a more comprehensive solution.

Message Stack

The Message Stack is an API that you must implement using custom code. Unlike the above modules, this is absolutely not an out-of-the-box solution. It’s much more complex but highly flexible. Much of the available documentation is for the Drupal 7 version, so I spent quite some time trying to understand how the Drupal 8 version should work. 

The Message module creates a new “Message” entity. Your custom code then generates a new message for whatever actions you want to track—node creation or updates, comment creation or updates—all using Drupal’s hooks. You can create any number of token-enabled templates for messages, and implement whichever template applies for each situation in the appropriate hook. Using a separate module in the stack, Message Notify, you choose notification methods. It comes with email or SMS plugins, and you can create other notification methods by creating custom plugins. A third, separate module, Message Subscribe, is used to allow users to subscribe to content using the Flag module. You then create custom code that implements Drupal hooks (like hook_node_insert()) to create the appropriate message(s) and send the messages to the subscriber list.

One head-scratcher was how to set up the module to populate the email subject and message. You do it by creating two messages in the template. The first contains text that will go into the email subject, the second contains the text for the email body. You’ll see two new view modes on the message, mail_body and mail_subject. The subject is populated with Partial 0 (the first message value), the body with Partial 1 (the second message value).

Another thing that took me a while to get my head around is that you designate who the message should be sent to by making them the author of the message, which feels odd. "Author" is a single value field, so if you want to send a message to multiple people, you create a basic message, clone it for each recipient, and change the author on the cloned version for each person on the list to send them their own copy of the message. This way each recipient gets their own message, making it possible to dynamically tweak the specific message they receive.
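As a rough sketch of that clone-and-reassign pattern (illustrative only—Message Subscribe performs the equivalent internally, and the function name and variables here are hypothetical):

```php
use Drupal\message\Entity\Message;

/**
 * Sends each recipient their own copy of a base message.
 *
 * Assumes $message is a saved Message entity and $recipient_ids is an
 * array of user IDs to notify.
 */
function mymodule_send_copies(Message $message, array $recipient_ids) {
  foreach ($recipient_ids as $uid) {
    // Clone the base message and make the recipient its "author".
    $clone = $message->createDuplicate();
    $clone->setOwnerId($uid);
    // The per-recipient copy could be tweaked here before saving.
    $clone->save();
  }
}
```

Because each recipient owns a distinct message entity, per-person customization of the notification text becomes straightforward.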

The Message Subscribe module does all that work for you, but it’s useful to know that’s what it’s doing. It creates Flag module flags that allow users to select which content they want to be notified about and hooks that allow you to alter the list of subscribers and the messages each receives in any way you like. That means that your custom code only needs to create the right message in the right hook. If you want the message to just go to the list of people who already subscribed to the content, you do something like this:

use Drupal\message\Entity\Message;
use Drupal\node\Entity\Node;

/**
 * Implements hook_node_update().
 */
function mymodule_node_update(Node $node) {
  $message = Message::create([
    'template' => 'update_node',
    'uid' => $node->getOwnerId(),
  ]);
  $message->set('field_node_reference', $node);
  $message->set('field_published', $node->isPublished());
  $message->save();
  $subscribers = \Drupal::service('message_subscribe.subscribers');
  $subscribers->sendMessage($node, $message);
}

You can also add to or alter the list of subscribers and/or the specific text you send to each one. To add subscribers, you create a DeliveryCandidate with two arrays. The first is an array of the flags you want them to have, the second is an array of notification methods that should apply. This is very powerful since you don’t have to wait until users go and subscribe to each node. You can “pseudo-subscribe” a group of users this way. This is probably most applicable for admin users since you might want them to automatically be subscribed to all content. Note that this also eliminates any possibility that they can unsubscribe themselves, so you’d want to use this judiciously.

use Drupal\message\MessageInterface;
use Drupal\message_subscribe\Subscribers\DeliveryCandidate;

/**
 * Implements hook_message_subscribe_get_subscribers().
 */
function mymodule_message_subscribe_get_subscribers(MessageInterface $message, array $subscribe_options, array $context) {
  $admin_ids = [/* list of uids goes here */];
  $uids = [];
  foreach ($admin_ids as $uid) {
    $uids[$uid] = new DeliveryCandidate([], ['email'], $uid);
  }
  return $uids;
}

A lot of configuration is created automatically when you enable the Message Stack modules, but some important pieces will be missing. For instance, message templates and fields might be different in different situations, so you’re on your own to create them.

I created a patch in the issue queue with a comprehensive example of Message, Message Notify, and Message Subscribe that contains the code I created to review these modules. It’s a heavily-commented example that creates some initial configuration, combines code from all the Message example modules, and incorporates what I learned from various issues and documentation found around the Internet, plus a good deal of trial and error on my test site. It’s a comprehensive alternative to the other examples, which are pretty basic, and should answer many of the most common questions about how to configure the modules. I included lots of ideas of things you can do in the custom code, but you’ll probably want to remove some of them and alter others, using what’s there as a starting point for your own code. Read the README file included in that patch, visit the configuration pages, and review the example code for more ideas of how to use the stack.

Note that I ran into several issues with Message Subscribe and submitted potential patches for them. Most of them related to the Message Subscribe Email module, and I ultimately realized I didn’t even need that module and removed it from my example. I found an easier way to email the subscribers using just one line of code in my example module. The remaining issue I ran into was one that created serious problems when I tried to queue the subscription process. I suggest you review and use that patch if you alter the list of recipients and use the queue to send messages.

Which module to use?

That’s it for the modules I reviewed this time. Which one(s) should you use? This is Drupal! Take your pick! There are many ways to solve any problem.

Seriously, it really comes down to whether you want something out-of-the-box that requires little or no code, or whether you need more customization and flexibility than that. 

In summary:

  • If you want an easy, out-of-the-box, solution to notify users about published content and comments, a combination of Admin Content Notification and Comment Notify could work well.
  • If you need to notify users about changes in workflow states, Workbench Email is an easy solution.
  • If you need a highly customized solution and you are comfortable writing code and using Drupal’s APIs, the Message stack has a lot of potential.

For my project, an intranet, I ultimately selected the Message stack. I created a public repository with a custom module,  Message Integration, that integrates the Message stack with Swiftmailer and Diff to automatically subscribe all users to new nodes and email the subscriber list with the node’s content when nodes are published, a diff of the changes when new revisions are created, and the text of new comments on nodes they subscribe to. The code is too opinionated to be a contrib module, but it could be forked and used on other sites as a quick start to a similar solution.

Any of these modules (and probably lots of others) might work for you, depending on the specific needs of your site and your ability and desire to write custom code. Hopefully, this article will help you assess which modules might fit your communication needs, and provide some comparison for any other solutions you investigate.

Dec 14 2019

Matt and Mike talk about the Drupal 8's core media module, including the processes, issues, and ecosystems needed to make it happen.

Nov 20 2019

By default, Drupal Views comes with several access plugins that will probably cover most of what you need, across various scenarios. You can restrict access by role or permission, and by default, a new view comes with the Permission plugin selected, with the “View published content” option.

Plugin options show up in the UI under Page Settings -> Access.

What if you need something specific? Maybe you have created some of your own access rules that you can't distill to roles or permissions. Maybe you have a view called the Temple of Gozer and only two users can access it: the Gatekeeper or the Keymaster. (You work for a very strange company.)

Determining if someone is the Gatekeeper or the Keymaster involves a lot of rules and condition trees that I won’t detail here, but they are all encapsulated in a helper class that has methods isUserGatekeeper and isUserKeymaster, and each of these methods takes a parameter of Drupal\Core\Session\AccountInterface.

It looks something like this:

use Drupal\Core\Session\AccountInterface;
use Symfony\Component\Routing\Route;

class TempleOfGozerAccessHandler {
  public function isUserTheGatekeeper(AccountInterface $account) {
    // Code to figure it out.
  }

  public function isUserTheKeymaster(AccountInterface $account) {
    // Code to figure it out.
  }

  public function access(AccountInterface $account, Route $route) {
    return $this->isUserTheGatekeeper($account) || $this->isUserTheKeymaster($account);
  }
}

You’ll notice an extra access() method that also takes a Route object, which is explained further below.

How do you get a view to use this class to determine access? You write a custom Views Access Plugin.

The Beginning of a Views Access Plugin

Most access plugins will extend the Drupal\views\Plugin\views\access\AccessPluginBase class, and will need to define at least three methods: summaryTitle(), access(), and alterRouteDefinition(). 

Since this is a plugin, your class needs an annotation to describe some metadata. Here is what our TempleOfGozerAccess plugin looks like:

/**
 * @ingroup views_access_plugins
 *
 * @ViewsAccess(
 *   id = "temple_of_gozer_access",
 *   title = @Translation("Temple of Gozer"),
 *   help = @Translation("Access will be granted to only the Gatekeeper or the Keymaster.")
 * )
 */
class TempleOfGozerAccess extends AccessPluginBase {

}

The summaryTitle() method returns what shows up in the Views UI, so it should return a descriptive name that distinguishes it from other access plugins.

The access() method implementation might look something like this:

public function access(AccountInterface $account) {
  // $this->templeAccess is assumed to be an injected
  // TempleOfGozerAccessHandler service.
  return $this->templeAccess->isUserTheGatekeeper($account) || $this->templeAccess->isUserTheKeymaster($account);
}

With a method name like “access” in our access plugin, you might be tempted to think that’s all that matters, but you also need to implement alterRouteDefinition().

Altering the View Route

Why does this need to be done? Why is this extra method required?

Each of these methods handles a different context in which Drupal determines permissions and access. The access() method is called when a request is made against the view. During route discovery, however, only alterRouteDefinition() is called: every time Drupal rebuilds its route table, it invokes this method. That rebuild shouldn’t happen very often because it’s an expensive operation.

At first, this can feel a little awkward, but it allows some flexibility.

The simplest implementation of alterRouteDefinition() would look like:

public function alterRouteDefinition(Route $route) {
  $route->setRequirement('_access', 'TRUE');
}

This allows every user to visit the Temple of Gozer view route, but then the access() method would be called, and they would not see any content. They could see the Temple, but the door would be shut in their faces, which might be what you want.

But what if you want to keep the existence of the Temple secret? You want the view accessible via a menu item, and that menu item is only visible to the Keymaster or Gatekeeper. In that case, you need to set the permissions at the route level, which is what determines the visibility of and access to menu items.

Here is our new alterRouteDefinition():

public function alterRouteDefinition(Route $route) {
  $route->setRequirement('_custom_access', 'temple_of_gozer.access_handler::access');
} 

This assumes that the TempleOfGozerAccessHandler class is defined as a service with the name of “temple_of_gozer.access_handler.”
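For reference, a minimal services definition for that handler might look like the following (the file name, namespace, and class path here are hypothetical—adjust them to your module):

```yaml
# temple_of_gozer.services.yml
services:
  temple_of_gozer.access_handler:
    class: Drupal\temple_of_gozer\TempleOfGozerAccessHandler
```

With this in place, the `temple_of_gozer.access_handler::access` callback referenced in the route requirement resolves to the handler’s access() method.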

Clear your caches, and your menu item pointing to the view will only be visible to the Gatekeeper or Keymaster.

Adding User-defined Options to the Access Plugin

The functioning view access plugin only allows access to exactly who you want based on your complicated business logic inside TempleOfGozerAccessHandler.

But now, another view needs to be created—one that uses the exact same access rules, but can only be accessed during a full moon. (Did I mention that this is a very strange company?).

Instead of creating a whole new access plugin, you can define some options to allow more flexibility, and your current access plugin can meet both use cases. You'll need to add the following methods to your TempleOfGozerAccess class:

protected function defineOptions() {
  $options = parent::defineOptions();
  $options['restriction'] = ['default' => 'none'];

  return $options;
}

public function buildOptionsForm(&$form, FormStateInterface $form_state) {
  parent::buildOptionsForm($form, $form_state);
  $form['restriction'] = [
    '#type' => 'select',
    '#title' => $this->t('Additional Restriction'),
    '#default_value' => $this->options['restriction'],
    '#options' => [
      'none' => $this->t('None'),
      'full_moon' => $this->t('Full moon'),
    ],
  ];
}

When a user selects your access plugin in a view, it will now present them with this form, similar to how the Permission plugin will allow a user to select a permission.

For usability, you might want to update your summaryTitle() to take these new options and the addition of future ones into account:

 public function summaryTitle() {
   if (isset($this->options['restriction'])) {
     return $this->t('Temple of Gozer (Restriction: @restriction)', ['@restriction' => $this->options['restriction']]);
   }

   return $this->t('Temple of Gozer');
 }

The access function will look like this:

public function access(AccountInterface $account) {
  $full_moon = TRUE;
  if ($this->options['restriction'] === 'full_moon') {
    $full_moon = $this->templeAccess->isFullMoon();
  }

  return ($this->templeAccess->isUserTheGatekeeper($account) || $this->templeAccess->isUserTheKeymaster($account)) && $full_moon;
}

This is starting to get a little unwieldy, so if more options are added, you will want to think about encapsulating that logic somewhere else, but for now, it meets its purposes and is still easy to read and reason about.

At this point, you might also need this option passed through to the route. You can pass the selected option in the alterRouteDefinition() method, and then use it in the access handler.

public function alterRouteDefinition(Route $route) {
  $route->setRequirement('_custom_access', 'temple_of_gozer.access_handler::access');
  $route->setOption('_additional_restriction', $this->options['restriction']);
}

And then the TempleOfGozerAccessHandler::access():

public function access(AccountInterface $account, Route $route) {
  $full_moon = TRUE;
  if ($route->getOption('_additional_restriction') === 'full_moon') {
    $full_moon = $this->isFullMoon();
  }
  return ($this->isUserTheGatekeeper($account) || $this->isUserTheKeymaster($account)) && $full_moon;
}

You probably noticed that both the plugin access() method and the TempleOfGozerAccessHandler::access() method have very similar logic. That is something you'll want to refactor if things start to get more complicated. However, since the TempleOfGozerAccessHandler::access() callback used for the route takes a Route object as well as an AccountInterface, you can’t just use one function in place of the other. You can’t just...ahem...cross the streams, if you will.

Conclusion

You have survived this terrible homage to the movie Ghostbusters, so pat yourself on the back. But you should now have the tools required to build custom views access plugins for any need or occasion. If you want another example, the core Permission plugin is a great one, especially if you need to think about more granular caching related to your access rules.

And don’t worry. It’s totally safe to cross the streams*.

*Lullabot is not responsible for any catastrophe that results from crossing the streams.

Nov 14 2019

Mike and Matt drag Adam Bergstein onto the show to talk about the free SimplyTest.me service, which is used to quickly spin up Drupal environments for quick patch testing, reviews, and more. 

Feb 08 2019

Welcome to the latest version of Lullabot.com! Over the years (since 2006!), the site has gone through at least seven iterations, with the most recent launching last week at the 2019 Lullabot team retreat in Palm Springs, California.

Back to a more traditional Drupal architecture

Our previous version of the site was one of the first (and probably the first) decoupled Drupal ReactJS websites. It launched in 2015.

Decoupling the front end of Drupal gives many benefits including easier multi-channel publishing, independent upgrades, and less reliance on Drupal specialists. However, in our case, we don’t need multi-channel publishing, and we don’t lack Drupal expertise.

One of the downsides of a decoupled architecture is increased complexity. Building blocks of our decoupled architecture included a Drupal 7 back end, a CouchDB middle-layer, ReactJS front end, and Node.js server-side application. Contrast this with a standard Drupal architecture where we only need to support a single Drupal 8 site.

The complexity engendered by decoupling a Drupal site means developers take longer to contribute certain types of features and fixes to the site. In the end, that was the catalyst for the re-platforming. Our developers only work on the site between client projects so they need to be able to easily understand the overall architecture and quickly spin up copies of the site.

Highlights of the new site

In addition to easily swapping in and out developers, the primary goals of the website were ease of use for our non-technical marketing team (hi Ellie!), a slight redesign, and to maintain or improve overall site speed.

Quickly rolling developers on and off

To aid developers quickly rolling on and off the project, we chose a traditional Drupal architecture and utilized as little custom back-end code as possible. When we found holes in functionality, we wrote modules and contributed them back to the Drupal ecosystem. 

We also standardized to Lando and created in-depth documentation on how to create a local environment. 

Ease of use

To enable our marketing team to easily build landing pages, we implemented Drupal’s new experimental Layout Builder module. This enables a slick drag-and-drop interface to quickly compose and control layouts and content.

We also simplified Drupal’s content-entry forms by removing and reorganizing fields (making heavy use of the Field Group module), providing useful descriptions for fields and content types, and sub-theming the Seven theme to make minor styling adjustments where necessary.

Making the front end lean and fast 

Typically, around 80% of the delay between navigating to a webpage and being able to use it is attributable to the front end. Browsers are optimized to quickly identify and pull in critical resources to render the page as soon as possible, but there are many enhancements that can be made to help them do so. To that end, we made a significant number of front-end performance optimizations to enable the rendering of the page in a half-second or less.

  • Using vanilla JavaScript instead of a framework such as jQuery enables the JS bundle size to be less than 27kb uncompressed (to compare, the previous version’s bundle size was over 1MB). Byte for byte, JavaScript impacts the performance of a webpage more than any other type of asset. 
  • We heavily componentize our stylesheets and load them only when necessary. Combined with the use of lean, semantic HTML, the browser can quickly generate the render-tree—a critical precursor to laying out the content.
  • We use HTTP2 to enable multiplexed downloads of assets while still keeping the number of HTTP requests low. Used with a CDN, this dramatically lowers the time-to-first-byte metric and time to download additional page assets.
  • We heavily utilize resource-hints to tell the browser to download render-blocking resources first, as well as instructing the browser to connect third-party services immediately.
  • We use the Quicklink module to pre-fetch linked pages when the browser is idle. This makes subsequent page loads nearly instantaneous.
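The resource hints mentioned above can be sketched with standard `<link>` elements like these (the URLs are placeholders, not the actual assets used on the site):

```html
<!-- Fetch a render-blocking stylesheet as early as possible. -->
<link rel="preload" href="/themes/custom/example/css/critical.css" as="style">
<!-- Open a connection to a third-party origin before it is needed. -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- Fetch a likely next-page resource during idle time. -->
<link rel="prefetch" href="/about">
</link>
```

Preload and preconnect shorten the critical rendering path for the current page, while prefetch (the mechanism Quicklink relies on) speeds up subsequent navigations.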

There are still some performance @todos for us, including integrating WEBP images (now supported by Chrome and Firefox), and lazy-loading images. 

Contributing modules back to the Drupal ecosystem

During the development, we aimed to make use of contributed modules whenever it made sense. This allowed us to implement almost all of the features we needed. Only a tiny fraction of our needs was not covered by existing modules. One of Lullabot’s core values is to Collaborate Openly which is why we decided to spend a bit more time on our solutions so we could share them with the rest of the community as contributed modules.

Using Layouts with Views

Layout Builder builds upon the concept of layout regions. These regions are defined in custom modules; editors use Layout Builder to insert a layout and then place content into its regions.

Early on, we realized that the Views module lacked the ability to output content into these layouts. Lullabot’s Director of Technology, Karen Stevenson, created the Views Layout module to solve this issue. This module creates a new Views row plugin that enables the Drupal site builder to easily select the layout they want to use, and select which regions to populate within that layout.

Generating podcast feeds with Drupal 8

Drupal can generate RSS feeds out of the box, but podcast feeds are not supported. To get around this limitation, Senior Developer Mateu Aguiló Bosch created the Podcast module, which complies with podcast standards and iTunes requirements.

This module utilizes the Views interface to map your site’s custom Drupal fields to the necessary podcast and iTunes fields. For more information on this, check out Mateu’s tutorial video here.

Speeding up Layout Builder’s user interface

As stated earlier, Layout Builder still has “experimental” status. One of the issues that we identified is that the settings tray can take a long time to appear when adding a block into layout builder.

Lullabot Hawkeye Tenderwolf identified the bottleneck as the time it takes Drupal to iterate through the complete list of blocks in the system. To work around this, Karen Stevenson created the Block Blacklist module, in which you can specify which blocks to remove from loading. The result is a dramatically improved load time for the list of blocks.

Making subsequent page loads instantaneous 

A newer pattern on the web (called the PRPL pattern) includes pre-fetching linked pages and storing them in a browser cache. As a result, subsequent page requests return almost instantly, making for an amazing user experience. 

Bringing this pattern into Drupal, Senior Front-end Dev Mike Herchel created the Quicklink module using Google’s Quicklink JavaScript library. You can view the result of this by viewing this site’s network requests in your developer tool of choice. 

Keeping users in sync using the Simple LDAP module

Lullabot stores employee credentials in an internal LDAP server. We want all the new hires to gain immediate access to as many services as possible, including Lullabot.com. To facilitate this, we use the Simple LDAP module (which several bots maintain) to keep our website in sync with our LDAP directory.

This iteration of Lullabot.com required the development of some new features and some performance improvements for the D8 version of the module.

Want to learn more?

Built by Bots

While the site was definitely a team effort, special thanks go to Karen Stevenson, Mike Herchel, Wes Ruvalcaba, Putra Bonaccorsi, David Burns, Mateu Aguiló Bosch, James Sansbury, and, last but not least, Jared Ponchot for the beautiful designs.

Feb 08 2019

This Episode's Guests

Karen Stevenson


Karen is one of Drupal's great pioneers, co-creating the Content Construction Kit (CCK) which has become Field UI, part of Drupal core.

Mateu Aguiló Bosch


Loves to be under the sun. Very passionate about programming.

Wes Ruvalcaba


Wes is a designer turned front-end dev with a strong eye for UX.

Putra Bonaccorsi


An expert in content management systems like Drupal, Putra Bonaccorsi creates dynamic, user friendly sites and applications.

Feb 06 2019

If you are a programmer looking to improve your professional craft, there are many resources toward which you will be tempted to turn. Books and classes on programming languages, design patterns, performance, testing, and algorithms are some obvious places to look. Many are worth your time and investment.

Despite the job of a programmer often being couched in technical terms, you will certainly be working for and with other people, so you might also seek to improve in other ways. Communication skills, both spoken and written, are obvious candidates for improvement. Creative thinking and learning how to ask proper questions are critical when honing requirements and rooting out bugs, and time can always be managed better. These are not easily placed in the silo of “software engineering,” but are inevitable requirements of the job. For these less-technical skills, you will also find a plethora of resources claiming to help you in your quest for improvement. And again, many are worthwhile.

For all of your attempts at improvement, however, you will be tempted to remain in the non-fiction section of your favorite bookstore. This would be a mistake. You should be spending some of your time immersed in good fiction. Why fiction? Why made-up stories about imaginary characters? How will that help you be better at your job? There are at least four ways.

Exercise your imagination

Programming is as much a creative endeavor as it is technical mastery, and creativity requires a functioning imagination. To quote Einstein:

Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.

You can own a hammer, be really proficient with it, and even have years of experience using it, but it takes imagination to design a house and know when to use that hammer for that house. It takes imagination to get beyond your own limited viewpoint. This can make it easier to make connections and analogies between things that might not have seemed related which is a compelling definition of creativity itself.

Your imagination works like any muscle. Use it or lose it. And just like any other kind of training, it helps to have an experienced guide. Good authors of fiction are ready to be your personal trainers.

Understanding and empathy

The best writers can craft characters so real that they feel like flesh and blood, and many of those people can be similar to actual people you know. Great writers are, first and foremost, astute observers of life, and their insight into minds and motivations can become your insight. Good fiction can help you navigate real life.

One meta-study suggests that reading fiction, even just a single story, can help improve someone’s social awareness and reactions to other people. For any difficult situation or person you come across in your profession, there has probably been a writer that has explored that exact same dynamic. The external trappings and particulars will certainly be different, but the motivations and foibles will ring true.

In one example from Jane Austen’s Mansfield Park, Sir Thomas is a father who puts too much faith in the proper appearances, and after sternly talking to his son about an ill-advised scheme, the narrator of the books says, “He did not enter into any remonstrance with his other children: he was more willing to believe they felt their error than to run the risk of investigation.”

You have probably met a person like this. You might have dealt with a project manager like this who will limit communication rather than “run the risk of investigation.” No news is good news. Austen has a lot to teach you about how one might maneuver around this type of personality. Or, you might be better equipped to recognize such tendencies in yourself and snuff them out before they cause trouble for yourself and others.

Remember, all software problems are really people problems at their core. Software is written by people, and the requirements are determined by other people. None of the people involved in this process are automatons. Sometimes, how one system interfaces with another has more to do with the relationship between two managers than any technical considerations.

Navigating people is just as much a part of a programmer’s job as navigating an IDE. Good fiction provides good landmarks.

Truth and relevance

This is related to the previous point but deserves its own section. Good fiction can tell the truth with imaginary facts. This is opposed to much of the news today, which can lie with the right facts, either by omitting some or through misinterpretation.

Plato, in his ideal republic, wanted to kick out all of the poets because, in his mind, they did nothing but tell lies. On the other hand, Philip Sidney, in his Defence of Poesy, said that poets lie the least. The latter is closer to the truth, even though it might betray a pessimistic view of humanity.

Jane Austen’s novels are some of the most insightful reflections on human nature. Shakespeare’s plays continue to last because they tap into something higher than “facts”. N.N. Taleb writes in his conversation on literature:

...Fiction is a certain packaging of the truth, or higher truths. Indeed I find that there is more truth in Proust, albeit it is officially fictional, than in the babbling analyses of the New York Times that give us the illusions of understanding what’s going on.

Homer, in The Iliad, gives us a powerful portrait of the pride of men reflected in the capriciousness of his gods. And, look at how he describes anger (from the Robert Fagles translation):

...bitter gall, sweeter than dripping streams of honey, that swarms in people's chests and blinds like smoke.

That is a description of anger that rings true and sticks. And maybe, just maybe, after you have witnessed example after vivid example of the phenomenon in The Iliad, you will be better equipped to stop your own anger from blinding you like smoke.

How many times will modern pundits get things wrong, or focus on things that won’t matter in another month? How many technical books will be outdated after two years? Homer will always be right and relevant.

You also get the benefit of aspirational truths. Who doesn’t want to be a faithful friend, like Samwise Gamgee, to help shoulder the heaviest burdens of those you love? Sam is a made up character. Literally does not exist in this mortal realm. Yet he is real. He is true.

Your acts of friendship might not save the world from unspeakable evil, but each one reaches for those lofty heights. Your acts of friendship are made a little bit nobler because you know that they do, in some way, push back the darkness.

Fictional truths give the world new depth to the reader. C.S. Lewis, in defending the idea of fairy tales, wrote:

He does not despise real woods because he has read of enchanted woods: the reading makes all real woods a little enchanted.

Likewise, to paraphrase G.K. Chesterton, fairy tales are more than true — not because they tell us dragons exist, but because they tell us dragons can be beaten.

The right words

One of the hardest problems in programming is naming things. For variables, functions, and classes, the right name can bring clarity to code like a brisk summer breeze, while the wrong name brings pain accompanied by the wailing and gnashing of teeth.

Sometimes, the difference between the right name and the wrong name is thin and small, but represents a vast distance, like the difference between “lightning” and “lightning bug,” or the difference between “right” and “write”.  

Do you know who else struggles with finding the right words? Great authors. And particularly, great poets. Samuel Taylor Coleridge once said:

Prose = words in their best order; — poetry = the best words in the best order. 

"The best words in the best order" could also be a definition of good, clean code. If you are a programmer, you are a freaking poet.

Well, maybe not, but this does mean that a subset of the fiction you read should be poetry, though any good fiction will help you increase your vocabulary. Poetry will just intensify the phenomenon. And when you increase your vocabulary, you increase your ability to think clearly and precisely.

While this still won’t necessarily make it easy to name things properly - even the best poets struggle and bleed over the page before they find what they are looking for - it might make it easier.

What to read

Notice the qualifier “good”. That’s important. There were over 200,000 new works of fiction published in 2015 alone. Life is too short to spend time reading bad books, especially when there are too many good ones to read for a single lifetime. I don’t mean to be a snob, just realistic.

Bad fiction will, at best, be a waste of your time. At worst, it can lie to you in ways that twist your expectations about reality by twisting what is good and beautiful. It can warp the lens through which you view life. The stories we tell ourselves and repeat about ourselves shape our consciousness, and so we want to tell ourselves good ones.

So how do you find good fiction? One heuristic is to let time be your filter. Read older stuff. Most of the stuff published today will not last and will not be the least bit relevant twenty years from now. But some of it will. Some will rise to the top and become part of the lasting legacy of our culture, shining brighter and brighter as the years pass by and scrub away the dross. But it's hard to know the jewels in advance, so let time do the work for you.

The other way is to listen to people you trust and get recommendations. In that spirit, here are some recommendations from myself and fellow Lullabots:

Jan 21 2019

Tom Sliker started Broadstreet Consulting more than a decade ago, and has made Drupal a family affair. We dragged Tom out of the South Carolina swamps and into DrupalCamp Atlanta to get the scoop.  How does Tom service more than 30 clients on a monthly basis with just a staff of five people?  His turn-key Aegir platform, that's how!

Jan 07 2019

Note: This article is a re-post from Mateu's personal blog.

I have been very vocal about the JSON:API module. I wrote articles, recorded videos, spoke at conferences, wrote supporting software, and at some point, I proposed adding JSON:API to Drupal core. Then Wim and Gabe joined the JSON:API team as part of their daily jobs. While they took care of most of the issues in the JSON:API queue, I could attend to the other API-First projects more successfully. I have not left the JSON:API project by any means; on the contrary, I'm more involved than before. However, I have transitioned my involvement to feature design and feature sign-off, sprinkled with the occasional development. Wim and Gabe have not only been very empathetic and supportive of my situation, but they have also taken a lot of ownership of the project. JSON:API is not my baby anymore; instead, we now have joint custody of our JSON:API baby.

As a result of this collaboration, Gabe, Wim, and I have tagged a stable release of the second version of the JSON:API module. This took a humongous amount of work, but we are very pleased with the result. It has been a long journey, and we are finally there. The JSON:API maintainers are very excited about it.

I know that switching to a new major version is always a little bit scary. You update the module and hope for the best. With major version upgrades, there is no guarantee that your use of the module will still work. This is unfortunate as a site owner, but including breaking changes is often the best solution for the module's maintenance and for adding new features. The JSON:API maintainers are aware of this. I have gone through the process myself and been frustrated by it. That is why we have tried to make the upgrade process as smooth as possible.

What Changed?

If you are a long-time Drupal developer, you have probably wondered, "How do I do this D7 thing in D8?" When that happens, the best solution is to search the change records for Drupal core to see if it changed since Drupal 7. Change records are a fantastic tool for tracking what changed in each release. They allow you to consider only the issues that have user-facing changes, avoiding lots of noise from internal changes and bug fixes. In summary, they let users understand how to migrate from one version to another.

Very few contributed modules use change records. This may be because module maintainers are unaware that this feature is available for contrib. It could also be because maintaining a module is a big burden, and manually writing change records is yet another time-consuming task. The JSON:API module has comprehensive change records on all the things you need to pay attention to when upgrading to JSON:API 2.0.

Change Records

As I mentioned above, if you want to understand what has changed since JSON:API 8.x-1.24 you only need to visit the change records page for JSON:API. However, I want to highlight some important changes.

Config Entity Mutation is now in JSON:API Extras

Mutating config entities is no longer possible using JSON:API alone. This feature was removed because the Entity API does a great job of ensuring that access rules are respected, but the Configuration Entity API does not support validation of configuration entities yet. That means the responsibility for validation falls on the client, which has security and data integrity implications. We felt we ought to move this feature to JSON:API Extras, given that JSON:API 2.x will be added to Drupal core.

No More Custom Field Type Normalizers

This is by far the most controversial change. Even though custom normalizers for JSON:API have been strongly discouraged for a while, JSON:API 2.x will enforce that. Sites that have been in violation of the recommendation will now need to refactor to supported patterns. This was driven by the limitations of the serialization component in Symfony. In particular, we aim to make it possible to derive a consistent schema per resource type. I explained why this is important in this article.

Supported patterns are:

  • Create a computed field. Note that a true computed field will be calculated on every entity load, which may be a good or a bad thing depending on the use case. You can also create stored fields that are calculated on entity presave. The linked documentation has examples for both methods.
  • Write a normalizer at the Data Type level, instead of field or entity level. As a benefit, this normalizer will also work in core REST!
  • Create a Field Enhancer plugin like these, using JSON:API Extras. This is the pattern closest to custom normalizers, and it requires you to define the schema of the enhancer.

File URLs

JSON:API pioneered the idea of having a computed url field on file entities that an external application can use without modification. Since then, this feature has made it into core with some minor modifications. Now url is no longer a computed field, but a computed property on the uri field.

Special Properties

The official JSON:API specification reserves the type and id keys. These keys cannot exist inside of the attributes or relationships sections of a resource object. That's why we are now prepending {entity_type}_ to the key name when those are found. In addition to that, internal fields like the entity ID (nid, tid, etc.) will have drupal_internal__ prepended to them. Finally, we have decided to omit the uuid field given that it already is the resource ID.
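To make the renaming rule concrete, here is a minimal sketch in Python. This is a hypothetical illustration of the naming convention described above, not the module's actual (PHP) implementation, and the list of internal ID fields is just an example:

```python
# Illustrative sketch of the JSON:API 2.x key-renaming convention.
# Field names and the internal-ID list below are example assumptions.
RESERVED_KEYS = {"type", "id"}  # reserved by the JSON:API specification


def safe_field_name(entity_type, field_name,
                    internal_ids=("nid", "tid", "uid", "vid")):
    """Return the attribute key JSON:API 2.x would expose for a field."""
    if field_name in RESERVED_KEYS:
        # Reserved keys get the entity type prepended.
        return f"{entity_type}_{field_name}"
    if field_name in internal_ids:
        # Internal Drupal IDs are clearly marked as internal.
        return f"drupal_internal__{field_name}"
    return field_name


print(safe_field_name("node", "type"))   # node_type
print(safe_field_name("node", "nid"))    # drupal_internal__nid
print(safe_field_name("node", "title"))  # title
```

The uuid field, as noted, is simply omitted because it already serves as the resource ID.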

Final Goodbye to _format

JSON:API 1.x dropped the need to have the unpopular _format parameter in the URL. Instead, it allowed the more standard Accept: application/vnd.api+json header to be used for format negotiation. JSON:API 2.x continues this pattern. The header is now required, which makes 4XX error responses cacheable, an important performance improvement.
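In practice, a consumer just sends that header on every request. A minimal sketch using Python's standard library (the URL and resource path are placeholders, not a real endpoint):

```python
import urllib.request

# Placeholder URL; substitute your own site's JSON:API endpoint.
req = urllib.request.Request(
    "https://example.com/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},  # required by JSON:API 2.x
)
print(req.get_header("Accept"))  # application/vnd.api+json
```

The response would then be negotiated as JSON:API without any _format query parameter.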

Benefits of Upgrading

You have seen that these changes are not very disruptive, and even when they are, it is very simple to move to the new patterns. This will allow you to upgrade to the new version with relative ease. Once you've done that, you will notice some immediate benefits:

  • Performance improvements. Performance improved overall, but especially when using filtering, includes, and sparse fieldsets. Some of those improvements came with the help of early adopters during the RC period!
  • Better compatibility with JSON:API clients. That's because JSON:API 2.x also fixes several spec compliance edge case issues.
  • We pledge that you'll be able to transition cleanly to JSON:API in core. This is especially important for future-proofing your sites today.

Benefits of Starting a New Project with the Old JSON:API 1.x

There are truly none. Version 2.x builds on top of 1.x so it carries all the goodness of 1.x plus all the improvements.

If you are starting a new project, you should use JSON:API 2.x.

JSON:API 2.x is what new installs of Contenta CMS will get, and remember that Contenta CMS ships with the most up-to-date recommendations in decoupled Drupal. Star the project on GitHub and keep an eye on it, if you want.

What Comes Next?

Our highest priority at this point is the inclusion of JSON:API in Drupal core. That means that most of our efforts will be focused on responding to feedback to the core patch and making sure that it does not get stalled.

In addition to that, we will likely tag JSON:API 2.1 very shortly after JSON:API 2.0. That release will include:

  1. Binary file uploads using JSON:API.
  2. Support for version negotiation, which allows the latest or default revision to be retrieved and supports the Content Moderation module in core. This will be instrumental in decoupled preview systems.

Our roadmap includes:

  1. Full support for revisions, including accessing a history of revisions. Mutating revisions is blocked on Drupal core providing a revision access API.
  2. Full support for translations. That means that you will be able to create and update translations using JSON:API. That adds on top of the current ability to GET translated entities.
  3. Improvements in hypermedia support. In particular, we aim to include extension points so Drupal sites can include useful related links like add-to-cart, view-on-web, track-purchase, etc.
  4. Self-sufficient schema generation. Right now we rely on the Schemata module in order to generate schemas for the JSON:API resources. That schema is used by OpenAPI to generate documentation and the Admin UI initiative to auto-generate forms. We aim to have more reliable schemas without external dependencies.
  5. More performance improvements. Because JSON:API only provides an HTTP API, implementation details are free to change. This already enabled major performance improvements, but we believe it can still be significantly improved. An example is caching partial serializations.

How Can You Help?

The JSON:API project page has a list of ways you can help, but here are several specific things you can do if you would like to contribute right away:

  1. Write an experience report. This is a Drupal.org issue in the JSON:API queue that summarizes the things that you've done with JSON:API, what you liked, and what we can improve. You can see examples of those here. We have improved the module greatly thanks to these in the past. Help us help you!
  2. Help us spread the word. Tweet about this article, blog about the module, promote the JSON:API tooling in JavaScript, etc.
  3. Review the core patch.
  4. Jump into the issue queue to write documentation, propose features, author patches, review code, etc.

Photo by Sagar Patil on Unsplash.

Dec 19 2018

Earlier this year, Lullabot began a four-month-long content strategy engagement for the state of Georgia. The project involved coming up with a migration plan from Drupal 7 to Drupal 8 for 85 of their state agency sites, with an eye towards a future where content can be more freely and accurately shared between sites. Our first step was to get a handle on all the content on their existing sites. How much content were we dealing with? How was it organized? What did it contain? In other words, we needed a content inventory. Each of these 85 sites was its own individual install of Drupal, with the largest containing almost 10K unique URLs, so this one was going to be a doozy. We hadn't done a content strategy project of this scale before, and our existing toolset wasn't going to cut it, so I started doing some research to see what other tools might work.

Open up any number of content strategy blogs and you will find an endless supply of articles explaining why content inventories are important, and templates for storing said content inventories. What you will find a distinct lack of is the how: how does the data get from your website to the spreadsheet for review? For smaller sites, manually compiling this data is reasonably straightforward, but once you get past a couple hundred pages, this is no longer realistic. In past Drupal projects, we have been able to use a dump of the routing table as a great starting point, but with 85 sites even this would be unmanageable. We quickly realized we were probably looking at a spider of some sort. What we needed was something that met the following criteria:

  • Flexible: We needed the ability to scan multiple domains into a single collection of URLs, as well as the ability to include and exclude URLs that met specific criteria. Additionally, we knew that there would be times when we might want to just grab a specific subset of information, be it by domain, site section, etc. We honestly weren't completely sure what all might come in handy, so we wanted some assurance that we would be able to flexibly get what we needed as the project moved forward.
  • Scalable: We are looking at hundreds of thousands of URLs across almost a hundred domains, and we knew we were almost certainly going to have to run it multiple times. A platform that charged per-URL was not going to cut it.
  • Repeatable: We knew this was going to be a learning process, and, as such, we were going to need to be able to run a scan, check it, and iterate. Any configuration should be saveable and cloneable, ideally in a format suitable for version control which would allow us to track our changes over time and more easily determine which changes influenced the scan in what ways. In a truly ideal scenario, it would be scriptable and able to be run from the command line.
  • Analysis: We wanted to be able to run a bulk analysis on the site’s content to find things like reading level, sentiment, and reading time. 

Some of the first tools I found were hosted solutions like Content Analysis Tool and DynoMapper. The problem is that these tools charge on a per-URL basis and wouldn't have the level of repeatability and customization we needed. (This is not to say that these aren't fine tools; they just weren't what we were looking for on this project.) We then began to architect our own tool, but we really didn't want to add the baggage of writing it onto an already hectic schedule. Thankfully, we were able to avoid that, and in the process we discovered an incredibly rich set of tools for creating content inventories, which have very quickly become an absolutely essential part of our toolkit. They are:

  • Screaming Frog SEO Spider: An incredibly flexible spidering application. 
  • URL Profiler: A content analysis tool which integrates well with the CSVs generated by Screaming Frog.
  • GoCSV: A robust command line tool created with the sole purpose of manipulating very large CSVs very quickly.

Let's look at each of these elements in greater detail, and see how they ended up fitting into the project.

Screaming Frog

The main workspace for the Screaming Frog SEO Spider

Screaming Frog is an SEO consulting company based in the UK. They also produce the Screaming Frog SEO Spider, an application which is available for both Mac and Windows. The SEO Spider has all the flexibility and configurability you would expect from such an application. You can very carefully control what you do and don’t crawl, and there are a number of ways to report the results of your crawl and export it to CSVs for further processing. I don’t intend to cover the product in depth. Instead, I’d like to focus on the elements which made it particularly useful for us.

Repeatability

A key feature in Screaming Frog is the ability to save both the results of a session and its configuration for future use. The results are important to save because Screaming Frog generates a lot of data, and you don’t necessarily know which slice of it you will need at any given time. Having the ability to reload the results and analyze them further is a huge benefit. Saving the configuration is key because it means that you can re-run the spider with the exact same configuration you used before, meaning your new results will be comparable to your last ones. 

Additionally, the newest version of the software allows you to run scans using a specific configuration from the command-line, opening up a wealth of possibilities for scripted and scheduled scans. This is a game-changer for situations like ours, where we might want to run a scan repeatedly across a number of specific properties, or set our clients up with the ability to automatically get a new scan every month or quarter.

Extraction

The Screaming Frog extraction configuration screen

As we explored what we wanted to get out of these scans, we realized that it would be really nice to be able to identify some Drupal-specific information (NID, content type) along with the more generic data you would normally get out of a spider. Originally, we had thought we would have to link the results of the scan back to Drupal’s menu table in order to extract that information. However, Screaming Frog offers the ability to extract information out of the HTML in a page based on XPath queries. Most standard Drupal themes include information about the node inside the CSS classes they create. For instance, here is a fairly standard Drupal body tag.

<body class="html not-front not-logged-in no-sidebars page-node page-node- page-node-68 node-type-basic-page">

As you can see, this class contains both the node’s ID and its content type, which means we were able to extract this data and include it in the results of our scan. The more we used this functionality, the more uses we found for it. For instance, it is often useful to be able to identify pages with problematic HTML early in a project so you can get a handle on problems that will come up during migration. We were able to do things like count the number of times a given tag was used within the content area, allowing us to identify pages with inline CSS or JavaScript which would have to be dealt with later.
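As a sketch of what that extraction yields, here is equivalent parsing in Python. Screaming Frog itself is configured with an XPath query against the body tag's class attribute; the regular expressions below are an illustrative stand-in, not its internal logic:

```python
import re

# The body tag classes from a standard Drupal 7 theme, as shown above.
body_class = ("html not-front not-logged-in no-sidebars page-node "
              "page-node- page-node-68 node-type-basic-page")

# Pull the node ID out of the "page-node-NN" class.
nid_match = re.search(r"\bpage-node-(\d+)\b", body_class)
# Pull the content type out of the "node-type-TYPE" class.
type_match = re.search(r"\bnode-type-([\w-]+)", body_class)

nid = int(nid_match.group(1)) if nid_match else None
content_type = type_match.group(1) if type_match else None
print(nid, content_type)  # 68 basic-page
```

With these two values attached to every row of the crawl, the spreadsheet can be sliced by content type without ever touching Drupal's database.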

We’ve only begun to scratch the surface of what we can do with this XPath extraction capability, and future projects will certainly see us dive into it more deeply. 

Analytics

A sample of the metrics available when integrating Google Analytics with Screaming Frog

Another set of data you can bring into your scan is associated with information from Google Analytics. Once you authenticate through Screaming Frog, it will allow you to choose what properties and views you wish to retrieve, as well as what individual metrics to report within your result set. There is an enormous number of metrics available, from basics like PageViews and BounceRate to extended reporting on conversions, transactions, and ad clicks. Bringing this analytics information to bear during a content audit is the key to identifying which content is performing and why. Screaming Frog also has the ability to integrate with Google Search Console and SEO tools like Majestic, Ahrefs, and Moz.

Cost

Finally, Screaming Frog provides a straightforward yearly license fee with no upcharges based on the number of URLs scanned. This is not to say it is cheap; the cost is around $200 a year. But having a predictable cost, without worrying about how much we used it, was key to making this part of the project work.

URL Profiler

The main workspace for URL Profiler

The second piece of this puzzle is URL Profiler. Screaming Frog scans your sites and catalogs their URLs and metadata; URL Profiler analyzes the content which lives at those URLs and provides extended information about it. This is as simple as importing a CSV of URLs, choosing your options, and clicking Run. Once the run is done, you get back a spreadsheet which combines your original CSV with the data URL Profiler has put together. As you can see, it provides an extensive number of integrations, many of them SEO-focused. Many of these require additional subscriptions to be useful; however, the software itself provides a set of content quality metrics if you check the Readability box. These include:

  • Reading Time
  • 10 most frequently used words on the page
  • Sentiment analysis (positive, negative, or neutral)
  • Dale-Chall reading ease score
  • Flesch-Kincaid reading ease score
  • Gunning-Fog estimation of years of education needed to understand the text
  • SMOG Index estimation of years of education needed to understand the text

While these algorithms need to be taken with a grain of salt, they provide very useful guidelines for the readability of your content, and in aggregate can be really useful as a broad overview of how you should improve. For instance, we were able to take this content and create graphs that ranked state agencies from least to most complex text, as well as by average read time. We could then take read time and compare it to "Time on Page" from Google Analytics to show whether or not people were actually reading those long pages. 
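For a sense of what these scores actually compute, here is the standard Flesch reading-ease formula (the word, sentence, and syllable counts below are illustrative; URL Profiler derives the inputs from the page text for you):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch reading-ease score: higher means easier to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


# Illustrative counts: 100 words, 5 sentences, 150 syllables.
score = flesch_reading_ease(100, 5, 150)
print(round(score, 1))  # roughly 59.6, "fairly difficult" prose
```

Longer sentences and more syllables per word both push the score down, which is why averaging it per agency gave a quick ranking of textual complexity.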

On the downside, URL Profiler isn't scriptable from the command-line the way Screaming Frog is. It is also more expensive, requiring a monthly subscription of around $40 a month rather than a single yearly fee. Nevertheless, it is an extremely useful tool which has earned a permanent place in our toolbox. 

GoCSV

One of the first things we noticed when we ran Screaming Frog on the Georgia state agency sites was that they had a lot of PDFs. In fact, they had more PDFs than they had HTML pages. We really needed an easy way to strip those rows out of the CSVs before we ran them through URL Profiler because URL Profiler won’t analyze downloadable files like PDFs or Word documents. We also had other things we wanted to be able to do. For instance, we saw some utility in being able to split the scan out into separate CSVs by content type, or state agency, or response code, or who knows what else! Once again I started architecting a tool to generate these sets of data, and once again it turned out I didn't have to.

GoCSV is an open source command-line tool that was created with the sole purpose of performantly manipulating large CSVs. The documentation goes into these options in great detail, but one of the most useful functions we found was a filter that allows you to generate a new subset of data based on the values in one of the CSV’s cells. This allowed us to create extensive shell scripts to generate a wide variety of data sets from the single monolithic scan of all the state agencies in a repeatable and predictable way. Every time we did a new scan of all the sites, we could, with just a few keystrokes, generate a whole new set of CSVs which broke this data into subsets that were just documents and just HTML, and then for each of those subsets, break them down further by domain, content type, response code, and pre-defined verticals. This script would run in under 60 seconds, despite the fact that the complete CSV had over 150,000 rows. 
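The same splitting idea can be sketched in a few lines of Python. GoCSV does this faster and straight from the shell; the column name below is an assumption about the export, not a guaranteed Screaming Frog header:

```python
import csv
from collections import defaultdict


def split_by_column(in_path, column):
    """Group the rows of one big CSV by the value in a given column."""
    groups = defaultdict(list)
    with open(in_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            groups[row[column]].append(row)
    return groups


def write_groups(groups, fieldnames, prefix):
    """Write each group back out as its own CSV file."""
    for value, rows in groups.items():
        safe = value.replace("/", "_") or "unknown"
        with open(f"{prefix}-{safe}.csv", "w", newline="",
                  encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
```

Chaining a handful of operations like this in a shell script is exactly what made regenerating all the per-domain and per-content-type subsets a sub-minute, one-command task.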

Another use case we found for GoCSV was to create pre-formatted spreadsheets for content audits. These large-scale inventories are useful, but when it comes to digging in and doing a content audit, there’s just way more information than is needed. There were also a variety of columns that we wanted to add for things like workflow tracking and keep/kill/combine decisions which weren't present in the original CSVs. Once again, we were able to create a shell script which allowed us to take the CSVs by domain and generate new versions that contained only the information we needed and added the new columns we wanted. 

What It Got Us

Having put this toolset together, we were able to get some really valuable insights into the content we were dealing with. For instance, by having an easy way to separate the downloadable documents from HTML pages, and then even further break those results down by agency, we were able to produce a chart which showed the agencies that relied particularly heavily on PDFs. This is really useful information to have as Georgia’s Digital Services team guides these agencies through their content audits. 

Ratio of documents to HTML pages per state agency

One of the things that URL Profiler brought into play was the number of words on every page in a site. Here again, we were able to take this information, cut out the downloadable documents, and take an average across just the HTML pages for each domain. This showed us which agencies tended to cram more content into single pages rather than spreading it around into more focused ones. This is also useful information to have on hand during a content audit because it indicates that you may want to prioritize figuring out how to split up content for these specific agencies.

Average word count per state agency, grouped by how many pages of text they have

Finally, after running our scans, I noticed that for some agencies, the amount of published content they had in Drupal was much higher than what our scan had found. We were able to put together the two sets of data and figure out that some agencies had been simply removing links to old content like events or job postings, but never archiving it or removing it. These stranded nodes were still available to the public and indexed by Google, but contained woefully outdated information. Without spidering the site, we may not have found this problem until much later in the process. 

Looking Forward

Using Screaming Frog, URL Profiler, and GoCSV in combination, we were able to put together a pipeline for generating large-scale content inventories that is repeatable and predictable. This was a huge boon not just for the State of Georgia and other clients, but also for Lullabot itself as we embark on our own website redesign and content strategy. We have only scratched the surface of what these products can do, and this article only begins to cover what we learned and implemented. Stay tuned for more articles that will dive more deeply into different aspects of what we learned and highlight more tips and tricks that make generating inventories easier and much more useful.

Dec 13 2018

Mike and Matt talk with the team that helped implement content strategy on Georgia.gov.

Dec 05 2018

It's never too late to start thinking about user experience design on a project. To help ensure a project's success, it's best to have a UX designer involved as early as possible. However, circumstances aren't always ideal, and user experience can become an afterthought. Sometimes it isn't until the project is already well on its way that questions around user experience start popping up, and a decision is made to bring in a professional to help craft the necessary solutions.

What’s the best way for a UX designer to join a project that is well on its way? In this article, we will discuss some actions that UX designers can take to help create a smooth process when joining a project already in progress.

General Onboarding

Planning and implementing an onboarding process can help set the tone for the remainder of the project. If it’s disorganized and not well planned out, you can feel underprepared for the first task, which can lead to a longer design process. It’s helpful to designate a project team member to help with onboarding. It should be someone who knows the project well and can help answer questions about the project and process. This is usually a product owner or a project manager, but it isn’t limited to either. If you haven’t been assigned someone to help you with the onboarding process, reach out to help identify which team member would be best for this role.

During the onboarding process, discuss what user experience issues the team is hoping to solve, and review the background of significant decisions that were made. This will help you evaluate the current state of the project as well as the history of the decision-making process. You should also make sure you understand the project goals and the intended audience. Ask for any documentation around usability testing, acceptance criteria, competitive reviews, or notes from meetings that discuss essential features. Don’t be afraid to ask questions to help you fully grasp the project itself. And don’t forget to ask why. Sometimes adopting the mindset of a five-year-old when trying to understand will help you find the answers you’re seeking.

Process Evaluation

How you climb a mountain is more important than reaching the top. - Yvon Chouinard

Processes help ensure that the project goes smoothly, is on time, and stays on budget. They can also be a checkpoint for all those involved. If a process that includes UX design doesn't already exist, work with the team to establish one for discussing, tracking, and reviewing work. If you feel that a process step is missing or a current system isn't working, speak up and work with the team to revise it. Make sure to include any critical process that the team may be lacking. You also may want to make sure that discussions around any important features include a UX designer. Ask if there are any product meetings that you should be joining to help give input as early as possible.

Schedule Weekly Design Reviews

One example of improving the process to include UX Design is scheduling weekly meetings to review design work that’s in progress. This also gives project members an opportunity to ask questions and discuss upcoming features and acceptance criteria.

Incorporate Usability Testing

Another suggestion is to include usability tests on a few completed important features before moving ahead. The results of the usability tests may help give direction or answer questions the product team has been struggling with. It can also help prioritize upcoming features or feature changes. The most important thing to remember is that usability testing can help improve the product, so it’s tailored to your specific users, and this should be communicated to the project team.

Collect General User Feedback

Establishing early on the best way to collect and give feedback on a design or feature can help streamline the design process. Should it be written feedback? Or would a meeting work better where everyone can speak up? Sometimes, when multiple people are reviewing and giving feedback, it’s best to appoint one person to collect and aggregate the input before it filters down to you.

Track Project Progress

You also want to discuss the best way to track work in progress. If your team is using an agile process, one idea is to include design tickets in the same software that you’re using to keep track of sprints, such as Jira or Trello. Discuss the best way to summarize features, add acceptance criteria, and track input in whatever system you decide to use.

Prioritization of Work

Efficiency is doing things right; effectiveness is doing the right things. - Peter Drucker

The team should be clear on priorities when it comes to work, features, and feedback. Joining a team mid-project can be very overwhelming for both the designers and the stakeholders, and creating clear priorities can help set expectations and make it clear to both sides what the team should focus on first. If a list of priorities doesn't already exist, create one. It doesn't have to be fancy; a simple Excel or Google Sheets spreadsheet will do. You can create separate priority lists for things like upcoming features that need design, QA, or user feedback. You can also combine everything into a single list if that works better for your team. Just make sure that it links to or includes as much detail as possible. In the example below, a feature that has completed acceptance criteria is linked to a ticket in Jira that explains all of the details.

Google Sheets

It’s also helpful to group related features together, even though they may have different priorities. This will help you think about how to approach a feature without needing to rework it later down the line. Be proactive: ask questions about the priority of items if something doesn't make sense to you. If needed, volunteer to help prioritize features based on what makes sense for a holistic finished product or feature. Creating diagrams and flowcharts can help everyone understand how separate features connect and which makes the most sense to tackle first. Make sure that QA and user feedback are also part of the priority process.

Process flowchart

Summary

Having any team member join a project mid-process can be intimidating for all parties involved, but it’s important to be open and understanding. Improving the process and the end result is in everyone's interest, and giving and accepting feedback with an open mind can play an important role in ensuring that the project runs smoothly for everyone involved.

For User Experience Designers, it’s important to respect what’s already been accomplished and established, with the idea that you should tread lightly and make small improvements at first. This will help you gain the team's confidence, while also giving you time to learn about the project and understand the decisions that led up to where it is today. For the stakeholders involved, it’s important to listen with an open mind and take a small step back to reevaluate the best way to include UX in the process moving forward. The above suggestions can help both parties understand what actions they can take to make onboarding a UX Designer a smooth transition.

Nov 28 2018

We're excited to announce that 15 Lullabots will be speaking at DrupalCon Seattle! From presentations to panel discussions, we're looking forward to sharing insights and good conversation with our fellow Drupalers. Get ready for mass Starbucks consumption and the following Lullabot sessions. And yes, we will be hosting a party in case you're wondering. Stay tuned for more details!

Karen Stevenson, Director of Technology

Karen will talk about the challenges of the original Drupal AMP architecture, changes in the new branch, and some big goals for the future of the project.

Zequi Vázquez, Developer

Zequi will explore Drupal Core vulnerabilities, SA-CORE-2014-005 and SA-CORE-2018-7600, by discussing the logic behind them, why they present a big risk to a Drupal site, and how the patches work to prevent a successful exploitation.

Sally Young, Senior Technical Architect (with Matthew Grill, Senior JavaScript Engineer at Acquia & Daniel Wehner, Senior Drupal Developer at Chapter Three)

The conversation around decoupled Drupal has moved past whether or not to decouple and on to common problems and best practices. Sally, Matthew, and Daniel will talk about why the Drupal Admin UI team went with a fully decoupled approach, as well as common approaches to routing, fetching data, managing state with autosave, and some level of extensibility.

Sally Young, Senior Technical Architect (with Lauri Eskola, Software Engineer in OCTO at Acquia; Matthew Grill, Senior JavaScript Engineer at Acquia; & Daniel Wehner, Senior Drupal Developer at Chapter Three)

The Admin UI & JavaScript Modernisation initiative is planning a re-imagined content authoring and site administration experience in Drupal built on modern JavaScript foundations. This session will provide the latest updates and a discussion on what is currently in the works in hopes of getting your valuable feedback.

Greg Dunlap, Senior Digital Strategist

Greg will take you on a tour of the set of tools we use at Lullabot to create predictable and repeatable content inventories and audits for large-scale enterprise websites. You will leave with a powerful toolkit and a deeper understanding of how to use it and why.

Mike Herchel, Senior Front-end Developer

If you're annoyed by slow websites, Mike will take you on a deep dive into modern web performance. During this 90 minute session, you will get hands-on experience on how to identify and fix performance bottlenecks in your website and web app.

Matt Westgate, CEO & Co-founder

Your DevOps practice is not sustainable if you haven't implemented its culture first. Matt will take you through research conducted on highly effective teams to better understand the importance of culture and give you three steps you can take to create a cultural shift in your DevOps practice. 

April Sides, Developer

Life is too short to work for an employer who does not share your values or fit your needs. April will give you tips and insights on how to evaluate your employer and know when it's time to fire them. She'll also talk about how to evaluate a potential employer and prepare for an interview in a way that helps you find the right match.

Karen Stevenson, Director of Technology; Putra Bonaccorsi, Senior Front-end Developer; Wes Ruvalcaba, Senior Front-end Developer; & Ellie Fanning, Head of Marketing

Karen, Mike, Wes, and team built a soon-to-be-launched Drupal 8 version of Lullabot.com as Layout Builder was rolling out in core. With the goal of giving our non-technical Head of Marketing total control of the site, lessons were learned and successes achieved. Find out what those were and also learn about the new contrib module Views Layout they created.

Matthew Tift, Senior Developer

The words "rockstar" and "rock star" show up around 500 times on Drupal.org. Matthew explores how the language we use in the Drupal community affects behavior and how to negotiate these concepts in a skillful and friendly manner.

Helena McCabe, Senior Front-end Developer (with Carie Fisher, Sr. Accessibility Instructor and Dev at Deque)

Helena and Carie will examine how web accessibility affects different personas within the disability community and how you can make your digital efforts more inclusive with these valuable insights.

Marc Drummond, Senior Front-end Developer; Greg Dunlap, Senior Digital Strategist (with Fatima Sarah Khalid, Mentor at Drupal Diversity & Inclusion Contribution Team; Tara King, Project lead at Drupal Diversity & Inclusion Contribution Team; & Alanna Burke, Drupal Engineer at Kanopi Studios)

Open source has the potential to transform society, but Drupal does not currently represent the diversity of the world around us. These members of the Drupal Diversity & Inclusion (DDI) group will discuss the state of Drupal diversity, why it's important, and updates on their efforts.

Mateu Aguiló Bosch, Senior Developer (with Wim Leers, Principal Software Engineer in OCTO at Acquia & Gabe Sullice, Sr. Software Engineer, Acquia Drupal Acceleration Team at Acquia)

Mateu and his fellow API-first Initiative maintainers will share updates and goals, lessons and challenges, and discuss why they're pushing for inclusion into Drupal core. They give candy to those who participate in the conversation as an added bonus!

Jeff Eaton, Senior Digital Strategist

Personalization has become quite the buzzword, but the reality in the trenches rarely lives up to the promise of well-polished vendor demos. Eaton will help preserve your sanity by guiding you through the steps you should take before launching a personalization initiative or purchasing a shiny new product. 

Also, from our sister company, Drupalize.Me, don't miss this session presented by Joe Shindelar:

Joe will discuss how Gatsby and Drupal work together to build decoupled applications, why Gatsby is great for static sites, and how to handle private content and other personalization within a decoupled application. Find out what possibilities exist and how you can get started.


 

Photo by Timothy Eberly on Unsplash

Nov 28 2018

At Lullabot, we’ve been using GitHub, as well as other project management systems for many years now. We first wrote about managing projects with GitHub back in 2012 when it was still a bit fresh. Many of those guidelines we set forth still apply, but GitHub itself has changed quite a bit since then. One of our favorite additions has been the Projects tab, which gives any repository the ability to organize issues onto boards with columns and provides some basic workflow transitions for tickets. This article will go over one of the ways we’ve been using GitHub Projects for our clients, and set forth some more guidelines that might be useful for your next project.

First, let’s go over a few key components that we’re using for our project organization. Each of these will be explained in more detail below.

  1. Project boards
  2. Epics
  3. Issues
  4. Labels

Project boards

A project board is a collection of issues being worked on during a given time. This time period is typically what is being worked on currently, or coming up in the future. Boards have columns which represent the state of a given issue, such as “To Do”, “Doing”, “Done”, etc.

For our purposes, we’ve created two main project boards:

  1. Epics Board
  2. Development Board

Epics Board

ex: https://github.com/Lullabot/PM-template/projects/1

The purpose of this project board is to track the Epics, which can be seen as the "parent" issues of sets of related issues (more on Epics below). This gives team members a bird's-eye view of high-level features or bodies of work. For example, you might see something like “Menu System” or “Homepage” on this board and quickly tell that “Menu System” is currently in “Development” while “Homepage” is in “Discovery”.

The “Epics” board has four main columns, each sorted with the highest-priority issues at the top and lower-priority issues at the bottom:

  • Upcoming - tracks work that is coming up but not yet defined.
  • Discovery - tracks work that is in the discovery phase being defined.
  • Development - tracks work that is currently in development.
  • Done - tracks work that is complete. An Epic is considered complete when all of its issues are closed.
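The “Done” rule above (an Epic is complete when all of its issues are closed) is easy to check programmatically. Here is a minimal sketch in Python; the `epic_is_done` helper is our own hypothetical name, but the `state` field mirrors what the GitHub REST API returns for an issue:

```python
def epic_is_done(child_issues):
    """An Epic is complete when every one of its child issues is closed.

    `child_issues` is a list of issue dicts shaped like GitHub REST API
    responses, where "state" is either "open" or "closed".
    """
    return all(issue["state"] == "closed" for issue in child_issues)


# The "Menu" Epic still has open work, so it stays out of "Done".
menu_issues = [
    {"number": 4, "state": "closed"},
    {"number": 5, "state": "open"},
]
print(epic_is_done(menu_issues))  # → False
```

In practice you would fetch the child issues by the Epic's label and run a check like this in a status script or scheduled job.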

Development Board

ex: https://github.com/Lullabot/PM-template/projects/2

The purpose of the Development board is to track the issues which are actionable by developers. This is the day-to-day work of the team and the columns here are typically associated with some state of progression through the board. Issues on this board are things like “Install module X”, “Build Recent Posts View”, and “Theme Social Sharing Component”.

This board has six main columns:

  • To do - issues that are ready to be worked on - developers can assign themselves as needed.
  • In progress - indicates that an issue is being worked on.
  • Peer Review - the issue has a pull request and is ready for, or under, review by a peer.
  • QA - peer review has passed and the issue is ready for the PM or QA lead to test.
  • Stakeholder review - stakeholder should review this issue for final approval before closing.
  • Done - work that is complete.

Epics

An Epic is an issue that can be considered the "parent" issue of a body of work. It will have the "Epic" label on it for identification as an Epic, and a label that corresponds to the name of the Epic (such as "Menu"). Epics list the various issues that comprise the tasks needed to accomplish a body of work. This provides a quick overview of the work in one spot. It's proven very useful when gardening the issue queue or providing stakeholders with an overall status of the body of work.

For instance:

Homepage [Epic]

  • Tasks

    • #4 Build Recent Posts View
    • #5 Theme Social Sharing Component

The Epic should also have any other relevant links. Some typical links you may find in an Epic:

  • Designs
  • Wiki entry
  • Dependencies
  • Architecture documentation
  • Phases
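A task list like the one in the Epic above can be generated from issue data rather than maintained by hand. Here is a minimal sketch; `epic_task_list` is a hypothetical helper of ours, and the issue dicts assume the `number`, `title`, and `state` fields the GitHub REST API returns (you would fetch them with something like `GET /repos/{owner}/{repo}/issues?labels=Homepage`):

```python
def epic_task_list(epic_title, issues):
    """Render an Epic's child issues as a Markdown task list,
    checking off the issues that are already closed."""
    lines = [f"## {epic_title}", "", "### Tasks"]
    for issue in issues:
        box = "x" if issue["state"] == "closed" else " "
        lines.append(f"- [{box}] #{issue['number']} {issue['title']}")
    return "\n".join(lines)


homepage_issues = [
    {"number": 4, "title": "Build Recent Posts View", "state": "closed"},
    {"number": 5, "title": "Theme Social Sharing Component", "state": "open"},
]
print(epic_task_list("Homepage [Epic]", homepage_issues))
```

Pasting output like this into the Epic's description keeps the bird's-eye view current without manual bookkeeping.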

Phases

Depending on timelines and the amount of work, some Epics may require multiple Phases. These Phases are split up into their own Epics and labeled with the particular Phase of the project (like “Phase 1” and “Phase 2”). A Phase typically encompasses a releasable state of work, or generally something that is not going to be broken but may not have all of the desired functionality built. You might build out a menu in Phase 1, and translate that menu in Phase 2.

For instance:

  • Menu Phase 1

    • Labels: [Menu] [Epic] [Phase 1]
    • Tasks
    • Labels: [Menu] [Phase 1]
  • Menu Phase 2

    • Labels: [Menu] [Epic] [Phase 2]
    • Tasks
    • Labels: [Menu] [Phase 2]
  • Menu Phase 3

    • Labels: [Menu] [Epic] [Phase 3]
    • Tasks
    • Labels: [Menu] [Phase 3]

Issues within Phase 3 (for example) are labeled with both the main Epic ("Menu") and the phase ("Phase 3") for sorting and identification purposes.

Issues

Issues are the main objects within GitHub for describing work and communicating about it. At the lowest level, they provide a description, comments, assignees, labels, projects (a means of placing an issue on a project board), and milestones (a means of grouping issues by release target date).

Many times these issues are linked to directly from a pull request that addresses them. When you mention an issue with a pound (#) sign, GitHub automatically turns the text into a link and adds a metadata item on the issue deep-linking to the pull request. This makes it easy to trace the changes being made back to the original request.
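The same `#` convention makes those cross-references easy to mine from pull request descriptions. A small sketch; the `referenced_issues` helper is hypothetical, but the `#123` pattern it matches is GitHub's standard issue-reference syntax:

```python
import re

# Matches "#123"-style references, the same syntax GitHub autolinks
# in pull request titles, descriptions, and comments.
ISSUE_REF = re.compile(r"#(\d+)")


def referenced_issues(pr_body):
    """Return the issue numbers mentioned in a pull request body."""
    return [int(n) for n in ISSUE_REF.findall(pr_body)]


body = "Builds the view from #4 and unblocks #5."
print(referenced_issues(body))  # → [4, 5]
```

A script like this can flag pull requests that forgot to reference an issue, keeping the PR-to-issue trail intact.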

For our purposes, we have two "types" of issues: Epics and Tasks. As described above, Epics have the "Epic" label, while all other issues have a label for the Epic to which they belong. If an issue does not have a value in the "Project" field, it does not show up on a project board and is considered part of the Backlog, not yet ready for work.

Labels

Labels provide a simple taxonomy for issues.

We currently have seven main uses for labels:

  1. Epic - this indicates the issue is an Epic and will house information related to the body of work.
  2. [name of epic] (ex: Menu) - indicates that this is a task that is related to the Menu epic. If combined with the Epic label, it is the Menu Epic.
  3. [phase] (ex: Phase 1) - indicates this is part of a particular phase of work. If there is no phase label, the issue is considered part of Phase 1.
  4. bug - indicates that this task is a defect that was found and separated from the issue in which it was identified.
  5. Blocked - indicates this issue is blocked by something. The blocker should be called out in the issue description.
  6. Blocker - indicates that this issue is blocking something.
  7. front-end - indicates that an issue has the underlying back-end work completed and is ready for a front-end developer to begin working on it.

Other labels are sometimes used to indicate various metadata, such as "enhancement", "design", or "Parking Lot". There are no set rules about how to use these sorts of labels, and you can create them as you see fit if you think they will be useful to the team. Be warned, though: if you include too many labels, they become useless. Teams will generally only use labels that are frictionless and helpful; the moment labels become overwhelming, duplicative, or unclear, the team will abandon good label hygiene.
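The conventions above (an "Epic" label marks a parent issue, an epic-name label ties a task to its Epic, and a missing phase label implies Phase 1) are mechanical enough to encode. A sketch, where `classify_issue` and the `epic_names` list are our own hypothetical helpers:

```python
def classify_issue(labels, epic_names=("Menu", "Homepage")):
    """Derive an issue's type, Epic, and phase from its labels.

    - "Epic" in the labels marks a parent issue.
    - A label matching a known epic name ties a task to that Epic.
    - No "Phase N" label means the issue belongs to Phase 1.
    """
    is_epic = "Epic" in labels
    epic = next((label for label in labels if label in epic_names), None)
    phase = next((label for label in labels if label.startswith("Phase ")), "Phase 1")
    return {"is_epic": is_epic, "epic": epic, "phase": phase}


print(classify_issue(["Menu", "Phase 3"]))
# → {'is_epic': False, 'epic': 'Menu', 'phase': 'Phase 3'}
print(classify_issue(["Menu"])["phase"])  # no phase label → "Phase 1"
```

Running a check like this over the issue queue is a quick way to garden it: it surfaces issues missing an Epic label or carrying contradictory phases.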

These are just some guidelines we consider when organizing a project with GitHub. The tools themselves are flexible and can take whatever form you choose. This is just one recommendation that is working pretty well for us on one of our projects, but the biggest takeaway is that GitHub is versatile and can be adapted to whatever your situation may require.

How have you been organizing projects in GitHub? We’d love to hear about your experiences in the comments below!

Oct 11 2018

Mike and Matt interview members of the Drupal 8 JavaScript modernization initiative to find out what's going on, and the current status.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
