Mar 15 2024

We look into the challenges and solutions in managing Drupal and other website updates across hundreds of websites, a topic brought to life by members of Lullabot's Support and Maintenance team and Jenna Tollerson from the State of Georgia.

Learn how Lullabot deals with update processes, drastically reducing manual labor while enhancing security and efficiency through automation.

"It's almost like adding another whole developer to your team!"

Mar 05 2024

Component-based web development has become the de facto approach for new Drupal projects. Componentizing your UI has multiple advantages, like modular development, improved automated testing, easier maintenance, code reusability, better collaboration, and more. Because of these benefits, we led the effort to create a component solution for Drupal core: Single Directory Components (SDC).

As part of that suite of modules, we created CL Server. It was initially developed for the CL Components module, the contributed precursor that eventually became the basis for SDC in core.

In coordination with a Storybook plugin, this module allowed anyone to create a catalog of components using Storybook, an open-source project for developing UI components in isolation. It allows developers to build, test, and document their components without constant integration with the rest of the codebase. Storybook promotes consistency across components by providing a centralized location for designers, developers, and QA engineers to share their work, with documentation capabilities to help ensure that components are easily understood and maintained.

Our Drupal integration worked great for a while. But we started to feel its limitations.

Limitations with the current approach to Storybook

Storybook has the concept of decorators: wrappers (often just a div) that surround a component inside Storybook's canvas. These are useful when your component needs some contextual markup.

Take a grid item, for example. Rendered in isolation, it may not look correct; it needs more context. More generally, the CL Server approach coupled the HTML in the Storybook story to the HTML the component generates. We needed more control over the markup rendered in Storybook.

Another issue was that CL Server worked with the YAML/JSON version of stories, as documented in the official docs. YAML and JSON are not programming languages and lack dynamic features that we missed, like setting variables, translating strings, looping over complex structures, and providing non-scalar arguments to Storybook (like objects of type Drupal\Core\Template\Attribute). You cannot instantiate a PHP object in JSON since JSON knows nothing about PHP.
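By contrast, all of this is trivial in Twig. Here is a hypothetical sketch (the markup and class names are invented for illustration) using Drupal's create_attribute() function and the t filter, neither of which has any JSON equivalent:

```twig
{# Hypothetical sketch: things a Twig story can do that a JSON story cannot. #}
{% set items = ['First', 'Second', 'Third'] %}
{# Build a Drupal\Core\Template\Attribute object on the fly. #}
{% set attributes = create_attribute({ 'class': ['card', 'card--featured'] }) %}
<ul{{ attributes }}>
  {% for item in items %}
    {# Loop over a structure and translate strings as we go. #}
    <li>{{ item|t }}</li>
  {% endfor %}
</ul>
```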

Finally, we experienced issues upgrading the necessary addon @lullabot/storybook-drupal-addon after major releases of Storybook. We kept running into backward compatibility issues, tangled dependencies, and the need to support two major versions of Storybook simultaneously during this transition.

In search of an alternative

The initial implementation of CL Server started as an experiment that people liked and used. We started it before Storybook committed to their server-side framework. Still, it worked, thanks to the addon @lullabot/storybook-drupal-addon and some conventions in JSON/YAML stories format for SDC components.

During our research for an alternative, we realized that the Storybook team never meant for us to write JSON stories by hand. They only used JSON because every language knows how to read and write it. We needed to generate the JSON from our stories, which should be written in our native templating language.

In Drupal's case, that’s Twig.

If a server-side application writes the stories in their template language, then stories can import components and generate contextual markup with front-end developers' tools. Using Twig to write stories would solve all of our current limitations.

Moreover, by generating the JSON, we can have Drupal add some finicky metadata that will make the Storybook addon unnecessary. All the problems of upgrading Storybook to newer versions would be solved. By adding this extra metadata, we would only need the native integration for server-rendered components maintained by the Storybook team.
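As a rough illustration (the paths and story id are invented, and the real module's output includes more metadata), a generated JSON story for Storybook's server framework looks something like this, with the server parameters telling Storybook which server-rendered story to fetch:

```json
{
  "title": "Components/Examples/Card",
  "stories": [
    {
      "name": "Default",
      "parameters": {
        "server": {
          "id": "themes/my_theme/components/card/card.stories.twig/default"
        }
      }
    }
  ]
}
```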

But how do we make Storybook, a JavaScript application, understand our Twig-PHP components while avoiding Twig.js?

The proposal

The goal is to create a way to write several stories in a single Twig file, in line with the official Storybook documentation on writing stories:

A story is a function that describes how to render a component. You can have multiple stories per component, and the simplest way to create stories is to render a component with different arguments multiple times.

For this to happen, we created two new Twig tags: {% stories %} and {% story %}.

{% stories %} is a container for the individual stories, and it also lets us add metadata for the file as a whole. This metadata will be included in the transpiled JSON version.

{% story %} also contains metadata for Storybook, but its body is the actual Twig template to render for that particular story. In other words, it contains the Twig code we want to render in the Storybook canvas when the story is selected. Pretty intuitive.

Consider this extremely simplified example:

{% stories my_id with { title: 'Components/Examples/Card' } %}
  {% story first with { name: '1. Default', args: { foo: 'bar' } } %}
    {# Here, use any Twig you want, including custom functions, filters, etc. #}
    ...
  {% endstory %}
{% endstories %}

Two main things are going on. First, the data object after with in both tags will be sent to Storybook as the story metadata. See the Storybook docs for a complete list of the available options.

The second thing is that the code inside the {% story %} tag will be turned into HTML by Drupal and sent to the Storybook application to display. You can paste static HTML, embed a component using {% embed %}, loop over a data array, and more. 

Note that the variables available in this Twig code are the ones added via args or params. Watch this 5-minute video for more details on how to write stories that embed nested single-directory components.
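Putting those pieces together, a story might embed a single-directory component and forward its args. This is a sketch: 'my_theme:card' is a hypothetical SDC, and the 'content' block name depends on how that component defines its blocks.

```twig
{% stories card_stories with { title: 'Components/Examples/Card' } %}
  {% story default with { name: 'Default', args: { label: 'Read more' } } %}
    {# Args such as 'label' arrive as Twig variables inside the story body. #}
    {% embed 'my_theme:card' with { label: label } %}
      {% block content %}
        <p>{{ 'Any contextual markup you need'|t }}</p>
      {% endblock %}
    {% endembed %}
  {% endstory %}
{% endstories %}
```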

After writing your template, transpile it from Twig to JSON. To do so, use the Drush commands provided by the Storybook module. You can compile a single template by passing the path to the template:

drush storybook:generate-stories path/to/your/file.stories.twig

Or, you can have Drush find all the story templates in your system and transpile them one by one. The command for this is:

drush storybook:generate-all-stories

This command also checks whether a JSON story already exists and is newer than its Twig source. If so, it skips transpilation for that particular file, saving resources on repeated runs. This is useful when executing the Drush command continuously in the background every few seconds, so you can "forget" about the transpilation process. You can use watch on Linux and macOS (on macOS, you may need to install it with Homebrew).

watch --color drush storybook:generate-all-stories

Running Storybook

Once our stories are generated in JSON, it's time for the initial one-time setup. This points Storybook at the folders where your stories live and sets up CORS in Drupal.

Check out the module's README for detailed documentation on how to do this. Alternatively, this video demonstrates a full setup in less than 5 minutes.

After setup, you will find your stories in the sidebar when you start Storybook. They will be grouped under a folder named by the `title` metadata property of the {% stories %} tag, and each story is labeled by the `name` property of its {% story %} tag.

Also, note how the Controls tab lets you update the variables passed into Twig in real time. These correspond to the `args` provided in the `{% story %}`.

From here, you can start using your Storybook for your purposes. Take a minute to explore the available add-ons, which add functionality like a language selector, automated accessibility audits, a breakpoint selector, and more.

For more information on this module, view the full demonstration video.

Feb 29 2024

We sit down with Greg Dunlap, Lullabot's Director of Strategy, who not only shares his latest endeavor on Kickstarter but also discusses with us the art and science behind "Designing Content Authoring Experiences."

As we learn about content creation and management, Greg provides unique insights and practical advice, drawing from his extensive experience in the field. (He's been doing Drupal for almost 18 years!)

Also, we learn what Drupal is doing right (and wrong) when it comes to content authoring experiences.

Greg would certainly appreciate your support of his book on Kickstarter.

Feb 16 2024

We embark on a journey, guided by a Tugboat, through the evolving landscape of Drupal development. This episode of the Lullabot Podcast dives deep into the world of Tugboat's seamless integration on Drupal.org. It's a pivotal tool that's redefining the paradigms of building, testing, and deploying Drupal projects.

Our voyage is enriched by the insights of a distinguished Drupal Core Committer, who unveils the myriad development challenges Tugboat adeptly navigates and resolves. Joining the conversation is the Captain of Tugboat himself, offering a rare glimpse into the mechanics behind Tugboat's ability to streamline the development workflow and foster unprecedented levels of collaboration among developers.

And there's more on the horizon—this episode is also the launch pad for our new podcast host!

Dec 04 2023

Manually maintaining a Drupal website is time-consuming, especially for a small team already busy with new features and bug fixes. When the Drupal Security team announces a new security update, this requires one or more members of the development team to:

  1. Stop what they are currently doing
  2. Apply the update and create a new pull request
  3. Test the security update

Add in other non-critical updates, and a development team will spend a significant amount of time on something that, while necessary, doesn't seem to add much value. If you have multiple websites and codebases, this problem becomes even worse. It could easily become someone's full-time job to do nothing but package updates.

Why you should automate these updates

You need to update your codebase. It's non-negotiable. Updates keep your website secure and might even fix bugs your team has been wrestling with, or bugs they haven't yet identified. And you can make the process much easier, without a huge administrative lift every time.

Automating your updates solves a lot of problems. When set up correctly, automation will update each package independently, one pull request at a time, making identifying and fixing regressions easier. If you combine automated updates with end-to-end testing, you will have more confidence that new problems are not being introduced.

One of our support clients has twelve different Drupal repositories (as well as multiple WordPress websites) with an in-house team of three developers and one designer. Doing only security updates on these websites took us 30 hours per month. When the time came to upgrade from Drupal 8 to Drupal 9, there was a large backlog of non-security updates to perform, which slowed down the process.

For each update, the client also did extensive QA testing and scheduled that testing weeks in advance. If they discovered any problems, all other releases would be blocked. It would take another few weeks to review the next round of fixes because of the scheduling, even if it were a one-line change.

Once these updates were automated using Renovate, it was just a matter of keeping up with all the latest changes, improvements, and bug fixes released with minor version updates. With end-to-end and visual regression testing, a developer only needs to be involved when an automated test or build fails. Having Renovate (properly configured) is like having an extra developer on your team.

What is Renovate?

Renovate is a tool for automating dependency updates in software projects. It can scan your repositories, identify out-of-date packages, create branches, and submit pull requests for each one. Renovate supports a wide range of programming languages and platforms, including PHP and Node.js.

To use Renovate in its basic form:

  • Set Up: Install the Renovate app from your Git hosting provider's marketplace, or self-host the Renovate CLI.
  • Configuration: Create a configuration file, renovate.json, in the root of your repository. This file dictates how Renovate will behave – which dependencies to update, how often, etc. See the example below.
  • Running: Execute the Renovate tool, which will scan your repository for outdated dependencies based on your configuration.
  • Review & Merge: Renovate creates pull requests (or merge requests) for each dependency update. Review the changes and merge them if everything looks good.

You can leverage many other advanced features and configurations, such as grouping multiple dependency updates into a single PR, scheduling when updates should occur, and more.

Setting up Renovate varies between Git hosting providers. GitHub is the easiest because Renovate is available as an app in the GitHub Marketplace.

Are there other options?

We've also tried Violinist.io and Dependabot.

Dependabot's configuration was very limited for our needs, and some features have been removed since GitHub acquired it. It is also harder to use outside of GitHub: configuring Dependabot for Bitbucket or GitLab takes more work because you must set up custom runners or pipelines.

We used Violinist.io on some projects that were not hosted on GitHub.

Our experience has shown that, over time, Violinist.io would fail to create automated pull requests whenever there were minor issues with Composer. Even something like an out-of-date patch, unrelated to the package being updated, could halt the entire process. Once it fails, a backlog of pending updates builds up. We haven't had these issues with Renovate, thanks to its flexible configuration.

Our Renovate configuration and preferences

Renovate integration is free, and once it is connected, it opens an onboarding pull request with the bare-minimum configuration needed to start looking for package updates. We build upon this base configuration to group some packages together and to set up rules around automerging.

For example, we allow automerging during off-peak hours to avoid rebasing pull requests when others are actively working on the site. We also set branch protection rules that require certain tests to pass before Renovate is allowed to automerge. We only allow minor and patch releases to automerge because we want a developer to review any major version upgrade.
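A package rule like the following sketch (adjust to your own policy) is one way to express that last constraint, telling Renovate to automerge only minor and patch updates:

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```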

Here's an example of a renovate.json we've been using on our projects.


{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base", "group:symfony"],
  "timezone": "America/New_York",
  "automergeSchedule": ["every weekend"],

  "rebaseWhen": "auto",
  "platform": "github",
  "baseBranches": ["main"],
  "prConcurrentLimit": 2,
  "rangeStrategy": "bump",
  "branchPrefix": "renovate/",
  "automerge": false,
  "packageRules": [
    {
      "matchManagers": ["composer"]
    },
    {
      "matchManagers": ["npm"]
    },
    {
      "matchPackagePrefixes": ["stylelint"],
      "groupName": "Stylelint packages"
    },
    {
      "matchPackageNames": [
        "drupal/core",
        "drupal/core-recommended",
        "drupal/core-composer-scaffold"
      ],
      "groupName": "Drupal Core"
    },
    {
      "matchPackageNames": ["lullabot/drainpipe", "lullabot/drainpipe-dev"],
      "groupName": "Drainpipe"
    },
    {
      "matchPackagePrefixes": ["gulp"],
      "groupName": "Gulp packages"
    },
    {
      "matchPackagePrefixes": ["jquery"],
      "groupName": "jQuery packages"
    }
  ]
}

Conclusion

There is no end to updating packages. Configuring Renovate allows more time and expertise to be spent on improving your website instead of constantly performing maintenance.

Nov 20 2023

In our previous article, we went over the basics of how Drupal handles revisions and content moderation. But one of Drupal's strengths is what we call "structured content" and its ability to implement complex content models, as opposed to a big blob of HTML in a WYSIWYG field. Entities can have lots of different fields. Those fields can refer to other entities that also have lots of other fields. It is easy to establish content relationships. 

Even with Drupal websites that don't have complex content requirements, there are almost always a few entity reference fields that carry a lot on their shoulders. An article might reference other related articles. An album might reference an "artist" content type. A biography might reference an "address" content type.

And to make things easier for editors, Drupal also has tools that allow inline editing of referenced content within the context of a parent entity. But this can set unexpected traps. When using content moderation and revisions with structured content, there are some dangers you should be aware of.

Implementation approaches for structured content

The more structured our content, the more responsibility we take on to implement that structure well.

We won't go over the details of why structured content is preferable on modern content-rich sites and will assume you have already decided to move away from the everything-in-a-single-field approach as much as possible. For our purposes, “structured content” will mean a set of relationships between “components” that constitute the pieces of your content.

In Drupal, when we want to create entity relationships, there are several implementation options, each with its pros and cons. We will focus here on two of the most popular implementation approaches: Entity reference fields and Paragraphs.

Entity reference fields are probably the most common way of relating two entities. Drupal core uses them extensively: assigning Users as authors of Nodes, file and image media fields, Taxonomy Term fields, and so on. This means the “components” of your content will likely be entities of some sort, and you will probably use entity_reference fields to tie your components together.

Another prevalent approach to creating structured content is the Paragraphs contributed module. The Paragraphs module lets you pre-define your components (called “paragraphs” under the hood), and by doing so, you ensure their appearance is consistent when rendered. Content editors can choose on-the-fly which paragraph types they want to use when creating the page, and you know the components will always look the same. We will get into more details about this option later.

Challenges when moderating structured content and inline editing

Consider one of the simplest and most common content modeling scenarios: a page (node) with an entity_reference field to another node. Let’s assume the main page is a “Bio” profile page, and the component we are interested in is called “Location.”

Note on implementation choices for your components: Using nodes as the data storage mechanism for components that don’t have a standalone version (page) is common but requires additional contributed modules, such as micronode, rabbit hole, etc. Other approaches and modules that don’t use nodes are equally valid, such as using core Content Blocks, the micro-content module, or even custom entities that you create in code. However, for the purposes of this example, all of these approaches are equivalent since they all use an entity-reference field to relate the host entity with the target entity (component).
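In configuration terms, the field relating Bio to Location is an ordinary entity_reference storage targeting nodes. A minimal sketch (the machine name field_location is hypothetical, and a real export carries additional keys such as id, langcode, and dependencies):

```yaml
# field.storage.node.field_location.yml (hypothetical machine name)
field_name: field_location
entity_type: node
type: entity_reference
settings:
  target_type: node
cardinality: 1
```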

By default, Drupal core doesn’t provide a great UX for inline editing. For example, the entity reference field only comes with an autocomplete widget by default, which means that when creating a Bio node, we aren’t able to finish the page unless the Location we want to use is already created.

We can add inline editing of referenced entities through different contributed modules, and Inline Entity Form and Entity Browser are the most popular solutions. If we configure Inline Entity Form, for example, we will get a node form similar to the one below: 

The whole UX could still arguably be improved, but for the sake of this example, let’s assume this is what our editors usually work with. After creating a first published version of our page, we would have something like: 

Sometime after this page is published, the editor needs to make a few modifications, which will require review and approval before going live. Content moderation to the rescue! They can just create a new draft, right?

If we don’t pay close attention, everything seems to have worked as expected: when we save the form, we see the /node/123/latest version of the page, which is a forward (unpublished) revision, and it indeed contains all the changes we expect to be approved before they go live:

However, if we log out and visit the currently-published version of this page, we see that the new office location for the bio is already live. That's not what we wanted. 

We edited a draft. So how did the changes leak to the live version?

Well, it turns out that is indeed the expected behavior. Here is what happened.

When the editor clicks the “Edit” button, they are opening the edit form of that referenced node entity. Because entity_reference fields only store the entity type and entity ID, the action being performed is really “modify the default revision of this referenced entity, and save that as a new default revision.” This is the same as going to /node/45/edit and editing the location node there. Editing the referenced entity like this is almost never what you want in the context of an inline form because it will:

  • Affect this (re-usable) component everywhere it is used
  • Change the location entity's default revision even when it isn't used elsewhere, so published content referencing it will reflect the changes immediately

How to mitigate or reduce the risk of this happening on your site

There is no one-size-fits-all solution for these dangers, but you can minimize the risk.

Train your editors

If your editors understand how revisions and moderation workflows work, they can more easily work around CMS limitations when necessary. For example, in this case, it might be enough just to remove the reference to that particular Location component and create a new node instead. When the main page draft is published, it will display the new node instead of the old one. Admittedly, this is not always possible or desirable in all scenarios and teams.

Avoid inline editing when moderation workflows are in place

If editors have to go to the standalone form to modify the referenced content, this might make it more visible that the changes could affect more instances than desired.

Use helper contributed modules to reduce confusion

Some contributed modules help editors better understand the repercussions of their editorial changes. For example, the Entity Usage module can be configured to display a warning message in the edit form when we are altering content referenced from other places. Additionally, the Entity Reference Preview module helps editors preview unpublished content whose referenced items also have forward revisions.

Architect your implementation to account for the scenarios your editors will find

Maybe none of the mitigation ideas mentioned above are enough for you, or you need a more robust way to guarantee your inline edits in moderated content will be safe regardless of the editor’s skills. In this case, you might want to consider stopping the use of entity_reference fields to point to entities as components and start using the Paragraphs module instead.

What is different with Paragraphs?

The Paragraphs module still creates entities to store your components' data, but the difference is that it enforces that a given revision of the component (paragraph) is always tied to a specific revision of the host entity (parent). In fact, this is often referred to by developers as a “composite entity,” meaning the paragraph itself is only ever expected to exist “in the context of its parent.”

This solves our problem of inline editing moderated content nicely: when we create a new draft of the parent content, we also generate new draft revisions of all components on the page, and they always travel together through the moderation workflow.

This approach also has downsides you should consider when choosing your implementation model. In a Paragraphs scenario, your components can’t be re-used directly: you need to create a one-off copy of a given component every time you place it on a page. Depending on your content model, deep nesting of components inside components can make the inline-editing UX tricky. And on sites with a high volume of data, this can lead to large database tables, since all revisions of all components are created independently.

Conclusion

If you have to take one thing from this read, it should be “be careful with inline editing of referenced content in entity_reference fields when using content moderation.” This is a high-risk scenario that you should discuss early in the project with all stakeholders and plan accordingly. Unfortunately, there is no one-size-fits-all solution, and you should create your Drupal architecture to best serve the use cases that matter for your site users.

Page-building strategy is a complex subject that we haven’t explored in depth here either. Layout Builder options, embedding content in WYSIWYG areas, re-usability of components, media handling, translation workflows, theming repercussions, decoupled scenarios, etc., are all topics you should have in mind when deciding on a page-building approach. Understanding how revisions, entity_reference fields, and content moderation all play together is a good start.

Lullabot has helped plan and build editorial workflows for organizations of all shapes and sizes so that Drupal helps them work toward their content goals. If you want proven help navigating these issues, contact us.

Nov 16 2023

Host Matt Kleve assembles a crack team of Lullabot experts from various company departments to share their hands-on experiences and insights into how innovative technology influences and enhances our field.

We discuss integrating AI into coding, design, and tasks like writing emails and RFP responses, along with the broader implications for the future of web development.

Join us as we navigate the complexities, challenges, and vast potential of Generative AI in shaping our world.

Nov 02 2023

Dive into the heart of Drupal GovCon with host Matt Kleve as he captures the energy and insights from attendees who have experienced the conference firsthand. We showcase a mosaic of perspectives that embody the spirit and community of GovCon, bringing you the voices that animate the world of Drupal.

Nov 01 2023

For Lullabot's first sponsored contribution, we've been focused on improving Drupal's main navigation. We decided on this direction for two reasons. First, it's one of the most visually impactful areas of the Admin UI. Second, its redesign will support other active initiatives to improve Drupal's Admin UI.

Since our last update, we've been working on that redesign of the main navigation, or "the toolbar." When completed, it will complement other efforts like the Dashboard and Field UX initiatives. Our overarching goal is to improve the usability, accessibility, and design of the navigation system to provide a better experience for site builders and content editors.

We based our initial designs and prototypes on research into competitors, industry standards, and previous UX studies on the topic. We also gathered insights from the Gin admin theme, which helped us validate several hypotheses. At a high level, this research suggested a left/top/top layout as the easiest to scale and scan.

Multiple rounds of user testing

We approached this design work by testing, iterating, and testing again. Our goals were to get fast, iterative feedback before jumping into development. Multiple rounds of user testing helped ensure we were going in the right direction. As the saying goes, thirty hours of development time can save you three hours of planning, and we'd rather that be flipped around.

We used a combination of card sorting, surveys, and moderated usability tests to collect feedback, which was used to iterate on the toolbar over several months. We plan to "rinse and repeat" this process until there is a contributed module ready for Drupal Core.

User testing: round 1

An HTML mockup served as the testing ground to gauge user satisfaction. Our first round of testers provided overall positive feedback on the new collapsible, vertical layout. All participants preferred the new navigation over the old, though they also provided valuable insights for iteration.

They got us thinking about the words (Drupalisms) we use and how we might use plain(er) language to reduce onboarding time for less experienced users. As a result, we introduced separate task groups tailored for editors and site builders, enhancing the overall user experience for these specific audiences. The original idea to add these groupings came from discussions years ago and then crystallized into this proposal.

User testing: round 2

Ahead of our second round of testing, we felt optimistic. Our initial prototype was well received, and the menu groups made sense to users.

For round two, David Rosen, a User Experience Analyst from the University of Minnesota, organized a group of experienced Drupal site builders and content editors accustomed to the current toolbar implementation. This group of testers wasn't as enthusiastic about the big change because it meant modifying their current setup.

However, they were able to orient themselves quickly. They completed tasks arranged for them in the testing environment and generally agreed that the new layout could help with onboarding less experienced users.

This round of testing taught us that we'll have to work on the change management and communication strategy to help align more experienced users.

User testing: round 3

During DrupalCon Lille's Contribution Day, we conducted seven pop-up tests to evaluate the mobile implementation of the new toolbar. Participants had varying levels of experience using Drupal Core's admin interface. These in-person tests allowed us to observe how users interacted with the menu on their mobile devices, something that can't be done on a video call.

The mobile testing highlighted users' ability to navigate the admin interface but revealed a few concerns about font size, spacing, and user expectations about the "expand sidebar" feature. So, another round of usability testing is in the works!

Where we are now

Creating a contributed module

On the development side, we've been focused on converting the HTML mockup to a new menu module. We don't want to reinvent the wheel. Instead, we'll try to reuse things that exist in Drupal already, like blocks. We'll provide tools that people already know how to use and customize while also creating a manageable solution for site builders.

When this is complete, we'll start seeking reviews and obtaining approvals from all significant contributors to incorporate it as an experimental module in Drupal Core, hopefully when 10.3 is released.

Exploring a contextual top bar

During this work, we realized the main toolbar doesn't address every requirement. For example, we must account for space needed by contributed modules like Environment Indicator or core modules like Workspaces. We're looking at contributed modules and Gin to understand how this could be solved.

At the same time, we need to understand the most common customizations website admins apply to provide the best experience for their clients. So, we designed a short survey. It will be distributed to agency website admins responsible for maintaining client websites.

This work will inform our ideas for a sticky, contextual top bar that holds all the extras that users will need based on where they are in the interface. This work is in progress and will soon be available for testing.

Accessibility review

Now that the markup is nearly completed, we are focused on making everything accessible. We've opened an issue in the queue that serves as the parent to the accessibility issues we've collected so far.

What we heard at DrupalCon Lille

The Driesnote at DrupalCon Lille was the first public presentation of the new toolbar to the Drupal community. It was the first time people got a glimpse of what it looks like and how it functions. Attendees were surprised that they could actually help test it during contribution days.

Overall, the new toolbar was well accepted. People appreciated our approach, which emphasized research, usability testing, iteration, and more rounds of usability testing. Combined with feedback from the larger Drupal community, we've made a lot of progress.

Our global community of makers and users create the code, solve the problems, and form the bonds that sustain it.

Drupal.org's Open Web Manifesto

Alongside Cristina Chumillas, a handful of Lullabots from all disciplines, individuals from the Drupal community, Acquia DAT, 1xInternet, and Skilld have been collaborating to improve the admin UI. We are so grateful for how people have self-organized and supported this work. Contact us if you think you would like to contribute, too.

Oct 20 2023

You've likely heard repeatedly about the impending end-of-life for Drupal 7 and the potentially daunting task of updating to the latest version.

We're joined by the esteemed Matt Glaman, a prominent Drupal contributor and Principal Software Engineer at Acquia, who introduces his latest innovation: Drupal Retrofit.

This tool serves as a compatibility layer, enabling you to execute legacy Drupal 7 code on your Drupal 10 site. The goal? To potentially ease your upgrade process and expedite your transition to modern Drupal.

Oct 05 2023

Join us as we sit down with Joyce Peralta from McGill University to explore their world of "Web Standards."

We'll discuss how McGill's strategy for maintaining its impressive roster of 1,500 websites has grown to influence more than just the structure of the sites. These digital standards, developed, enforced, and cultivated by their web community, have become a cornerstone of the university’s technology framework.

Together, we’ll unpack the 9 standards that are fundamental to McGill University's web success and explore how they have significantly transformed its digital landscape, and how you might be able to do something similar in your organization.

Sep 21 2023

It’s back and more exciting than ever! We are thrilled to announce the highly anticipated return of Drupal GovCon, the third biggest Drupal Conference in the world! This notable event is returning to the Washington DC area on November 1 & 2, marking a lively return to in-person Drupal camps.

Join host Matt Kleve as he engages in insightful discussions with accomplished organizers Nina Ogor and Christoph Weber, unveiling the plans and expectations for the upcoming conference.

Sep 01 2023

Most people don't love their content management system. 

In my experience, the number one complaint of organizations looking to replace their current CMS is, simply, “Our editors hate it.”

Deane Barker, Real World Content Modeling

At best, they tolerate it with ambivalence. But many use their CMS with teeth firmly clenched, pushing through the pain because they have no choice. They might have to perform complicated UX gymnastics to do their jobs, but at least it sort of works.

But do they really hate the technology? In most cases, they hate that their CMS was built without their needs and use cases in mind. The platform they use every day was planned without their input, based on opaque business requirements dictated from on high.

If you are rolling out a CMS implementation that costs millions of dollars, wouldn't you want editors to engage with it? You want people to create content. If they are filled with dread every time they log in, they aren't going to do their best work.

Moreover, there is no single "best" content authoring experience. There is only the best content authoring experience for the authors you are working with, the organizations they are working for, and the content they are creating.

Given the parameters we're working with, what can we do that encourages better content for the use case at hand? These answers will be different from organization to organization.

How do we create a better authoring experience?

We need to understand the contexts we enter into and all the forces at play in a given organization. You do this with a process similar to a web design process.

  • Research
  • Prototyping
  • Implementation
  • Testing
  • Ongoing support

This makes sense because content authors are users, too, just like the visitors to the website. They should be given the same care and consideration. For this article, we'll focus on research and implementation.

Researching the needs of content authors

A lot has been written on the benefits of user research. But there's another goal with content authors: to get them invested in the process. There can be a high level of distrust between content authors and the people who run the CMS, and you need to bridge that gap. Sit down and listen to people. Some have never been asked, "What do you need?" or "What problems do you have?"

Give them a seat at the table with the other stakeholders. If you want people not to hate the CMS you are building, they must be part of the process. 

Interviews

Sit down and talk with people. Interviews provide the most direct feedback.

  • Learn day-to-day frustrations
  • Identify functionality that is missing or vital
  • Establish big-picture goals and priorities

If you hear multiple people talking about, for example, document management, that's something you'll need to figure out. Above all, connect with people and get them excited about what you are building.

Who do you interview? 

Sometimes, this is easy, especially for a smaller organization. But for larger organizations, you can't just interview everybody. You need to get a broad range of viewpoints representing a cross-section of experiences. You want full-time web professionals but also people who only dabble in the CMS. You want your biggest cheerleaders and your crankiest curmudgeons. You want the biggest departments represented but also the smallest.

How do you conduct the interview?

Try to only talk with one person at a time. Follow the protocol, but chase down threads when you need to. Always be digging and asking "why." If you hear something broad or vague like "we need more design freedom," don't take it at face value. People often offer solutions but don't know their actual problem. Your job is to uncover the root problem. Ask "why" at least five times until you get to the actual problem.

Someone might say that they need a color picker so they can make their text multiple colors. But what are they trying to achieve? Do they really need a way to highlight passages of text? Or maybe their school colors are green and gold, and they need a way to represent them. These are very different problems. A color picker probably isn't the best solution.

Finally, have them show you. Conduct the interview in their workspace so they can grab their computer and show you the problem without having to describe it. You might hear things like "uploading images is a pain," but that won't tell you much. Being able to see it in action is important.

Interview matrix

Unless you're a team of one, you won't be in every interview, so you'll need to summarize interviews for others. An interview matrix, a grid that maps each interviewee to the themes, needs, and pain points they raised, lets the whole team scan the findings at a glance.

The matrix is also helpful in identifying the users who are engaged and excited about the process. These people will take the time to test your prototypes and offer future feedback.

Implementing the authoring experience

Now, we take the insights gained and begin to implement them. Remember, there are no "correct solutions" that are right for everyone, only the right solution for a specific context and a specific team, both within a particular organization.

But there are some principles we can follow. We'll come at this from two angles: content entry and content administration.

Content Entry

People are used to filling out forms online, but CMS forms have several differences from public forms. Content authors will visit and fill out CMS forms hundreds, maybe thousands of times. Public forms are meant to be used once. 

And while public forms are usually kept simple because anyone should be able to fill them out, CMS content forms can have dozens of fields and ways of relating to other content. Complex content needs often lead to complex forms. Training and documentation are required.

We want to reduce cognitive load, which is the mental effort needed to process and learn new information. John Sweller developed cognitive load theory in the late 1980s. It holds that individuals can only hold so much information in working memory at once. If you present more than they can hold, learning becomes more difficult. This concept applies directly to UX and design.

The more cognitive load needed, the more difficult something will be to use, and the less users will carry over from session to session. 

Cognitive load isn't a static, set-in-stone constant. The more familiar someone is with a system, the less cognitive load it adds to their work. If your authors are web professionals who have been using CMSs for a long time, they are in a different starting place than someone unfamiliar with web concepts.

But we can't reduce cognitive load so far that content entry no longer meets the complex needs of the organization. A form with a title and body field is simple but inadequate.

We want to build a system appropriate for your user base while keeping it as simple as possible.

Here are some guidelines to do this.

Consistency

  • Where information is placed. Keep taxonomy fields all in one place. If a field appears on multiple content types, keep it in the same general area on each form. Metadata should not sit in the sidebar on one form but at the bottom of another.
  • Widgets and other paradigms. Don't use three different layout technologies for three different content types. If a field uses a dropdown in one content type, don't make it an autocomplete in another.
  • Terminology. Refer to items on a form using the exact words you use across the rest of the system.

Words

Help text and descriptions are vital to providing authors with important context. Often, this is left to developers. This is not ideal.

The core of the interaction is the questions asked and answers given. This is why the words we use in the form, and not how it looks, are the most important part of the design.

Jessica Enders, Designing UX: Create Forms That Don’t Drive Your Users Crazy

Words on forms are everywhere. They can be valuable in helping authors navigate your interface. Or not. Know your audience and write for those needs.

Some guidance: 

  • Keep it simple. The fewer and clearer the words, the better. The more diverse your group of content authors, the more attention you need to pay to clarity.
  • Use the terminology of the users. If all your content authors share a certain technical vocabulary, don't hesitate to use it.
  • Avoid the terminology of the CMS. Drupal developers might use words like "node" or "entity," but they don't necessarily make sense to authors. Or, they mean entirely different things.
  • Eliminate directional language. Don't say something is "above" or "to the right" of something. Relative positioning can change over time. Updating the text to match can be a maintenance headache. The position also depends on context, like screen size.
  • Focus on the "why" rather than the "what." Saying what something is isn't as important as saying why it's important and how it impacts the content being created. The less knowledgeable your users are, the more important this is. Why does this field need to be filled in? How will your content look if you do certain things? What impact will it have on site visitors? 

Ordering

During the design phase of a project, we create a hierarchy of content. The most important information appears at the top of the page. When authors enter this content, it makes sense that they should enter it in the same order it will appear on the page.

When authors enter content into a system, they often visualize what it will look like. This is especially true for people who are not experienced content authors. And so, fields should be laid out in the order people expect. It's more intuitive. It helps them focus on the task at hand.

Also, ask how often a field gets used. If something is at the top of the page but is optional and only used about 5% of the time, most authors will be scrolling past that field and ignoring it. In this case, putting that field lower in the form hierarchy makes sense. This is why you do research.

At Lullabot, we create a spreadsheet for every project that outlines the content types and their fields and properties. This is ordered based on how we expect the fields to appear on the node edit form. This gives developers an easy guide to follow.

We could talk about other things, like error handling and field validation, but the above guidelines will go a long way to get you started.

Content administration

Content administration is the support of content that has already been created or is in the process of being created. How do people find and manage the content they need? What is the workflow?

Editorial dashboards

An author's experience starts as soon as they log on to the site. What is the first thing they see? Sometimes, it's a dashboard with the most recent content. But what if we tailored it specifically to their needs? What if they were greeted with a personalized experience that sets the tone for their session?

For the American Booksellers Association, we created a platform for bookstores to manage their online presence, which included e-commerce. The dashboard we designed doesn't just have a list of content but also custom links to look at orders placed and orders currently being processed. There are also custom menus tailored to what they do the most.

What's appropriate for a dashboard will change from organization to organization and from role to role within a single organization. But here are some things you might want to include.

  • Work in progress. Content the author is currently working on or has recently worked on.
  • Content for review. Other authors have created work that needs to be reviewed for accuracy and style.
  • Alerts. These can highlight accessibility or SEO issues.
  • Links to documentation and support.

Choosing a content type

Content types exist for a specific business purpose, but many authors will choose the wrong one for the content they are creating. This might be out of habit. They might not understand the differences. One content type might be more flexible, so they are trying to get around some guardrails.

Some choices are easy, like between an event or a blog post. But some are not so obvious. For example, what is the difference between news and press release in the image below? 

This also highlights the importance of words. These descriptions talk about the "what" but not the "why." Clarification of the "why" of a content type can guide authors toward making better decisions.

Here is an improvement that provides more context. In addition to a short description, there is a longer description with clarifications. For example, press releases cannot embed images. That's important information for authors to have. Thumbnails help them visualize how the finished product will look. Links to current examples help authors learn what other people have done.

However, the needs of new users are different from those of experienced users. Those extra descriptions take up a lot of real estate. That means more scrolling. Users who have been there a while know what they need and don't need the extra help. So, we added a toggle.

Again, context matters, and context can change over time.

The work never ends

There is so much more we could talk about in regards to improving a content authoring experience, like accessibility and support. If you want good content, you'll want to craft an experience that your content authors don't hate. That takes thought, collaboration, intention, and continuous improvement.

If you want to get a start on improving the flow of content in your CMS, contact us.


Jul 12 2023

Drupal's Webform module is great. It allows users to create forms with all sorts of field types through a UI that's easy to use. It's a great friend to marketers. We use it to power our contact form, collect webinar registrations, and as a lead capture for ebooks.

But data trapped in Drupal doesn't do much good. Data needs to move and flow. It needs to be exported and imported and analyzed and followed up with. It needs to be used.

You can view Webform results on your Drupal website and export them in various formats, such as a CSV or tab-delimited file. But depending on the number of your forms and what you want to do, that could be time-consuming. You can automate it with custom plugins and modules, but that takes development time, and anything you do will need to be maintained to keep up with API changes.

But there is a way to automate exporting your Webform data and sending it to other services, all with zero code. Any marketer can do it. We've been using it to streamline some of our processes.

The Webform plugin that makes this possible

You don't need any extra modules to get this to work. First, go to the Settings→Emails/Handlers of a Webform. Click on Add handler.

Find Remote Post and add the handler.

Give it a good title. Since we'll be sending results to Zapier, we'll call ours "Zapier." The key field to focus on is Completed URL. You can do other things, but we'll keep this as simple as possible while still being useful.

Next, we need to go to Zapier.

Getting the data into Zapier

You can use Zapier for free for many things. However, for what we want to do, you'll need a paid account because it requires a premium feature. You can usually get a free trial of the premium features to see if it works for you.

Zapier lets you create automations called Zaps, which have Triggers that will make them run certain Actions. A single Trigger can have multiple actions attached to it.

First, we'll create a Zap and choose a Webhook for our Trigger. Choose the Catch Hook event as shown below.

Next, you'll have the option of entering a Child Key. We'll ignore this because our form is simple and we want to catch all of its data. You might want to use it if you only need a subset of the data from a very large or complicated form.

After you click Continue, Zapier will give you the webhook URL and ask for a test. This is the URL you will put in the Completed URL field of your Webform.

After you put the webhook URL in your Webform handler, a new field labeled Completed custom data will appear. You can use this to send Zapier hardcoded data or pass along data from tokens. Why would you want to do this? Maybe you want to use the same Zap for multiple Webforms and use this to pass the webform name or a tag you want to attach to a CRM entry. For now, we're going to ignore it and keep things simple.
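As an illustration, the Completed custom data field accepts YAML key/value pairs, and values may include tokens. A minimal sketch, with hypothetical keys and a tag value; verify the token names available on your own site:

```yaml
# Hypothetical extra data appended to the Remote Post payload.
# Values can use Webform tokens; confirm these against your site's token browser.
source_form: '[webform:title]'
crm_tag: 'ebook-download'
```

With something like this in place, one Zap can branch on `source_form` or pass `crm_tag` straight through to your CRM.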

Save your handler and then visit your Webform on your Drupal website. Fill it out and submit it with test data. Go back to Zapier and press the Test trigger button. You should see your test data on the screen, and if everything looks good, click Continue with selected record. You'll use this data to help set up your actions.
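It can help to know roughly what the Remote Post handler submits when a form is completed: a flat set of your form's element values, plus any completed custom data. The Python sketch below builds such a payload so you can exercise your Zap without repeatedly filling out the form. The field names are hypothetical stand-ins for your form's actual element keys, and the exact encoding depends on your handler's settings:

```python
import json

# Hypothetical payload resembling what a Remote Post handler sends.
# Your real keys are whatever element keys your Webform defines.
payload = {
    "name": "Test Person",
    "email": "test@example.com",
    "newsletter": "1",  # checkbox values typically arrive as strings
}

body = json.dumps(payload)
print(body)
```

You could POST this body to your Catch Hook URL with curl to fire the Zap's trigger on demand while you tune your actions.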

What do you want to do with your Webform data? For us, we send it straight to our CRM as a lead. Sometimes we add it to a Google Spreadsheet. Maybe you want to automatically add a reminder to your calendar to follow up with this person. Or maybe all three at the same time. If/then blocks are available in Zapier, so you can check if the newsletter value is "1" and send it to the appropriate newsletter list. This is especially useful if your email marketing provider doesn't have a Drupal Webform plugin.
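The if/then branching a Zap performs is simple enough to reason about in code. Here is a sketch of the decision logic described above, using hypothetical action names:

```python
def route_submission(submission):
    """Return the hypothetical Zapier actions to run for one submission."""
    actions = ["create_crm_lead"]  # every submission becomes a CRM lead
    if submission.get("newsletter") == "1":
        actions.append("add_to_newsletter_list")  # opted-in contacts only
    return actions

print(route_submission({"email": "a@example.com", "newsletter": "1"}))
# → ['create_crm_lead', 'add_to_newsletter_list']
```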

You still have your data inside Drupal if you need it, but now it's doing more work for you.

Go forth and make your life easier

We won't cover setting up specific actions because there are thousands of them, and Zapier already has excellent documentation. Now that the data has reached Zapier, you are free to do anything you want, and you haven't written any code. Most of the major CRM providers have Zapier integrations. Look for what you want to do on their website.

But don't feel like you have to be locked into Zapier. Other platforms aim to do what Zapier does, like Make and IFTTT. Test things out. Play around.

If you have your own workflows and automations that have made your life easier, we'd love to hear about them. Feel free to reach out on Twitter or LinkedIn.

Jul 07 2023

Host Matt Kleve discusses a recent Lullabot Drupal project and its transformative impact on government websites. In this episode, we have an exciting conversation with a representative from the Office of the CIO of the State of Iowa and the Director of Strategy at Lullabot, as we explore the importance of structured content and its role in the groundbreaking work being done on Iowa.gov, powered by Drupal.

Join us as we uncover the key insights, strategies, and successes behind implementing structured content using Drupal on Iowa.gov, showcasing how it has revolutionized how information is organized, accessed, and delivered to citizens.

Jun 28 2023

For our first sponsored contribution time, we will focus on improving the usability of Drupal’s administration UI. We chose this because there are deliverables we believe we can achieve within six months that will make Drupal better and benefit our existing clients.

The good news: we don’t need to start from scratch. Lots of work has already been done in the Admin UI & JavaScript Modernisation initiative, so together with Sascha Eggenberger and others, we defined some ideas to make Drupal’s administration more usable. A lot of those efforts focused on making Claro the default admin theme, but several ideas were implemented in the Gin admin theme, which allowed us to evaluate which should be included in Drupal core.

But for those ideas to land in core, they needed research and design, which could fill up an entire six months. We didn’t want to get to the end of this period and not have something concrete and usable. So, together with Lauri Eskola and Andrew Berry, we came up with a plan to improve Drupal’s administration UI experience.

That required research is now underway. So far, we’ve defined things like User Personas, User Journeys, and the main Information Architecture, along with creating prototypes that test new navigation patterns.

Several key projects to improve the admin UI have already kicked off or have been going on for a while, and we plan to coordinate our efforts. For example, several prototypes created for the Field UX have been reused to test the initial layout improvements. We also tested moving the Toolbar to the left and several new navigation patterns.

These were covered during the Initiative Leads Keynote at DrupalCon Pittsburgh, but let’s go deeper into each one.

Main navigation redesign

This is the first visually impactful thing we’ll be working on and will create the space for the rest of the improvements. The planning to make this happen started at DrupalCon Prague 2022. We designed a plan that would let us implement it in different phases and would let us isolate blockers that could keep this from moving forward. The Information Architecture refactor is one of those potential blockers because of the big efforts involved. Right now, we’re focused on three main sections:

Card sorting

A card sorting exercise helped us identify which information structure best matches the mental models of Drupal users. This will help us organize menu links into groups and labels that make the most sense. We hope to learn what each type of user expects from the menu structure and base future changes on that insight.

Prototype tests

We used the weeks before DrupalCon to assemble a prototype to test layout design changes with new navigation patterns and an initial design with the Toolbar moved to the left. During the contribution day, several people tested these prototypes, so now we’re iterating on them for further improvements.

Implementation into static HTML

The existing Toolbar has several issues with accessibility, usability, and dated code. For example, it still uses jQuery. We’ve decided to start the code from scratch with static HTML prototypes, which let us more easily test design interactions and begin improving the final code. Once this is ready, we’ll create a contrib project that we can test with other modules and eventually propose for Drupal core as a replacement for the existing Toolbar.

Dashboard

Some of our research showed the need to improve the starting point for user journeys, whether it’s a Site Builder or a content user. So we decided that a good way to improve the experience would be to provide customizable dashboards. A technical solution for these Dashboards is in the works, but we’re also working on the default content each should have. If you want to know more about this, you can attend our session with Christian López Espinola at Drupal Dev Days and DrupalCon Lille.

Field UX

The Field UX effort is something Acquia’s Drupal Acceleration Team has been working on already. They have done some development and research but needed some help with the design. This collaboration evolved at several levels and ended up helping us define several navigation patterns that would benefit the whole UI.

Beyond the existing efforts

We’ve covered only the more defined and advanced UX efforts, but several critical pieces of Drupal still need UX improvements, like Layout Builder and Paragraphs. We also want to glean what we can from projects like Gutenberg and Fieldable Fields. Overall, we need to improve how we deal with Drupal’s key structured content capabilities in a way that is easy to use and understand. 

Thankfully, several winners of the Pitch-burgh innovation contest are related to this, so hopefully, collaboration to improve Drupal’s UX will expand, and we’ll be able to make big changes in the coming years.

Who is “we”?

When discussing Drupal or open-source contributions, we tend to think about people, not companies. But we need to be realistic about sustainable contributions. Drupal has a great community of individual contributors who have helped make Drupal what it is today. But company sponsorships and collaborative efforts have helped push some of these larger, longer-term initiatives along. That is one of our motivations for our own sponsored contribution program.

So far, Lullabot, Acquia DAT, and 1xInternet have been collaborating to improve the admin UI, and if you think your company could help, contact us!

More sponsored contributions show the maturity of our community. It’s not sustainable to ask so many individuals to work for free to get big changes pushed through.

How to get involved

We’re organizing ourselves mainly in the #admin-ui Slack channel, mostly for the work on the main navigation and other research. For more specific initiatives, you can join the #dashboard channel for the Dashboard efforts and the #field-ux channel for the Field UX changes. We need help from developers, but also from people who can help write documentation, conduct research, or contribute designs.

May 31 2023

There are many good reasons for Drupal-focused organizations to contribute to Drupal, but for Lullabot, it comes down to these:

  • Contributions to Drupal make Drupal more useful, benefiting our existing clients and encouraging organizations to switch to Drupal.
  • Public recognition of our contributions leads to public recognition of our expertise, which leads to more sustainable business for Lullabot.
  • Our team likes to contribute to Drupal! Recently, they’ve enjoyed the focus on “Making Drupal Beautiful.” It’s also the right thing to do.

As one of the first Drupal agencies, Lullabot has a strong history of Drupal contribution. We have led or significantly contributed to new modern Drupal features, such as:

Where our contribution time has come from

Traditionally, we book client-facing team members for 30 hours per week on client projects. A typical 40-hour work week leaves around 10 hours for internal meetings, contributions to working groups like our internal security team and ESOP committee, and more. Our team members have a great deal of autonomy in how they spend this time. Drupal contribution is just one outlet they could choose.

A few years ago, we expanded our definition of Education Time Off (currently three days a year) to include contributing to open-source projects. This works great for projects that need larger blocks of time to get things done. It can be hard to make progress when bouncing in and out for an hour or two.

As Lullabot grew, getting close to 70 employee-owners, we started to sense that our approach to Drupal contributions needed more thought. We saw that:

  • There were many competing demands for unbooked client time and ETO time. This differed from 10 or 15 years ago when Lullabot’s internal time was mostly used for writing articles or contributing to Drupal.
  • Time we could get in between projects to focus on Drupal contribution was mostly reactive. It was hard to plan for this time because client renewals often go all the way to the last minute.
  • Team members were finding it difficult to plan and execute on larger Drupal initiatives without also dedicating significant amounts of their personal time to the work.

With these challenges in mind, we asked ourselves: How could we leverage our larger team to optimize and improve our contributions for our clients, our employees, and the Drupal community at large?

Our sponsored contributions program

Lullabot plans to dedicate one of our team members to a Drupal contribution project for three to six months each year, starting in 2023. This program aims to do the following:

  • To sponsor projects, not positions. For us, success means some finished work that we can take and use on most of our client projects, even if it does not get committed to Drupal core.
  • To focus on Drupal contributions. While our team has contributed to non-Drupal projects, Drupal is the core of our business and where we know we can have the biggest impact.
  • To support team members who have prior contribution history in Drupal core or contributed projects. Contribution is defined widely, including design, project management, and code contributions.

Our leadership and management teams will evaluate proposals from our team on various criteria. A proposal should answer the following questions:

  • Are its goals aligned with Lullabot’s needs and those of our clients?
  • Does it align with existing Drupal community priorities?
  • Does the proposing team member have a prior contribution history?
  • How feasible is a successful delivery of working software in the proposed timeline?
  • What are the plans for managing risks and exceptions as they come up during the project, especially as they relate to non-Lullabot Drupal contributors who may be on the critical path to success?

Our sponsorship for 2023: Drupal administration layout redesign

For 2023, Cristina Chumillas will be working on Drupal administration usability improvements. She’ll be dedicated to this effort from May to the end of October.

It’s going to be great to have dedicated time for this work, as I can work on designs that will be the base of other much-needed improvements in Drupal’s user interface, such as how we navigate across different sections of Drupal’s administration UI, or the way we create and manage fields.

Cristina Chumillas, Senior Front-end Developer

If you or your organization is interested in contributing to improvements to Drupal’s administration interface, please join us at the contribution day in Pittsburgh, or join the #admin-ui Slack channel at drupal.slack.com.

See you in Pittsburgh!

Our goal with this post is to encourage and inspire other agencies and organizations using Drupal to consider how best to organize and focus their Drupal contribution efforts. This article shares several approaches to sponsoring work on Drupal, and it’s unlikely that simply adopting one as-is will meet your needs. Find us at our booth at DrupalCon Pittsburgh - we’d love to chat more about others’ thoughts on this topic.

Appendix: proposal outline

Here is the proposal outline we use as a starting point for contribution proposals. We hope you find this helpful.

Project introduction

  • Explain your proposal in a paragraph or two.
  • Use links to upstream issues or references if they exist.
  • List some current or past clients or projects that would have benefited from this work if it had already existed.

Background and prior work

  • Are there existing community projects that compete with or complement this proposal?
  • Were there previous projects or initiatives in this space that weren’t successful? Why not, and how will this be different?
  • Is there inspiration for the project from custom work we’ve done for clients or from outside the Drupal community?

Deliverables and scope

  • How will we know when this project is done?
  • What is in scope and out of scope?
  • A list of high-level features, improvements, or changes that this project will deliver.
    • Each feature should be broken down by “must-have” (the work is unusable without it) and “nice to have” (if it doesn’t get done, we’ll still have completed something valuable).

Resources and risks

  • What do you need to complete this project? Do you need help from skills you don’t have, others in the Drupal community, or core committer and manager approval?
  • Are there other individuals or agencies that we should collaborate and synchronize with?
  • Are there signals you’ll watch for to know you need to course correct or “fail fast”?
  • How much time do you need to complete the project?
  • If the scope isn’t completed by the end of the sponsored time, how will whatever is complete be usable by Lullabot and our clients?
  • Are there current projects or clients who would be good targets for testing early work that’s completed?

Benefits to Lullabot and our clients

  • What types of clients are most likely to benefit from this work?
  • Are there types of clients who we know will never benefit?
  • What audiences do you expect to be most interested in and affected by the project?

Open questions and next steps

  • Is there work to be done before starting the project? Do you need help researching and discussing with others? Describe those topics here.
May 10 2023

Every website needs a host, and a fantastic website on a mismatched hosting platform can become a terrible website. You've spent a lot of time and money on your website (or websites). Deciding where to host should not be an afterthought. 

Complex websites with content management, media management, and authenticated users have more complex hosting requirements than simple static websites. If your project warrants a CMS like Drupal, you need to ensure your hosting platform matches.

Here are some questions to ask to ensure you choose the appropriate home for your investment.

What are your goals and priorities? 

Or alternatively, why is your current hosting solution not working? Where is it falling short? There are several things to evaluate:

  • Security
  • Performance and reliability
  • Price
  • Deployment workflows
  • Consistent updates to the underlying software
  • Management tools
  • Customer support

But you can't just list the things you want. You must work to prioritize them. Otherwise, you'll have stakeholders that want all of these things equally, to the maximum measure. To prioritize, however, you must have a clear view of your goals.

If you had the choice between a cheap solution or a more secure solution, which one would you go for? It depends. If you're a financial institution, you'll want to prioritize security. If you're a marketing agency, you don't want an insecure website, but you probably care more about performance and price.

You must also know what each of these criteria means for your organization. How do you define security? Is it based on certifications, like SOC2 audits? Is it based on specific hosting features, like proactive protection against common security holes? Is it based not on capabilities but on who owns what layers of the stack? A Drupal site owner has much greater responsibility for the security of a Drupal site on AWS or Linode than when hosting with a managed provider.

Your definition of performance may differ. Are you seeking to host one high-performing website or a network of lower-traffic websites? Is your traffic relatively even throughout the year, or are there days when your site sees 10x or even 1000x the traffic?

If you are hosting one large website and need top performance, what will it take to reach that goal? It can be hard to scale out one big website. One provider might be able to do it well, but the cost might be prohibitive.

If you have lots of smaller websites and are optimizing for costs, that means shared resources. This is fine as long as the sites don't get much traffic. But what happens if one of those websites runs a big, viral promotion and gets a traffic surge? Will all the other sites go down?

Don't forget organizational politics. A stakeholder with lots of authority may have something against a particular hosting provider. One of your top goals may just be to remove yourself from your current vendor, whatever the cost. These political goals might be hidden and come out of hiding at the most inconvenient times, so be sure to dig deep. For example, we had one client who would not use anything that used AWS under the hood. This automatically eliminated certain providers. The earlier you get this information, the better.

What are your in-house capabilities? And are they available?

Your options are limited based on your in-house resources and their availability. You might have the expertise on staff to manage your own hosting, spin up servers, secure them, and maintain them. 

But will they have the dedicated time to do this? Are there other priorities that could pull them away from these duties? Go back to your performance goals. If the website goes down in the middle of the night, do you need someone to take that call? If you went with an in-house solution, would you need to hire another person, or would you need to contract with a company for first-tier support?

Your answers to these will limit or expand your potential hosting options before you even evaluate them. For example, if you don't have anyone to monitor servers and have limited Drupal expertise on staff, you'll be restricted to managed Drupal hosting. Then again, perhaps your site has a limited or narrow audience, so weekend downtime doesn't matter.

Go back to your goals and priorities. If you need to hire talent to meet those goals, consider that during your evaluation. These costs and expectations should be transparent.

In some cases, you might not even have the in-house expertise to do the evaluation itself. That's ok. You can hire plenty of experts and consultants to give you an honest recommendation. We've helped many organizations with these same questions.

Which hosting providers do you want to evaluate?

You have your goals and have prioritized them. You have honestly assessed your capabilities. Now, it's time to choose who you want to evaluate. Don't make this decision lightly. Every provider you put on this list increases the time you will need to assess them honestly.

You probably already have an idea of where you want to host. For Drupal hosting, Pantheon and Acquia might be on your list. If you have the resources and capabilities, having your own data center might be one of the options. Server providers like Linode and DigitalOcean can be good options, as are cloud services like AWS and Google Cloud.

Once you have developed your list of options, start evaluating based on your already established criteria.

Do you really need the extras the hosting provider is offering?

Hosting providers get you in the door with hosting and then want to upsell you with other services: personalization, digital asset management, managed updates, and more. Many hosting companies want to be technical MarTech companies because that's where the money is.

You might need or want their extra services. Maybe one of their extra offerings is a big positive for choosing them. Or maybe not. Again, go back to your goals and priorities. Most organizations just want reliable hosting. Evaluate based on that, not the bells and whistles that start blaring in front of your face.

You've done a lot of work thinking through what you need, so don't deviate from it in the final hour.

Keeping track

Have each evaluator give scores independently for the same criteria. Don't use the numbers to choose a winner, but to determine points where evaluators disagree or where further research is needed. A simple spreadsheet with line items and scores of 1-3 works well. Give each person their own copy so they can work independently without being influenced by other scores. It also helps to determine yes/no criteria as quickly as possible because that can help rule out providers before diving too deep.
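The "find where evaluators disagree" step above can be sketched in a few lines of code. This is an illustrative example only; the evaluator names, criteria, and scores are made up, and a shared spreadsheet works just as well.

```python
# Illustrative sketch: flag criteria where independent evaluators disagree.
# Evaluators score each criterion from 1-3; a large spread signals a point
# that needs discussion or further research, not an automatic winner.
scores = {
    "Security": {"alice": 3, "bob": 3, "carol": 2},
    "Performance": {"alice": 1, "bob": 3, "carol": 2},
    "Price": {"alice": 2, "bob": 2, "carol": 2},
}

def disagreements(scores, threshold=2):
    """Return criteria whose score spread (max - min) meets the threshold."""
    flagged = []
    for criterion, by_evaluator in scores.items():
        values = by_evaluator.values()
        if max(values) - min(values) >= threshold:
            flagged.append(criterion)
    return flagged

print(disagreements(scores))  # Performance has a spread of 2, so it needs discussion
```

Running this over real scores gives you an agenda for the next evaluation meeting rather than a final ranking.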

Finally, keep track of the gaps each shortlisted provider has. If your request is significant enough, they may be willing to prioritize missing features on their roadmap. If they promise to have a desired feature in three months, that could add an asterisk to your deliberations.

However you keep track of your evaluations, you'll want to keep it open and transparent. Make it clear how you have come to your decisions, and be ready to explain your rationale. Writing up a summary document for those who don't want to dive into spreadsheets is a good idea.

If all this sounds daunting, we can help you through this process. We help organizations every day uncover requirements, set goals, rank priorities, and come to a good decision.

Mar 29 2023

Content teams want the flexibility to publish content creatively. They want landing pages to be dynamic and reflect the vision they have inside their heads. Organizations want to encourage brand and writing-style consistency (sometimes across a whole network of websites). They also want to ensure their content is maintainable and meets web accessibility and security standards.

How do we marry these desires together? One way is with proper guardrails for the authoring experience. Putting guardrails in place can help keep content within certain parameters without feeling too restrictive for authors and editors.

Types of guardrails

What makes a good guardrail? For one, guardrails are there in case of emergencies. You don't want to make a habit of bumping into them because that would ruin the paint job on your car. Ideally, they aren't noticed unless people are looking for them. Guardrails are there to guide people along the way they are already going while offering protection if something starts to go wrong.

The content-authoring experience of a website should be akin to driving on a well-planned and maintained roadway. Here are different types of guardrails you can use:

  • Use good labels, help text, and task-based navigation. Similar to clear signage on the road, you don't want your editors guessing what goes in a field or what to do next. Labels should follow the voice and tone outlined in your organization's content style guide. Help text should provide clear examples of expected content for each field. Menus and other links in the user interface should be clear and contextual to the user and the task at hand. Don't confuse authors with extra options they might not even have access to.
  • Plan out the right amount of space. Have you ever driven down a road without a shoulder, where the lane feels too narrow, and you feel like the tires are about to fall off the edge? Don't make your authors feel like that while they are entering content. They need ample space to write what they need while following voice, tone, and style guidelines. But they also need those lines painted somewhere.
  • Provide the ability to write drafts, save progress, and view previews. Authors should not feel like they need to speed. Enabling them to enter content at their own pace, knowing that their progress won't be lost or published before it is ready, contributes to a sense of safety.
  • Fix functionality bugs or user experience (UX) obstacles. Know the feeling of driving over a road with potholes and rough, uneven pavement? That's what content authors and editors feel like when they encounter functionality bugs and bad UX. Test your authoring forms and processes rigorously and with real content to ensure all components work as expected.
  • Optimize for page performance. Consider using automated image optimization so that pages on your site are performant and load quickly for visitors. A visitor trying to read an article that is slowly loading 50MB of pictures can feel like being stuck behind a garbage truck on a one-way street.
  • Limit external code in content. Put restrictions on what authors can put into WYSIWYG fields, like third-party embedded code or JavaScript snippets, for site security. Authors usually don't want to deal with raw code anyway, so talk to them about why they are trying to insert such things to figure out a better solution to their needs.
  • Prevent accessibility issues stemming from content entry. Like taking a driver's education class to get a driver's license, authors need editor training to understand and follow the rules of the road. Training can help authors feel comfortable behind the wheel by letting them become familiar with the different forms and authoring tools they will use in the system. Training is an opportunity to give guidance on how to correctly structure content with headings and list markup and to include well-written alt text for non-decorative images.

Structured content

We advocate for structured content, which requires planning and organizing content into discrete fields for each distinct piece of information, even for a simple content type like an article. Instead of one large field to hold everything, we recommend structuring content piece by piece based on what it is and how it will be used to convey information across the site or content system. For example, on an article content type, we would typically start with an article title field, a published date field, an author name field, and an article body field at minimum.

For certain use cases, the content can be broken down even more. Some content may be better served with discrete month, day, and year fields instead of a full date field that contains all three pieces, or with individual fields for a person's title, first name, last name, and suffix instead of a single name field. 

But you don't want to go too far. Otherwise, you risk complicating content entry for your content team. Over-structuring content can make content entry more tedious than is necessary and introduce complexity that impacts the site implementation and maintenance. However, content systems that aren't set up with enough structure can negatively impact how your content does in search results, which prevents site visitors from finding your content.
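To make the contrast concrete, here is a conceptual sketch of the article content type described above. This is illustrative shorthand only, not actual Drupal field configuration; the field names and types are assumptions for the example.

```yaml
# Conceptual sketch of structured content for an article, not real Drupal config.
article:
  title: plain text (required)
  published_date: date
  author_name: plain text
  body: formatted long text

# The over-structured extreme: the same information split further than most
# teams need, which makes content entry more tedious.
article_over_structured:
  title: plain text (required)
  published_month: integer
  published_day: integer
  published_year: integer
  author_title: plain text
  author_first_name: plain text
  author_last_name: plain text
  author_suffix: plain text
  body: formatted long text
```

The right level of granularity depends on how each piece of information will actually be queried, displayed, or reused.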

Authoring flexibility vs. rigidity

Finding the right balance between content authoring flexibility and rigidity is difficult because the perfect balance in one system can be lopsided in another one. It depends upon the CMS and the authors who use it.

Too rigid

If the authoring experience is too rigid, authors fight against the system's constraints and feel frustrated, defeated, or disenfranchised. For example, authors may not be able to complete tasks they are asked to do, like posting an alert to the homepage, if only the site administrator can do that. This is especially true for authors who previously enjoyed a lot of authoring freedom.

Too flexible

If the authoring experience is too flexible, the integrity of the design and content system can be compromised and the content's message is lost. The content system becomes difficult to maintain as the quantity of content expands without enough structure or oversight. Content quality becomes inconsistent due to too many options and, ironically, is inflexible to future innovations. Authors have a wide unmarked road without lanes or signage, but this also means they have no clear way to get to their destination. This can be a negative experience for authors who are used to doing something one way and one way only. They can become overwhelmed with too many options to do what they need to do.

Finding the right balance

You don't want narrow, rigid guardrails because they hinder creativity and frustrate content teams. Authors can feel like they are part of an assembly line instead of an important part of the creative process. You also risk content getting locked into dated trends. A content authoring system with guardrails works best when

  • the authoring functionality matches the author's expectations and level of experience, 
  • authors can complete the tasks they need to, and 
  • content creativity is possible while avoiding chaos.

Media asset management is an example that illustrates the importance of finding the right balance. Content managers often want to ensure that all media assets added to the site, like images or videos, meet rigorous content, style, copyright, and resolution quality standards. They may also want to encourage content authors to reuse images or videos already in the media library to reinforce branding with approved imagery. 

However, for authors to find images in the library to use, the image and video assets need to be created with metadata about them included. The metadata enables browsing and filtering on aspects of the media asset, like what is pictured, the image aspect ratio, file size, or the year the image was taken. For this to be possible, authors need to completely fill out the metadata fields each time they add a media asset to the library. By accounting for this additional work, you can evaluate if media asset reuse will benefit your organization or create more burden on the authors than the value it adds.

How do you find the right amount of guardrail structure to guide your content authoring experience? First, embrace the idea that it may need to change over time. The balance of authoring flexibility and constraints will need to adjust as your organization's content maturity level increases and new content goals are set. To get started, meet with people at your organization to talk about your current content system.

Interviews

Interview site stakeholders and content authors to learn more about the current content process. Ask questions to figure out the current state of your content, what your organization's goals for the content are, who the audience you want to reach with your content is, and what the practical limitations on the content creation lifecycle are. Some questions to ask in these interviews:

  • Who is the primary audience of the site?
  • What are the most important kinds of information currently on the website?
  • What does the current content authoring process look like? Can you talk through it? 
  • Who creates and manages content? Is it a particular role held by one person or a team?
  • What are some pain points you have when authoring content?

You may uncover that the content system does not match the skill level of most content authors. There may be competing priorities for target audiences and content messaging. These valuable findings can inform which guardrails to put in place or even remove. 

Even with guardrails, some authors will need guidance and training on using the content system. Other authors will be limited in what they can achieve themselves, even though they are capable of doing more. Achieving that balance is necessary to deliver content that consistently meets your organization's content guidelines.

Saying "no" to requests for more flexibility

Sometimes authors will come to you asking for more flexibility. The current content structure isn't working for them; they feel they can't realize some of their goals. They want you to take down some of the guardrails.

First, you need to get to the root of their need. Gently asking "why" will allow you to understand the request better. There are three possible outcomes:

  • The flexibility they want is aligned with content goals and will provide value, but it requires development work. Discuss what the feature requirements are, and prioritize the work with your development team.
  • The flexibility they want is already achievable within the existing system. If this is the case, you need to surface it to content authors more clearly by including it in the content training, doing a demonstration, and writing documentation. 
  • The flexibility they want goes against the stated goals for your organization's content. This doesn't mean a hard "no." It just means you must facilitate further discussions to reach a resolution.

However, rejecting a request is harder if your organization has no unifying mission or goals set for its content.

The importance of a clear destination

It's easy to end up driving in circles if you don't know where you are going. How do you prioritize anything related to content if you don't have clear goals to work towards? You are left to personal preferences. Any change, any guardrail, any attempt at more or less content system flexibility is easier to evaluate if there is a central and shared mission.

The American Booksellers Association's new IndieCommerce™ e-commerce platform had a clear mission driving all interface decisions: to provide "the tools for indie bookstores to create unique, content-rich, and easy-to-operate, fully transactional, e-commerce-enabled websites." 

In practical terms, they wanted to allow for content creativity while still having guardrails to enforce standards. This mission was the through-line for the entire project. It allowed them to dedicate resources to the platform's user experience. This focus resulted in providing a custom administration dashboard, task-based pages for managing the bookstore site and commerce settings, and flexibility to style a site's look and feel while meeting WCAG AA. Their mission prioritized strategy and development work that enhanced the user experience and ensured that the experience was maintained with comprehensive testing and iterative feedback from start to finish.

Having a mission in place makes your decisions around guardrails much easier. You can prioritize work that furthers the mission, which gives you clarity about what the content authoring experience needs.

Conclusion

With a commitment to optimizing the content authoring experience, organizational agreement on a mission, and willingness to talk to people about content authoring, you can establish the right balance between flexibility and rigidity in your content system. Setting up guardrails that fit your content process empowers content authors to bring their content to life while ensuring that content meets brand and style guidelines. Research, test, iterate, and repeat.

If you'd like help finding the right balance for your content teams, we can work with you to create an authoring experience that makes content authors happy and gets your content to the right audience.

Mar 06 2023

Working in the front end of Drupal can be difficult and, at times, confusing. Template files, stylesheets, scripts, assets, and business logic are often scattered throughout big code bases. On top of that, Drupal requires you to know about several drupalisms, like attaching libraries to put CSS and JS on a page. For front-end developers to succeed in a system like this, they need to understand many of Drupal's internals and its render pipeline.

Looking at other stacks in our industry, we observed that many try to bring all the related code as close as possible. Many of them also work with the concept of components. The essence of components is to make UI elements self-contained and reusable, and while to some extent we can do that in Drupal, we think we can create a better solution.

That is why we wanted to bring that solution to Drupal Core. Recently, the merge request proposing this solution as an experimental module was merged. This article goes over why we think Drupal needs Single Directory Components and why we think this is so exciting.

The goals of SDC

Our primary objective is to simplify the front-end development workflow and improve the maintainability of custom, Core, and contrib themes. In other words, we want to make life easier for Drupal front-end developers and lower the barrier of entry for front-end developers new to Drupal.

For that, we will:

  • Reduce the steps required to output HTML, CSS, and JS on a Drupal page.
  • Define explicit component APIs, and provide a way to replace a component that a module or a theme provides.

This is important because it will vastly improve the day-to-day of front-end developers. In particular, we aim for these secondary goals.

  • HTML markup in base components can be changed without breaking backward compatibility (BC).
  • CSS and JS for a component are scoped and automatically attached to the component and can be changed without breaking BC.
  • Any module or theme can provide components, and any component can be overridden within your theme.
  • All the code necessary to render a component is in a single directory.
  • Components declare their props and slots explicitly. Component props and slots are the API of the component. Most frameworks and standards also use this pattern, so it will be familiar.
  • Rendering a component in Twig uses the familiar include/embed/extends syntax.
  • Facilitate the implementation of component libraries and design systems.
  • Provide an optional way to document components.

Note that all this is an addition to the current theme system. All of our work is encapsulated in a module by the name of sdc. You can choose not to use single directory components (either by uninstalling the module or just by not using its functionality). The theme system will continue to work exactly the same.

History

Whenever SDC (or CL Components) comes up, we get the same question: "Isn't that what UI Patterns has been doing since 2017?"

The answer is yes! UI Patterns paved the way for many of us. However, we did not start with UI Patterns for the proposal of SDC. The main reasons for that are:

  1. UI Patterns is much bigger than we can hope to get into Core. We share their vision and would love to see site builder integrations for components in Drupal Core one day. However, experience tells us that smaller modules are more likely to be accepted in Core.
  2. The UI Patterns concepts were spot on six years ago. Our understanding of components in other technologies and frameworks has changed what we think components should be.

In the end, we decided to start from scratch with a smaller scope, with the goal of creating something that UI Patterns can use someday.

We started this initiative because many of us have several custom implementations with the concept of Drupal components. See the comments in the Drupal.org issue in the vein of "We also do this!" Standardizing on the bare bones in Core will allow extending modules and themes to flourish. Most importantly, these modules and themes will be able to work together.

Architectural decisions

The initial team, which included Lauri Eskola, Mike Herchel, and Mateu Aguiló Bosch, met regularly to discuss the technical architecture, principles, and goals of SDC. Here are some of the fundamental architectural decisions we landed on:

Decision #1: All component code in one directory

As we have learned from other JavaScript and server-side frameworks, components must be self-contained. The concepts of reproducibility and portability are at their core. We believe that putting components in a directory without any other ties to the site will help implement those concepts. You can take a component directory and copy it into another project, tweaking it along the way without a problem. Additionally, once a developer has identified they need to work with a given component (bug fixes, new features, improvements, etc.), finding the source code to modify will be very easy.
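As a sketch, a self-contained component directory might look like this (the theme and component names are illustrative):

```text
my_theme/
  components/
    my-component/
      my-component.component.yml   # component metadata and prop/slot definitions
      my-component.twig            # the component template
      my-component.css             # styles for the component
      my-component.js              # behavior for the component
```

Everything needed to render, style, and script the component lives in one place, which is what makes copying it between projects practical.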

Decision #2: Components are YML plugins

We decided that components should be plugins because Drupal needs to discover components, and we needed to cache the component definitions. Annotated classes were a non-starter because we wanted to lower the barrier for front-end developers new to Drupal. We believe that annotated PHP classes fall more in the realm of back-end developers. While there are many file formats for the component definition for us to choose from, we decided to stay as close as possible to existing Drupal patterns. For this reason, components will be discovered if they are in a directory (at any depth) inside of my_theme/components (or my_module/components) and if they contain a my-component.component.yml.
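A minimal component definition might look like the following sketch. The exact set of supported keys is defined by the sdc module, so treat this as illustrative and consult the module's documentation for specifics.

```yaml
# my_theme/components/my-component/my-component.component.yml
# Illustrative sketch of a minimal component definition.
name: My Component
status: experimental
description: A short, human-readable description of the component.
```

The presence of this file in a `components` directory is what makes Drupal discover the component as a plugin.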

The alternative we considered more seriously was using Front Matter inside the component's Twig template. Ultimately we discarded the idea because we wanted to stay close to existing patterns. We also wanted to keep the possibility open for multiple variant templates with a single component definition.

Decision #3: Auto-generated libraries

We believe this is a significant perk of using SDC. We anticipate that most components will need to have CSS and JS associated. SDC will detect my-component.css and my-component.js to generate and attach a Drupal library on the fly. This means you can forget about writing and attaching libraries in Drupal. We do this to lower the barrier of entry for front-end developers new to Drupal. If you are not satisfied with the defaults, you can tweak the auto-generated library (inside of the component directory).

Decision #4: Descriptive component API

Early in the development cycle, we decided we wanted component definitions to contain the schema for their props. This is very common in other technology stacks. Some use TypeScript, others prop-types, etc. We decided to use JSON Schema. Even though Drupal Core already contains a different language to declare schemas (a modified version of Kwalify), we went with JSON Schema instead. JSON Schema is the most popular choice to validate JSON and YAML data structures in the industry. At the same time, Kwalify dropped in popularity since it was chosen for Drupal 8 nearly 11 years ago. This is why we favor the latter in the trade-off of Drupal familiarity vs. industry familiarity. We did this to lower the barrier of entry for front-end developers new to Drupal.

The schemas for props and slots are optional in components provided by your themes. They can be made required by adding enforce_sdc_schemas: true to your theme info file. If your components contain schema, the data Drupal passes to them will be validated in your development environment. If the component receives unexpected data (a string that is too short, a boolean where a string was expected, an unexpected null, ...), a descriptive error will tell you early on, so the bug does not make it to production.
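
For example, a theme can opt into required schemas with a single line in its info file (theme name illustrative):

```yaml
# my_theme.info.yml (fragment)
name: My Theme
type: theme
core_version_requirement: ^10
# Fail loudly when a component in this theme lacks schemas for its props.
enforce_sdc_schemas: true
```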

Schemas are also the key to defining the component API and, therefore, assessing compatibility between components. As you'll see below, you can only replace an existing component with a compatible one. Moreover, we anticipate prop schemas will be instrumental in providing automatic component library integrations (like Storybook), auto-generating component examples, and facilitating automated visual regression testing.

Decision #5: Embedded with native Twig tools

To print a component, you use native Twig methods: the include function, the include tag, the embed tag, and the extends tag. SDC integrates deeply with Twig to ensure compatibility with potential other future methods as well.
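
As a quick sketch, assuming a component with the plugin ID my_theme:card, the same component can be printed with either mechanism; the embed tag is the one that lets you fill slots:

```twig
{# Props only: the include function is the most compact form. #}
{{ include('my_theme:card', { title: 'Hello' }, with_context = false) }}

{# Props and slots: embed the component and fill a slot via a block. #}
{% embed 'my_theme:card' with { title: 'Hello' } only %}
  {% block body %}
    <p>Slot content goes here.</p>
  {% endblock %}
{% endembed %}
```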

In SDC, we make a distinction between Drupal templates and component templates. Drupal templates have filenames like field--node--title.html.twig and are the templates the theme system in Drupal uses to render all Drupal constructs (entities, blocks, theme functions, render elements, forms, etc.). By using name suggestions and applying specificity, you make Drupal use your template. After Drupal picks up your Drupal template, you start examining the variables available in the template to produce the HTML you want.

On the other hand, component templates have filenames like my-component.twig. You make Drupal use your component by including them in your Drupal templates. You can think of components as if you took part of field--node--title.html.twig with all of its JS and CSS and moved it to another reusable location, so you can document them, put them in a component library, develop them in isolation, etc.

In the end, you still need the specificity dance with Drupal templates. SDC does not replace Drupal templates. But, if you use SDC, your Drupal templates will be short and filled with embed and include.

Decision #6: Replaceable components

Imagine a Drupal module that renders a form element. It uses a Drupal template that includes several components. To theme and style this form element to match your needs, you can override its template or replace any of those components. The level of effort is similar in this case.

Consider now a base theme that declares a super-button component. Your theme, which extends the base theme, makes heavy use of this component in all of its Drupal templates, leveraging the code reuse that SDC brings. To theme the pages containing super-button to match your needs, you'll need to override many templates or replace a single component. The level of effort is nothing similar.

This is why we decided that components need to be replaceable. You cannot replace part of a component; components are replaced atomically. In our example, you would copy, paste, and tweak super-button from the base theme into your custom theme. The API of the replacing component needs to be compatible with the API of the replaced component; otherwise, bugs might happen. Both components must define their props schema for a replacement to be possible.
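
As an illustration (component and theme names hypothetical), the replacing component declares what it replaces in its definition via the replaces key, and it must ship a compatible props schema:

```yaml
# super-button.component.yml in your custom theme (fragment)
name: Super Button
status: stable
# Atomically replace the base theme's component everywhere it is used.
replaces: 'base_theme:super-button'
props:
  type: object
  properties:
    text:
      type: string
      title: Text
```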

Example of working with SDC

Let's imagine you are working on theming links for your project. Your requirements include styling the links, tracking clicks for an analytics platform, and adding an icon when the URL is external. You decide to use SDC, so you scaffold a component using drush (after installing CL Generator). You may end up with the following (you'll want to use your custom theme instead of Olivero):

After the initial scaffold, you will work on the generated files to finalize the props schema, add documentation to the README.md, include the SVG icon, and implement the actual component. Once you are done, it might look something like this.

web/core/themes/olivero/components
└── tracked-link
    ├── img
    |   └── external.svg
    ├── README.md
    ├── thumbnail.png
    ├── tracked-link.component.yml
    ├── tracked-link.css
    ├── tracked-link.js
    └── tracked-link.twig

Below is an example implementation. Be aware that, since this is for example purposes only, it may contain bugs.

# tracked-link.component.yml
'$schema': 'https://git.drupalcode.org/project/drupal/-/raw/10.1.x/core/modules/sdc/src/metadata.schema.json'
name: Tracked Link
status: stable
description: This component produces an anchor tag with basic styles and tracking JS.
libraryDependencies:
  - core/once
props:
  type: object
  properties:
    attributes:
      type: Drupal\Core\Template\Attribute
      title: Attributes
    href:
      type: string
      title: Href
      examples:
        - https://example.org
    text:
      type: string
      title: Text
      examples:
        - Click me!

Note how we added an attributes prop. The type keyword will also accept a PHP class name, an enhancement we had to make to the JSON Schema specification.

# tracked-link.twig
{# We compute if the URL is external in twig, so we can avoid passing a #}
{# parameter **every** time we use this component #}
{% if href matches '^/[^/]' %}
  {% set external = false %}
{% else %}
  {% set external = true %}
{% endif %}

<a {{ attributes.addClass('tracked-link') }} href="{{ href }}">
  {{ text }}
  {% if external %}
    {{ source(componentMeta.path ~ '/img/external.svg') }}
  {% endif %}
</a>

If a component receives an attributes prop of type Drupal\Core\Template\Attribute, it will be augmented with component-specific attributes. If no attributes prop is passed to the component, one will be created containing the component-specific attributes. Finally, if an attributes prop exists but is not of that type, it will be left alone.

/* tracked-link.css */

.tracked-link {
  color: #0a6eb4;
  text-decoration: none;
  padding: 0.2em 0.4em;
  transition: color .1s, background-color .1s;
}
.tracked-link:hover {
  color: white;
  background-color: #0a6eb4;
}
.tracked-link svg {
  margin-left: 0.4em;
}

Components that make use of attributes receive a default data attribute with the plugin ID. In this case data-component-id="olivero:tracked-link". We could leverage that to target our styles, but in this example, we preferred using a class of our choice.

// tracked-link.js
(function (Drupal, once) {
  const track = (event) => fetch(`https://example.org/tracker?href=${event.target.getAttribute('href')}`);

  Drupal.behaviors.tracked_link = {
    attach: function attach(context) {
      once('tracked-link-processed', '.tracked-link', context)
        .forEach(
          element => element.addEventListener('click', track)
        );
    },
    detach: function detach(context) {
      once.remove('tracked-link-processed', '.tracked-link', context)
        .forEach(element => element.removeEventListener('click', track));
    }
  };
})(Drupal, once);

With this, our component is done. Now we need to put it in a Drupal template. Let's inspect the HTML of a Drupal page to find a link; there, we'll find template suggestions to use. Let's say our links can be themed with field--node--field-link.html.twig. We can use include because our component does not have slots.

# field--node--field-link.html.twig

{% for item in items %}
  {{ include('olivero:tracked-link', {
    attributes: item.attributes,
    href: item.content.url,
    text: item.content.label,
  }, with_context = false) }}
{% endfor %}

Single Directory Components were merged into Drupal core 10.1.x. This means that they will be available for all Drupal sites running Drupal 10.1 as an experimental module.

However, we are not done yet. We have a roadmap to make SDC stable. We are also preparing a sprint at DrupalCon Pittsburgh 2023 where anyone can collaborate. And we have plans for exciting contributed modules that will make use of this new technology.

Note: This article has been updated to reflect the latest updates to the module leading to core inclusion.

Feb 22 2023

It happens often. Early in a project, a team member will offhandedly say something like "the site is slow," "my computer is slow," or "Docker is slow." Those all may be true. But it takes specialized knowledge to track down the root cause of performance issues. This article covers the same tips we share with our team to help solve workstation performance problems.

We'll cover mostly macOS, but the general framework is the same for any operating system. On Windows, use Task Manager instead of Activity Monitor, and on Linux, use htop or bpytop.

When a computer is slow, the best way to solve it is to:

  1. Identify what programs are using the most resources.
  2. Identify what types of resources are in contention.
    • CPU: Your computer's processor. If it's at 100%, programs must wait their turn to do something.
    • Memory: Where information being used by programs is stored. For example, this article is literally in memory as you view it. Your browser will keep the page in memory even after navigating away, so it's quick to come back if you hit the Back button.
    • Disk: Much slower than memory, but also much bigger. And it's persistent across restarts. For example, when restarting, macOS has to load everything from the disk because memory is cleared on a restart.
    • Network: The combination of Wifi and internet. If something is slow, but the three items above are all good, this is often the issue. For example, when updating ddev to a new version, the slowest part is almost always downloading it from the internet.
    • GPU: Graphics processing. This mostly matters for designers and those who use graphics applications like Figma, which are 3D-accelerated.

Using Activity Monitor

Activity Monitor is useful for seeing what your computer is doing. The easiest way to open it is to open Spotlight (command-space) and search for it.

When you open Activity Monitor for the first time, you will see a window like this:

Below is the memory tab. The most important is "Memory Pressure." Once it goes yellow, you're likely to start noticing slowness.

Disk is also helpful. For example, disk activity will increase significantly when pulling a new Docker image as Docker uncompresses the download.

Finally, the Network tab. If you change Packets to Data (a setting at the bottom of the window), you may find this view easier to reason about.

In the Window menu, there is a CPU History option. It creates a floating window with a graph for each CPU core. Use this window to see what happens when you're doing something in an application. For Intel users, the number of cores shown is usually doubled by Hyper-Threading: the computer this image was taken from has 8 physical CPU cores, but the processor exposes them as 16. (You can check whether your Mac uses an Intel processor by clicking the Apple icon at the top left and selecting About This Mac.)

iStat menus 

We often find Activity Monitor is too busy to glance at for a quick diagnosis. When something is slow, it helps to know what's going on right now and see a quick representation of CPU, Memory, Disk, and Network all at once. iStat Menus is a go-to app for this, shipping with several useful menubar widgets.

Here's an example set of widgets from left to right:

  1. CPU temperature. On Intel, when this gets high (90° C+ typically), your processor makes itself slower to keep cool. Apple Silicon processors will throttle too, but it's much harder to trigger in day-to-day work.
  2. Network/internet.
  3. Disk - the lights go solid when the disk is being read (blue) or written to (red).
  4. Memory use.
  5. CPU use. The i means it's using integrated graphics, not dedicated graphics, which uses more battery. Apple Silicon computers only have one graphics chip and always show a D, which can be hidden.

As an example, let's tell PHPStorm to "invalidate caches and restart." This will cause lots of CPU and disk activity as it reindexes the loaded Drupal project. We'll click on different sections of the iStat Menus menubar to bring up more detailed windows.

The following two screenshots show the CPU being used, but not fully. That's because PHPStorm is waiting on the disk to read files to index.

A little bit later, we can see the CPU applet nearly filling up. Even typing this article is slightly laggy.

Fixing problems

My CPU is constantly at 100%

If possible, quit the programs using the CPU you don't actively need. If you see high CPU use from a program you don't recognize, ask about it. Many macOS system programs like WindowServer (used to draw the screen) and kernel_task (used to talk to hardware) can't be quit. But, you may find that closing a browser tab with lots of visuals (like ads) reduces other processes significantly.

My memory is full

You either need to quit programs or upgrade your computer. If you see something oddly using a lot of memory, it may indicate a bug in that program.

My disk is constantly being used

Like the above, you need to quit programs you aren't currently using. Try to keep at least 10-20% of your disk free. Flash disks (SSD drives, which most computers have nowadays) need free space to operate effectively, and a disk that is 90% or more full will be significantly slower. When buying a new computer, remember that larger disks will be faster than smaller ones.

Downloads are slow, or calls are lagging

Rule out Wifi as a problem by connecting your computer directly to your router with a cable. Also, remember that your internet connection is shared with every other device in your home. If the problem is caused by other devices using your home internet connection, check if your router supports Active Queue Management. At the least, if you determine this is the issue, you know buying a new computer won't fix it.

Security scanners can often cause performance issues

Sophos, Bitdefender, and Windows Defender (on Windows) can make a computer horrifically slow, especially for developers. Our projects comprise tens of thousands of small files, the worst case for security scanners. Developers may find they have to exclude project directories from scanning or disable third-party antivirus entirely.

Docker is slow on macOS

Lullabot has standardized on ddev for local environments. We recommend all macOS users globally enable mutagen for good performance. For a deep dive into why Docker is slow on macOS, see Paolo Mainardi's excellent article Docker on MacOS is slow and how to fix it. We've found that colima, ddev, and mutagen generally offer native-like performance beyond the initial code sync.

Applications on Apple Silicon are not as fast as expected

If you use Migration Assistant to migrate to a new Mac, it will copy applications over, including those written for older Intel processors. These applications will run fine but are slower than native Apple Silicon apps. Use the "Kind" column in Activity Monitor to find Intel apps and upgrade or replace them. Many apps, notably Electron apps like Slack, don't ship universal binaries and require downloading a new copy manually to switch architectures. Likewise, Homebrew should be reinstalled from scratch in /opt/homebrew to get native apps. Finally, Docker apps like colima may need to be reconfigured to use a native VM instead of an Intel VM.

Conclusion

We hope this guide is as helpful for you as it has been for our team. When developing websites, wasted time can mean wasted money, and a slow computer can waste a lot of time. Having an efficient local development environment ensures we provide the most value.

Feb 16 2023

Matt Kleve gets three front-end developers together who have been working hard on Drupal core and are excited about the great things newly released in Drupal 10.

Jan 04 2023

Microsites can be a useful tool. If you need sections of your website to look different from the main theme, or you have an initiative that needs greater emphasis, or you want a content team to have more control over a specific group of content, then implementing microsites can be a good solution. 

What's the best way to implement them? And how do you know you need a microsite versus a new website altogether?

Let's start with a definition.

What are microsites?

A microsite can mean many things. At the highest level, a microsite is a section of a larger website that

  • lives on the same domain but has enough content to be considered a website if it lived on its own
  • has its own defined mission to justify its existence even if the parent website didn't exist
  • has the ability to define its own look and feel while still maintaining a clear association with its parent website

Universities and state governments might have each department as a microsite. A television network might give its most successful shows its own microsites. Many organizations naturally need microsites to help them manage, communicate, and organize information.

This can be accomplished in many ways. Given example.com/foo/bar, bar might be a folder that is an entirely different website or platform than what lives at example.com. Or it might live on a subdomain like bar.example.com. You can do this with Drupal, where each department is a complete Drupal instance. This can work well. You just need to put in the administrative elbow grease to manage and maintain all of those instances. Spinning up a new site usually requires IT and developer support. And unless you have a single sign-on solution, you'll need to manage different user accounts for each website.

For the purposes of this article, we will be dealing with microsites that all live in the same instance of Drupal. One Drupal website but many potential microsites.

How do we accomplish this?

Ways to change the look and feel

There are a few ways to change the look and feel of a section of a Drupal site, and each provides varying levels of flexibility and control.

Swap the theme based on specific criteria

Using a ThemeNegotiator, you can determine the active theme of a page based on whatever criteria you want. A class implementing ThemeNegotiatorInterface must provide at least two methods:

  1. determineActiveTheme() - returns the machine name of a theme
  2. applies() - returns TRUE if the theme from determineActiveTheme() should be used in the current context

You can use whatever criteria you want in the applies() method. Maybe you base it on the route or some property on a content type or the content type itself. For example, this service declaration of a custom ThemeNegotiator passes in an entityTypeManager, which will let us load the node data of the current page inside the applies() method.

example.theme.negotiator:
  class: '\Drupal\example_module\Theme\ExampleThemeNegotiator'
  arguments: ['@theme_handler', '@entity_type.manager']
  tags:
    - { name: theme_negotiator }
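
A minimal sketch of the negotiator class itself (class name, theme name, and criteria are illustrative):

```php
<?php

namespace Drupal\example_module\Theme;

use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\Extension\ThemeHandlerInterface;
use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\Core\Theme\ThemeNegotiatorInterface;
use Drupal\node\NodeInterface;

/**
 * Switches to a microsite theme based on the current node.
 */
class ExampleThemeNegotiator implements ThemeNegotiatorInterface {

  public function __construct(
    protected ThemeHandlerInterface $themeHandler,
    protected EntityTypeManagerInterface $entityTypeManager,
  ) {}

  /**
   * {@inheritdoc}
   */
  public function applies(RouteMatchInterface $route_match) {
    // Illustrative criterion: use the microsite theme on blog nodes.
    $node = $route_match->getParameter('node');
    return $node instanceof NodeInterface && $node->bundle() === 'blog';
  }

  /**
   * {@inheritdoc}
   */
  public function determineActiveTheme(RouteMatchInterface $route_match) {
    // Machine name of a theme that is installed on the site.
    return 'example_microsite_theme';
  }

}
```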

If you want to set this from the UI, you can use the Theme Switcher module to define some basic rules to control when a theme is shown. You still need to create and customize the theme, however.

With a new theme comes a different set of theme settings and block layouts and templates. This is the most flexible option but requires new custom code for every microsite. It makes sense if you have a predefined number of microsites.

Use the same theme but add classes or libraries based on context

If you don't want the overhead of different themes or your look and feel to keep the same structure as the main theme, adding contextual classes and/or libraries is a good option. 

For one project, we determined distinctive sections based on the path of a node. We had code that determined what section a page belonged to and passed that to the front end via preprocessing.

/**
 * Implements hook_preprocess_page().
 */
function example_preprocess_page(array &$variables) {
  $variables['is_blog'] = \Drupal::service('example.path_matcher')->isBlogPage();

  if ($variables['is_blog']) {
    $variables['page']['#attached']['library'][] = 'example/blog';
  }
}

Like switching themes, this option also makes the most sense if you have a pre-defined number of microsites and don't expect to add more since it requires custom code for each one.

Have fields on the content itself control the look and feel

A content type can have fields that control its look and feel. Change colors, upload a logo, have unique menus and link lists, different layouts, fonts, and more. Changes are made in preprocessing based on the field values.

Be careful when offering unlimited options. Letting users create arbitrary layouts and color combinations can create chaos. You'll get layouts that look like they were thrown together by a toddler and color palettes that force people to look away in pity and disgust. Know your users. Know their needs and expectations.

You'll also want the best UX possible, with clear help text and guardrails to prevent a microsite from going rogue. Here are some patterns we have used in the past with Drupal.

This is an example of a color palette selector that allows any color to be picked but checks for accessibility. It also has sensible defaults to select if users don't want to get too granular.

Here is a font selector.

You can also let users control the layout for certain areas.

This option requires more developer effort upfront but puts more power into the hands of users. Once set up, any editor with proper permissions can create a new microsite without asking a developer for help. Only adding new predefined options will require the deployment of new code.

Content under a microsite

A microsite with only one page is called a landing page. We want to maintain the emphasis on the word site, which means we need a way to have additional content on our microsites. This is only a problem to solve for the third option above, where users can create microsites on their own. The system's criteria for switching the theme or adding additional classes can cascade to other pages. 

For example, if the blog is a microsite, then the criteria might be anything that matches the URL pattern of example.com/blog/*. That makes it easy enough for users to add content to the microsite. Just make sure the URL matches.

But for microsites controlled by the content itself, there is no established pattern or criteria. Users need a way to associate content with a microsite. Like anything else with Drupal, there are many ways to accomplish this. The simplest way is to have an entity reference field on content types that can be part of microsites. This field points to the parent microsite content. The Entity Reference Hierarchy module might also fit your needs.

A microsite implementation example

Let's take the example of a company with an extensive suite of products, and this company wants each product to have a hub on the website with its own unique look but remain within overall brand guidelines. They want to provide editors with flexibility but also with obvious guardrails. Throughout this example, we will refer to a product named Widget Prime. These are the content types and fields that might be implemented.

Product Home Page content type

This will be the main hub of a product and contain all the information that will cascade to other pages. 

Fields:

  • Submenu - a list of links
  • Layout - a dropdown list of predefined layout options.
  • Color Palette - a radio selection of predefined color palettes
  • Overview - a text field for a basic introduction on the product home page
  • Sidebar - a multivalue block field to reference blocks if the selected layout has a sidebar

Content for Product Home Pages

For our example, we'll have two content types that can be tied to a microsite: Blog posts and Videos. Each has an entity reference field named Product for selecting a Product Home Page node.

Bringing it all together

Based on the entity reference field, we can create a way to aggregate the most recent blog posts and display these in sections on the microsite home page. This can be done with an embedded View, or we can write a custom EntityQuery and inject the results into the template. This code collects all the blog posts for a product and inserts them into a variable called "feed" that will be available to templates.

/**
 * Implements hook_entity_view().
 */
function widget_prime_entity_view(array &$build, EntityInterface $entity, EntityViewDisplayInterface $display, $view_mode) {
  if ($entity->bundle() !== 'product') {
    return;
  }
  $storage = \Drupal::entityTypeManager()->getStorage('node');
  $query = $storage->getQuery();
  $query->accessCheck(TRUE)
    ->condition('type', 'blog')
    ->condition('status', NodeInterface::PUBLISHED)
    ->condition('field_product', $entity->id())
    ->range(0, 10)
    ->sort('created', 'DESC');
  $build['feed'] = [
    '#cache' => [
      'tags' => ['node_list'],
    ],
  ];
  // The query returns node IDs, so load the entities before rendering them.
  if ($nids = $query->execute()) {
    $view_builder = \Drupal::entityTypeManager()->getViewBuilder('node');
    $build['feed'] += array_map(
      function (NodeInterface $blog_post) use ($view_builder) {
        return $view_builder->view($blog_post, 'teaser');
      },
      $storage->loadMultiple($nids)
    );
  }
}

For blog posts and video child content, we'll load the referenced Product Home Page node to grab the submenu, color palette, and sidebar values so they can be inherited. We recommend creating a service, or series of services, that can be queried to get this information during preprocessing. This unifies the microsite and keeps things consistent.
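
A minimal sketch of such a service (class and field names hypothetical): it resolves the parent Product Home Page for a piece of child content so that preprocess hooks can copy over the inherited submenu, color palette, and sidebar values.

```php
<?php

namespace Drupal\widget_prime;

use Drupal\node\NodeInterface;

/**
 * Resolves the microsite (Product Home Page) a piece of content belongs to.
 */
class MicrositeResolver {

  /**
   * Returns the Product Home Page node for the given content, if any.
   */
  public function getProductHomePage(NodeInterface $node): ?NodeInterface {
    // A Product Home Page is its own microsite root.
    if ($node->bundle() === 'product') {
      return $node;
    }
    // Child content points at its parent via the entity reference field.
    if ($node->hasField('field_product') && !$node->get('field_product')->isEmpty()) {
      $parent = $node->get('field_product')->entity;
      return $parent instanceof NodeInterface ? $parent : NULL;
    }
    return NULL;
  }

}
```

During preprocessing, you would call this service and read the parent's field values to populate template variables.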

Keep in mind that this is just one example, and there are many other options. You could have a Hero field or a Logo field. You could have additional fields for promo items. You could also get fancy with permissions and have certain roles only be able to edit certain microsites and their child content. What you offer or what limitations you introduce will depend upon your unique needs and your team.

Some warnings when implementing microsites

Microsites can be addictive. If you implement them right, the flexibility and ease of use you give to your editors can give them a rush of empowerment that could go straight to their heads. And since it is better to give than to receive, you'll want to keep orchestrating these successes for others, desiring to give them more and more.

But temper this enthusiasm with some warnings and cold, hard reason.

Content associated with more than one microsite

Beware of the temptation that may come from stakeholders to have a single piece of content appear on more than one microsite, depending on the context. This gets complicated quickly and doesn't work as well as imagined. 

Which menu and theme should you display? Which content is the true parent? You can solve some of this contextually based on where a user clicks from, but that's a lot of potential work for minimal payoff. And what if a user shares a blog post on social media? You would need to pass around the microsite info in the URL or something equally cumbersome.

Relying on path matching means support requests

If you're relying on path matching to define your microsites and relying on your editors to get it right, sooner or later, you're going to deal with support requests. "Why is my content not styled to the microsite?" This is inevitable. Be prepared.

Use pathauto, but consider your path aliases carefully

At some point, you'll need to consider SEO and tidy up your URL paths to fit with the connected microsite. Random path patterns aren't good for UX and aren't good for SEO. If you have a microsite located at www.example.com/widget-prime, you'll want to do your best to keep microsite content consistent with some pattern like www.example.com/widget-prime/this-is-a-blog-post and not keep things at the root like www.example.com/this-is-a-blog-post.

Best to turn to pathauto early to help you manage this. But this can get complex fast. For content that can be associated with a microsite, will that association be optional or required? If it's optional, what should be the path alias when it's present versus not present? The pathauto and token modules provide options for entity references, as well as excluding empty values intelligently. For example, you'll be able to use something like [node:field_microsite:entity:title] in the path pattern.

Conclusion

Microsites are an excellent way to provide your editors with flexibility while keeping things consistently within brand guidelines. They get a unique look and feel to help differentiate their content without stretching your design system beyond its intended limits. As long as you do your user research and don't go overboard with options, everyone wins.

We've put together several microsite implementations, from complex to simple, helping organizations improve their marketing efforts and extend the usefulness of their content management system. Contact us for help discovering what would work best for you.

Dec 07 2022

Drupal 10 was released on December 14, 2022, with new features and new possibilities. Here's what you need to know.

Making Drupal beautiful with Claro and Olivero

Drupal 10 has two new default themes, and we are proud to say that we helped push these initiatives forward so that they would be ready and stable for this release.

Olivero, the new front-end theme, has a modern design that accommodates common features like secondary navigation, media embeds, and layout builder. It is also WCAG AA conformant. Learn more about the design process for Olivero.

Claro, the new administration theme, came out of Drupal's Admin UI & JavaScript Modernisation initiative. Claro refreshes the administration UI with an up-to-date look and feel and brings in well-known UI patterns used on the web. It also has a focus on being accessible to as many users as possible. In the future, you can expect Claro to focus on UX improvements for each persona who has a hand in managing a Drupal website.

Theme starter kit

Currently, if you want to create a custom theme, you can use a core or contrib theme as a base theme and inherit all of its code and functionality. This works great…unless you want to modify or remove parts of the base theme. Drupal 10 comes with a theme generation tool that copies all of a theme's files and performs some string replacement to create the same theme under a different name.

You can use this tool to copy the new Starterkit theme and get a new theme with basic templates and CSS. This barebones theme provides useful scaffolding, ready to be customized. If you want to streamline theme development further with different templates and defaults, you can create your own "starter" theme and use that as a base from which to copy.
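
The generator ships as a script inside Drupal core. A typical invocation, run from the Drupal root, looks like this (the theme name is illustrative):

```shell
php core/scripts/drupal generate-theme my_theme --name="My Theme"
```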

More upcoming features

The plan is to make Automatic Updates and Project Browser a part of Drupal Core during Drupal 10's lifecycle. These features help match user expectations for using a modern content management system. 

Automatic Updates lets you update Drupal Core with the push of a button, but only patch-level changes. You won't be able to use it to upgrade to Drupal 11, for example, but you will be able to use it to update to new minor versions of Drupal 10. Listen to our podcast with three of the people working to make Automatic Updates a reality.

Project Browser lets users search for modules and themes and install them without leaving their Drupal site, even if the site uses Composer to manage packages and dependencies.

Upgrading from Drupal 9

Drupal 9 will reach its end of life in November 2023, which matches the end of life for Symfony 4. This gives you almost a year to plan your upgrade. While upgrading to Drupal 10 shouldn't be a huge endeavor, you'll want to pay attention to the modules you use and your hosting infrastructure. Both can cause unexpected snags. Here is what to look out for:

  • Contributed modules - make sure the contributed modules you have installed are ready for Drupal 10. If they aren't, consider working with the maintainers to help create and test patches. Acquia has a running list of modules and their Drupal 10 readiness status.
  • Custom modules - make sure your custom code is not using any deprecated functionality.
  • Hosting infrastructure - Drupal 10 will require PHP 8.1. Drupal 9 requires PHP 7.3, so you might have some system administration work to do.

Porting your code to Drupal 10

Drupal 9.5 and Drupal 10 will be nearly identical, but Drupal 9.5 will still have all deprecated functionality remaining. This means you can update to Drupal 9.5, update your code to remove any use of deprecated functionality, and your codebase will then be ready for Drupal 10.

After you have worked through all the fixes, your codebase will be ready to upgrade to Drupal 10. You won't have much to do here if you have kept up with Drupal 9 deprecations as they have been updated.

Moving to CKEditor 5

Drupal 9 uses CKEditor 4 as its default rich text editor, but Drupal 10 will use CKEditor 5. This latest version of CKEditor is a complete rewrite with a modern JavaScript architecture and an improved UI. You'll have access to several new features.

But since it is a rewrite, CKEditor 4 plugins are not compatible. If you only used its out-of-the-box Drupal features, then moving to CKEditor 5 is simple. After updating to Drupal 9.5, switch your text formats to use CKEditor 5, which will trigger a semi-automatic upgrade path.

If you want to stay on CKEditor 4 for a while, install the CKEditor 4 module before moving to Drupal 10. This can buy you some time if you have some custom CKEditor 4 plugins you need to migrate because all custom JavaScript code will need to be rewritten. Like Drupal 9, CKEditor 4 becomes end-of-life near the end of 2023, so the sooner you start, the better.

Time to get prepared

A year goes by faster than you think. It's almost the end of 2022 already, and many people will be surprised despite the previous eleven months of hints, clues, and warnings. Time doesn't crawl; it gallops at a thundering pace. Drupal 9's end of life will be here before you know it. Start planning for Drupal 10 now.

Still on Drupal 7? Find out what your options are, including the possibility of migrating to Drupal 10.

Already feel behind? Maybe a little overwhelmed? Maybe your development has been focusing on new features and enhancements, and you don't want to grind their progress to a halt so you can do updates. Either way, we can help. Our support and maintenance services prioritize long-term maintenance by creating automated end-to-end testing and automated package updaters that keep your website updated, stable, and secure. When your website stays current with the latest Drupal core and contrib releases, upgrading to Drupal 10 (and beyond) becomes much easier.

Oct 20 2022

A group of Lullabots get together to swap scary stories from their professional pasts. Sometimes computers are hard and result in some scary situations.

We're pretty sure that ghosts and goblins weren't ever to blame, but we can't be too sure.

Happy Halloween!

Sep 30 2022

We’re so excited to share about a program that we've been participating in at Lullabot – Healthy Minds @Work. This is a science-based app and program shown to help strengthen your well-being skills. The Healthy Minds program was built by neuroscientists and is designed to teach and measure skills associated with emotional well-being using meditation and other contemplative practices.

A group of Bots discusses the program and the 30-Day Challenge that's been going on inside Lullabot.

Sep 21 2022

When Microsoft killed off Internet Explorer in June of 2022, front-end developers breathed a sigh of relief. IE's death meant that the development of lots of exciting features in CSS could now be fully supported across all major browsers (Chrome, Firefox, Safari, Edge) without worrying about IE compatibility.

So on this joyous occasion, we collected the thoughts of Lullabot's front-end developers and compiled some of the great new-ish features in CSS that we are thankful for. We also had an airing of grievances about some of the (many) things the CSS spec still needs.

Most front-end developers are familiar with the transform property - it has lots of use cases for minor little tweaks (like a diagonal ribbon of text) or things like scaling an image on hover. Until recently, though, it could get a bit cumbersome to use because multiple transformations had to be included in one line.

If you wanted to change just one transformation (for instance, the rotate property on a :hover state), you had to include all of the other properties as well.

However, as of Chrome 104 (released August 2022), these three properties - translate, rotate, and scale can now be listed as individual properties, meaning less code and an easier life.
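As a sketch of the difference (the class name here is hypothetical), the old shorthand forces you to restate every transformation, while the new individual properties let you override just one:

```css
/* Old way: the whole transform must be repeated to change one part */
.badge {
  transform: translateX(10px) rotate(0deg) scale(1.2);
}
.badge:hover {
  transform: translateX(10px) rotate(45deg) scale(1.2);
}

/* New way: each transformation is its own property */
.badge {
  translate: 10px 0;
  scale: 1.2;
}
.badge:hover {
  rotate: 45deg; /* only the property that changes needs restating */
}
```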

Chrome was the last to adopt this, so now it can be widely used across all browsers. Here is a CodePen to demonstrate:

You can learn more from the Chrome team's blog post, but this is a welcome change for anyone working with animations.

Have you ever had to apply the same styles to multiple HTML elements? Has it made your code long, convoluted, and hard to read? Well, no longer, now that :is() and :where() are supported in all major browsers! Previously, you had to repeat the parent selector for each element:

article h1,
article h2,
article h3 {
  color: rebeccapurple;
}

With :is() and :where(), you can now write the same code this way:

article :where(h1, h2, h3) {
  color: rebeccapurple;
}

What's the difference between :is() and :where()? :where() has no specificity, while :is() takes on the specificity of its most specific argument. In the next example, article h1 will be #ccc.

article h1 {
  color: #ccc;
}

article :where(h1, h2, h3) {
  color: #000;
}

But in this example article h1 will be #000.

article h1 {
  color: #ccc;
}

article :is(h1, h2, h3) {
  color: #000;
}

MDN dives deep on :is() and compares it to :where() here if you need more clarification.

:not() pseudo selector

Supported in all major browsers, :not() adds the ability to select elements that do not match a particular selector and takes on the specificity of its most specific selector. The :not() selector operates in the opposite way of :is(), which selects all elements of a particular selector. 

With :not(), it's possible to add a margin to the bottom of every list item except for the last. In the past, a developer would write:

li {
  margin-bottom: 10px;
}

li:last-child {
  margin-bottom: 0;
}

Now, this same block of code can be written this way, thus shortening the amount of code written:

li:not(:last-child) {
  margin-bottom: 10px;
}

:not() has been a welcome addition to the CSS landscape. Learn more about :not() over at MDN.

CSS logical properties

While not exactly "new" (fully supported since 2017), CSS logical properties have started to become the de-facto choice for front-end developers looking to make their code flexible for all languages and layouts.

CSS logical properties are a CSS module that can manipulate a page layout through logical dimension and direction rather than physical dimension and direction. That means we use margin-inline-start rather than margin-left when setting the start of a margin.

CSS logical properties are centered around the block and inline axes that come from a page's writing direction. Inline is the direction characters naturally flow, and block is the direction that text wraps. These properties allow developers to lay out a page relative to the content it displays rather than the strict definition of left, right, top, and bottom.

For instance, say you are building a multi-lingual site and want to have a page of content and a sidebar with useful links. With languages that read left-to-right (LTR), you would likely want the sidebar to be on the right-hand side and the content on the left-hand side of the page, as that is how LTR content should be read. You could accomplish this using non-logical properties:

.content {
  margin-right: 20px;
  margin-bottom: 30px;
  padding: 15px 20px;
}
.sidebar {
  border-left: 1px solid #000;
  padding-left: 10px;
}

But how would you approach this for languages that read right-to-left (RTL)? You would have to duplicate your code, change the direction for the padding and margin properties, and possibly add additional classes or some other control further up the style sheet:

.content-rtl {
  margin-left: 20px;
  margin-bottom: 30px;
  padding: 15px 20px;
}
.sidebar-rtl {
  border-right: 1px solid #000;
  padding-right: 10px;
}

However, if you use CSS logical properties, you can limit your code to just two selectors and handle both layouts simultaneously.

.content {
  margin-inline-end: 20px;
  margin-block-end: 30px;
  padding-block: 15px;
  padding-inline: 20px;
}
.sidebar {
  border-inline-start: 1px solid #000;
  padding-inline-start: 10px;
}

The user's preferred language is built into browsers, so even before your CSS is rendered, the browser knows in which direction the user prefers the text to be read. By utilizing border-inline-start, for instance, the border is placed at the inline start of the .sidebar element regardless of direction: for visitors preferring LTR text, it appears on the left-hand side of the sidebar, while for RTL visitors, it appears on the right, because that is where the sidebar begins for them.

We strive to make sites accessible for as broad an audience as possible, and CSS logical properties help us do that. We even published an architectural design record (ADR) about logical properties to help accomplish this. You can find a great reference guide about CSS logical properties over at MDN if you want to implement it in your projects.

CSS line clamp

-webkit-line-clamp is a sneaky little trick that makes displaying teaser text a much more manageable task. 

Originally part of Chrome way back in 2011, this "one little trick jQuery plugin authors can't stand" will let you create a four-line, ellipsis-ending paragraph that works in all modern browsers:

.my-text-block {
  display: -webkit-box;
  text-overflow: ellipsis;
  -webkit-line-clamp: 4;
  -webkit-box-orient: vertical;
  overflow: hidden;
} 

Yep, you read that right. Even Firefox will accept and render the -webkit prefixes on all of these properties. Though not officially part of the CSS spec, there was such widespread support for this feature that by 2019, it worked everywhere (except IE, of course). A push is underway to add a line-clamp property to CSS, but there will still be backward support for -webkit-line-clamp included if this happens. So go ahead and clamp those lines! More information can be found at MDN.

CSS features we still need

Subgrid

CSS grid has been part of a web developer's toolkit since 2017, but what would make grid even more powerful? The ability to align elements within a grid item to the grid itself. Currently, this is not possible; without subgrid, developers resort to adding margins, padding, and other methods to align items just so.

Subgrid will ease the pain of many web developers who have had to create a grid within a grid. With subgrid, everything within a grid element aligns perfectly by setting grid-template-rows: subgrid on whatever grid item you want to use the same grid as its parent. (This works for grid-template-columns as well.)

You can see subgrid in action if you view the following Codepen in Safari 16 or Firefox. Card 2's content is longer than Card 1 and Card 3, and ideally, we'd want the learn more link to align on the same grid line on all three cards. In a browser that supports subgrid, you can see that enabling subgrid allows the contents inside the card to be aligned to the same grid as the cards themselves.
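A minimal sketch of the card layout described above (selectors hypothetical) might look like this:

```css
/* Parent grid defines shared rows for title, body, and link */
.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-template-rows: auto 1fr auto;
}

/* Each card spans those rows and reuses them for its own children */
.card {
  display: grid;
  grid-row: span 3;
  grid-template-rows: subgrid;
}
```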

As of September 2022, subgrid is supported in Firefox and the newest version of Safari.

Container queries

Media queries for responsive design opened many doors for new and creative website layouts. Still, developers have also wanted more fine-grained control over when the size of an element should change. Enter container queries, which will allow the developer to add breakpoints at the container level.

To use container queries, first, define your container.

.my-fantastic-container {
  container-type: inline-size;
  container-name: main-container;
}

Now, say you want to resize a card within the container at a certain breakpoint. To do this, add the breakpoint this way:

@container main-container (min-width: 768px) {
  .card {
    width: 50%;
  }
}

The card will now display at 50% width when .my-fantastic-container is 768px or larger.

Container queries will be supported in the upcoming Safari 16 release and in Chrome 106, which will be released later in 2022.

:has() pseudo selector improvements

There has been a lot of progress on :has() as of mid-2022, with Safari and Chrome (behind a flag) supporting it. But Firefox, Edge, and Chrome (without a flag) don't support it.

If unfamiliar, the :has() pseudo-class allows you to target something that…well, has a certain other element.

For instance, you can target all p elements that have an img within them like this:

p:has(img)

Which is incredibly useful! However, browsers do not yet fully support every selector you can place inside :has().

Say you wanted any unordered list with at least ten items to display in two columns. Code like this would be great if it worked:

ul:has(li:nth-child(10)) {
  column-count: 2;
}

This same layout should be doable with :has(), but browser limitations prevent it. Here's a CodePen to demonstrate that will, for now, only work in Safari and Chrome.

In Safari and Chrome, you get two neat columns of US states, but it's just a long list everywhere else. That's because the nth-child selector is ignored within the :has(). This might be an edge case, but without this support, you can't utilize all that :has() can offer. We hope that once this is fully supported, :has() will continue to evolve to be more flexible and will someday be more useful.

A real CSS property for screen readers

Accessibility is more important than ever and is a core focus for all our projects. The web is for everyone, and just as we strive to give the same experience to all users and their abilities, we should hope for the same thing from the code we write.

Unfortunately, CSS is still lacking a lot in this department. Perhaps the most glaring example is how to hide a block of text (visually) that is still accessible to screen readers. Currently, the only solid way to do something like this is with this amalgamation of CSS properties below, which many developers often shove into a .visually-hidden class.

.hidden-text-block {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  clip-path: inset(50%);
  border: 0;
}

There has to be a better way to do this. Why couldn't we leverage the idea of .visually-hidden and turn it into its own property?

.hidden-text-block {
  visually-hidden: true;
}

There doesn't seem to be a lot of progress within the CSS Working Group on this. Still, we hope that as accessibility continues to move to the forefront of working with CSS, there will be more significant movement on this issue.

The color-contrast() function

Continuing along the accessibility track, the proposed color-contrast() function takes a base color, compares it against a list of candidate colors, and resolves to the candidate with the highest contrast.

For instance, this compares #404040 and #5a5a5a against a base color of #111 and then picks the higher-contrast one.

.header-color {
  color: color-contrast(#111 vs #404040, #5a5a5a);
}

You could see how this could work really well with a set of :root color variables versus a variety of background colors, giving the ability to keep colors accessible without any rewritten styles or other JS plugins.

:root {
  --primary-color: blue;
  --secondary-color: green;
  --main-bg: #4a4a4a;
  --sidebar-bg: #32a3fa;
}

.text-color {
  color: color-contrast(var(--main-bg) vs var(--sidebar-bg), var(--primary-color));
}

Unfortunately, this function is still experimental, and as of 2022, no browser supports it. Let's hope for some progress on this requested feature from the CSS WG. You can learn more about it from their draft spec.

Change non-animatable properties at the beginning/end of an animation

CSS animations are amazing. A quick browse of CodePen reveals some of the absolute jaw-dropping things that brilliant front-end developers can do with CSS alone. But that doesn't mean there isn't room for improvement.

For instance, you can use @keyframes to control the steps of an animation very easily, and a common method is to fade something from visible to invisible (or vice versa) over a series of frames. This transition works great as a visual presentation, but the effect is not the same for screen readers.

The problem is that a normal show/hide CSS property, such as display: none or visibility: hidden, isn't honored at the start or end of an animation.

.ghost-kitty {
  inline-size: 2rem;
  block-size: 4rem;
  background: url("../images/ghost_cat.jpg") no-repeat center;
  animation: ghost-kitty 5s infinite;
}

@keyframes ghost-kitty {
  0% {
    opacity: 100%;
  }
  25% {
    opacity: 75%;
  }
  
  50% {
    opacity: 50%;
  }
  
  75% {
    opacity: 25%;
  }
  
  100% {
    opacity: 0%;
    display: none;
  }
}

Using the example above, we want our .ghost-kitty to disappear for good once it hits the display: none portion of the keyframes. But display is not animatable, so that declaration is ignored, and the element remains visible to screen readers the whole time, which defeats the purpose of the animation. And because opacity, which is commonly used in animations, does not hide content from screen readers, you are stuck without a CSS solution to truly hide or show something for all audiences.

It's possible that as :has() and :is() and :where() grow in usage, there might be some weird hack to make this work (like line-clamp), but having a useful way to truly control non-animatable properties within an animation would be nice if only to put less JavaScript out there in the world!

text-wrap: balance

Discussed as part of the Text 4 spec for CSS, this would work the same way as wrap for an inline box, where it would try to eliminate as much empty space as possible - without increasing the number of lines - by considering the amount of remaining space within the block container.

In practice, it would basically wrap text so that each line has (approximately) the same length. So with text-wrap: balance, this:

Checkout this really long line of text, boy does it go on for a while huh? I wonder
when it will end? Who knows, but it sure is long!

would become

Checkout this really long line of text, boy does it go on for a while 
huh? I wonder when it will end? Who knows, but it sure is long!

The best part is that this would work with fluid columns when the length of a column is unknown, so this would work wonders with grid and other responsive design elements.
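If it lands as proposed, usage would be a single declaration (speculative, since no browser had shipped it at the time of writing):

```css
.teaser-heading {
  text-wrap: balance; /* proposed: distribute text evenly across lines */
}
```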

text-wrap: balance is just a requested spec for now, but let's keep our fingers crossed that it makes it in sooner rather than later!

Break up box-shadow so its styles can be set separately

This feature is more of a want-to-have than a need, but it would be nice to declare each box-shadow value independently, much like being able to declare a border either by shorthand, such as border: 1px solid rebeccapurple, or individually, like:

border-width: 1px;
border-style: solid;
border-color: rebeccapurple;

The ability to declare box-shadow values like this would be beneficial:

box-shadow-color: rebeccapurple;
box-shadow-offset-x: 2px;

// Comments like this

Seriously, why are we stuck with /* */ as our only option?

Conclusion

We may never get every little feature we want in CSS. Still, the continuing evolution of it as a language and the new features from the last few years are enough to excite almost any front-end developer. 

To keep up with the latest and greatest and what to get excited about next, check out these resources:

  • The ShopTalk Show podcast: Hear Jen Simmons talk about what's coming to CSS, including :has(), container queries, and more! (Note that this episode is more Safari-centric. Still, we can get excited about these things coming to all browsers!)
  • The Syntax podcast: There's a recent (as of August 2022) episode about upcoming CSS proposals that may or may not see the light of day, such as @when and carets. Intrigued? Give this episode a listen! They also have an episode on :has() and about new viewport units, which is worth your time.
  • The CSS Weekly newsletter: A compilation of some of the best articles written by many experts in the community, including Sara Soueidan, Michelle Barker, Jen Simmons, Una Kravets, Jhey Tompkins, Adam Argyle, and more! Subscribe to keep up with the latest and greatest in the CSS world, plus information about potential new CSS changes and features coming down the pipeline.
  • The CSS Working Group: If you want to get into the weeds with what's coming down the road for CSS, check out the CSS working group's blog or the GitHub issue list
Sep 07 2022


Understanding how the language and the browser interact is important if you want to get the most out of JavaScript and save yourself a bunch of future headaches. We'll cover all the basics about browser events and point you toward other resources so you can dive deeper.

JavaScript vs. the browser

JavaScript is a compact programming language that is single-threaded. That means all the action happens in the same pipeline. In operating systems and some other programming languages, you can do multiple things at once, but JavaScript can only do one thing at a time.

Because of that, JavaScript code can be described as blocking. Only one thing can happen, and the next thing JavaScript wants to do can't start until the first thing ends.
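A minimal sketch of blocking behavior (blockFor is a hypothetical helper, not a built-in):

```javascript
// Busy-wait for `ms` milliseconds; while this loop runs, the single
// JavaScript thread can do nothing else.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {}
}

const started = Date.now();
blockFor(20); // everything below must wait until this returns
const elapsed = Date.now() - started;
// elapsed is at least 20ms: the next statement could not start earlier
```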

JavaScript on our websites runs in the browser's JavaScript engine and will do things like arithmetic and logical flow (if blocks, for loops, while loops), and it has some built-in data types like strings, numbers, objects, and arrays.

On the other hand, you have browser APIs. The browser API is an extensive and growing set of capabilities the browser has built-in. Some examples:

  • DOM - the Document Object Model and all the methods that go along with it to allow manipulation of HTML.
  • CSSOM - the CSS Object Model which allows manipulation of CSS
  • Observers (like intersection observer and resize observer)
  • Fetch - an interface for fetching resources 
  • History - allows the navigation and manipulation of the contents of a browser's history

All of these APIs are part of the browser.

The event loop

The event loop is how JavaScript and the browser interact.

The JavaScript call stack

Remember that JavaScript is blocking and single-threaded. The call stack is how functions are organized, and a second task can't begin until the first task ends. This order remains true…unless the first task calls another function.

If a function is called during the execution of a task, it gets placed on top of the call stack. If we call another function within that function, it will also be placed on top of the stack. Since it's a stack, we can only pull from the top of it, so we can't pull things off the bottom of the stack to execute until those top pieces are resolved. This way of dealing with tasks is an approach known as last-in-first-out or LIFO.

When we describe code as "blocking," this is the source of that problem. Functions running in the call stack prevent other tasks from being completed.
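The stack behavior above can be traced with a small sketch (function names hypothetical):

```javascript
// outer() cannot finish until inner(), placed on top of the stack, returns.
const trace = [];
function inner() { trace.push("inner"); }
function outer() {
  trace.push("outer start");
  inner();
  trace.push("outer end");
}
outer();
// trace: ["outer start", "inner", "outer end"]
```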

The callback queue

To avoid blocking other functions, JavaScript can offload work to Web APIs. If you want to make an AJAX call or set a timeout, you can send that information to the Web API to do all that work. When the API has done its work, if it has a callback function to run, it goes into the Callback Queue. Once JavaScript has run everything in its call stack, it pulls the next task from the callback queue, runs that to completion, then grabs the next thing in the queue.
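You can see the queue in action with a zero-delay timeout: even at 0ms, the callback waits until the call stack is empty (a minimal sketch):

```javascript
const log = [];
log.push("script start");
// The timer is handled by the Web API; its callback goes into the
// callback queue and runs only after the current call stack is done.
setTimeout(() => log.push("timeout callback"), 0);
log.push("script end");
// Synchronously, log is ["script start", "script end"];
// "timeout callback" is appended only after the stack clears.
```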

If you want to dive into this deeper, you can watch this video about the Event Loop. It demos a website called Loupe, a simulated event loop in the browser.

The event loop leads us to another way JavaScript offloads work to a browser API: events.

Browser events

To reiterate, the Event interface is a Web API. Events are a browser thing, not a JavaScript thing. The browser has events and kicks them off, and JavaScript is given the ability to respond to those events. Events allow the browser to kick off programmatic responses to interactions, changes, and other significant happenings.

Events can be triggered by user interaction, browser behavior, and your code. User interaction events are triggered by things like mouse clicks, buttons pressed on the keyboard, and elements gaining or losing focus. Browser behavior events happen for things like loading and unloading resources and changes in device orientation. Finally, you can artificially set off events from your code with the .dispatchEvent() method.

View the full reference for available events.
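For example, dispatching an event from code might look like this (EventTarget and Event are globals in browsers and in Node 15+, so no DOM element is needed for the sketch):

```javascript
const target = new EventTarget();
let received = "";
// Listen for a custom "ping" event...
target.addEventListener("ping", (event) => { received = event.type; });
// ...and set it off artificially from code
target.dispatchEvent(new Event("ping"));
// received is now "ping"
```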

Handling events in JavaScript

So events happen on the browser side. Where does JavaScript come in?

To react to events, you’ll use EventTarget.addEventListener(eventtype, callback).

An EventTarget is, for the most part, any element on your page. The class is broader than that, but for our purposes, this is enough. We can say "select my button" or "select my paragraph" and add an event listener to it.

That short line of code is saying: "Hey browser, when 'EventTarget' has an 'eventtype' event, can you run the function 'callback'?"

Let's walk through what happens when someone visits your page:

  • Server/Cache delivers a raw .html file.
  • Browser parses that HTML into the DOM.
    • This is where inline JS runs.
  • Browser performs prefetches.
  • Browser does layout/paint operations.
  • Browser fetches remaining resources.
    • This is where external JS runs. If you have put your code in an external file, this is when your JavaScript code is loaded and run.
  • Running JS calls addEventListener. Let's say you want to react to a button click.
  • Browser stores callback function.
    • An indeterminate amount of time passes. Right now, everything is sitting over in the Web API.
  • The specific event happens on the specified target (the button click).
  • The browser drops the callback function into the callback queue.

What if there are multiple events?

When a click happens on an element, you have mousedown, mouseup, and click events all happening in quick succession, all on the same thing. How do we know things happen in the correct order?

The browser fires off events in a predictable pattern that happens in up to three phases: capture phase, at target, and bubble phase. Think of event phases like a ball bouncing. The ball falls, makes contact with the ground, and then rebounds.

Before going into more detail, it will help to distinguish how different types of people perceive a web page versus how the browser perceives it. Designers might see components coupled in layouts. Content strategists might see headings, articles, and footnotes. Developers might see sets of nested elements containing text and the abstracted code behind them.

The browser doesn't care about any of that. It cares not about content or presentation. It only cares about the DOM. The browser stores its knowledge of the page's elements like a flow chart in a tree structure, relating each node in the tree to every other node. Events always start at the top of the tree and work their way down toward the deepest event target.

Imagine a click event on the anchor tag element near the bottom of the diagram. The event starts at the top and digs deeper.

Once the deepest possible element has been reached, the event fires from the "target" element. In this case, "target" means what the browser was aiming for, not necessarily the user/developer. If a user is aiming for one thing and misses, the target is whatever they actually clicked on.

For some events, you have a bubbling phase. It retraces its path back up through the DOM toward the root. Not all events bubble. There's not a handy way that we've found to remember which events bubble and which ones do not. You'll need to check the MDN documentation.

It's important to note the event itself "moves" rather than having a cascade of events. Instead of having a series of events fire, one per DOM node as the browser works down towards the target, the browser has a single event that is "retargeted" on each DOM node in succession. Because of this, it is important to react quickly in any event listeners if you need to alter or stop the event before the browser retargets the event to the next DOM node.

Why does all of this matter?

When you add an event listener, you can tell the browser which phase to invoke your callback function on. Inside your callback function, you can alter how the browser handles the event by:

  • Preventing default behavior
  • Preventing the event from continuing its path along the DOM
  • Preventing the event from invoking any additional callbacks on the current target, if you have things in the correct order

Adding event listeners

Let's look at some code. If we want an event listener, we first need to get a reference to the element that is our event target. In this case, it will be the first button on the page.

const button = document.querySelector('button');

We have three ways to set a callback on a click event. The first way has a named function declared elsewhere.

function myCallback(event) {
  console.log(event);
}

button.addEventListener('click', myCallback);

The second way passes an anonymous function in the older syntax.

button.addEventListener('click', function (evt) { console.log(evt); });

The final way is with an arrow function.

button.addEventListener('click', (e) => { console.log(e); });

While these examples all do the same thing, they will be treated as three different event listeners. If we run this code, clicking the button would result in 3 identical console log statements.

The default behavior for events:

  • Event listeners will always fire on the target phase.
  • Event listeners will fire on the bubble phase by default on events that bubble. The event will go down the DOM until it hits the target, but event listeners on the target's parent will not fire until the bubble phase, when the event retraces its path.
  • Multiple event listeners on the same target will be triggered in the order they were attached.
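The attachment-order rule can be demonstrated on a bare EventTarget (a sketch; in a real page the target would be a DOM element):

```javascript
const order = [];
const target = new EventTarget();
// Attached first, so it fires first
target.addEventListener("go", () => order.push("first"));
target.addEventListener("go", () => order.push("second"));
target.dispatchEvent(new Event("go"));
// order: ["first", "second"]
```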

Modifying default behavior

Event listeners can be set to run on capture. As the event passes an element, it triggers the callback function before the event gets to the target.

.addEventListener('click', callback, true);

.addEventListener('click', callback, { capture: true });

Event listeners can also be set to remove themselves after firing.

.addEventListener('click', callback, { once: true });

Responding to events

All events have these properties/methods.

Event = {
  bubbles: boolean,
  cancelable: boolean,
  currentTarget: EventTarget,
  preventDefault: function,
  stopPropagation: function,
  stopImmediatePropagation: function,
  target: EventTarget,
  type: string,
}

The currentTarget property points to the current event target as the event moves through the DOM and will change for the event as it is retargeted. The target property is the event's intended target. The type will be the name of the event, like "mousedown" or "click."

Some events that would normally trigger a default browser action can have that action prevented by calling the preventDefault() method. For example, if you have a link with a click event listener, you can prevent the browser from navigating away from the current page. While this is not a great idea for accessibility purposes, it is a common enough occurrence to serve as an example.

function myCallback(event) {
  event.preventDefault();
}
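Note that preventDefault() only has an effect on events created as cancelable. A small sketch, again with a bare EventTarget so it runs anywhere:

```javascript
const target = new EventTarget();
target.addEventListener('click', (event) => event.preventDefault());

// Browser-generated click events are cancelable; we set the flag manually here.
const event = new Event('click', { cancelable: true });
const notCanceled = target.dispatchEvent(event);

console.log(event.defaultPrevented); // true
console.log(notCanceled); // false: dispatchEvent returns false when the default was prevented
```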

Events that might trigger additional callbacks on elements further along the path can be halted with the stopPropagation() method. Calling it will stop the event from being retargeted further. Stopping propagation is a valuable tool to manage events that might otherwise conflict or overlap. For example, you might have click event listeners on both a parent element and a button inside it, and want to handle one set of clicks but not the other.

function myCallback(event) {
  event.stopPropagation();
}

With stopImmediatePropagation(), you can also stop any additional callbacks on the current element that might come after the current callback. Calling this method is not common and requires careful consideration of the order in which listeners are added.

function myCallback(event) {
  event.stopImmediatePropagation();
}
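A runnable sketch of the effect: the second listener below never fires because the first one calls stopImmediatePropagation().

```javascript
const target = new EventTarget();
const calls = [];

target.addEventListener('ping', (event) => {
  calls.push('first');
  event.stopImmediatePropagation(); // halts any remaining listeners on this target
});
target.addEventListener('ping', () => calls.push('second'));

target.dispatchEvent(new Event('ping'));
console.log(calls); // ['first']
```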

Certain event types have additional relevant properties. Click events get the coordinates and modifier keys. Keyboard events get keycodes. Focus events get the element that the focus moved from/to.

Removing event listeners

You can remove event listeners if you only want to listen for a limited amount of time or a specified number of times. The trick is to have a reference to the exact same function in the add and remove methods. You must have named callback functions if you want to do this. Anonymous functions will not work, even if they are character-for-character matches.

The following code works:

function myCallback(event) {
  console.log(event);
}

button.addEventListener('click', myCallback);
button.removeEventListener('click', myCallback);

These examples do not work:

button.addEventListener('click', function(evt) { console.log(evt) });
button.removeEventListener('click', function(evt) { console.log(evt) });

button.addEventListener('click', (e) => { console.log(e) });
button.removeEventListener('click', (e) => { console.log(e) });
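You can verify both behaviors with a counter; this sketch uses a bare EventTarget so it runs in Node.js as well as the browser:

```javascript
const target = new EventTarget();
let count = 0;

function myCallback() { count++; }

target.addEventListener('ping', myCallback);
target.dispatchEvent(new Event('ping'));        // count is now 1

target.removeEventListener('ping', myCallback); // same reference: removal works
target.dispatchEvent(new Event('ping'));        // count is still 1

target.addEventListener('ping', () => count++);
target.removeEventListener('ping', () => count++); // different function object: no effect
target.dispatchEvent(new Event('ping'));        // count is now 2

console.log(count); // 2
```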

Discovering and debugging event listeners

All major browsers have a way to discover event listeners in their devtools. Because devtools are specific to each browser's implementation, the interface looks different in each. The following image is what it looks like in Chrome.

With Safari, you need to look at the Node section.

Firefox puts a little event badge in the inspector, which will show you all of the event listeners on that one node.

Conclusion

Reacting to browser events in JavaScript is one of the fundamental ways to start building more reactive websites and rich web applications. This article has been a primer on the fundamentals that should save you some headaches down the road. Here are some additional resources:

Aug 18 2022

Keeping a Drupal site up-to-date can be tricky and time consuming. Host Matt Kleve sits down with three people in the Drupal community who have been working to make that process easier and faster.

It's been in progress for a while, but now you might be able to start using Automatic Updates on your Drupal site.

Aug 09 2022

One of the nice things about front-end web development as a career choice is that the software and coding languages are available on every modern machine. It doesn’t matter what operating system or how powerful your machine is. HTML, CSS, and JavaScript will run pretty well on it. This lets a lot of us in the industry bypass the need for formal education in computer science.

Unfortunately, this also has the side effect of leaving little gaps in our knowledge here and there, especially in strategies like bitmasking, which are seldom used in web development.

What is a bitmask?

Bitmasking is a strategy that can be used to store multiple true-or-false values together as a single variable. Depending on the language, this can be more memory efficient and also opens up the doors to some special operators that we’ll look at later. This trick takes advantage of two simple facts:

  • Humans and computers look at numbers differently.
  • The way computers think about numbers is identical to how they think about true and false.

Humans typically think of numbers in a decimal, or base-10, system. We have 10 unique digits of 0-9, and when we want to count beyond 9, we create new columns as needed to symbolize how many multiples of ten, one hundred, one thousand, (the powers of ten), and so on we need. Computers, on the other hand, look at numbers in a binary or base-2 system. Computers have 2 unique digits, 0 or 1, and when they need to count beyond that, they create new columns to symbolize how many multiples of 2, 4, or 8 (the powers of two) they need. While we think of numbers differently, the values of integers are ultimately identical, and the computer stores all numbers as binary values. Each individual binary digit is a bit of information.

English   Base-10   Base-2
Zero      0         0
One       1         1
Two       2         10
Three     3         11
Four      4         100
Five      5         101
Six       6         110
Seven     7         111
Eight     8         1000
Nine      9         1001
Ten       10        1010

When we combine the fact that all our numbers are converted to binary for storage with the fact that boolean true/false values are also stored as a 1 or 0, respectively, we can see how we could easily store a group of boolean values as a single integer. All we have to do is make sure each value we care about is stored as a separate power of two.
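For example, a hypothetical set of text-style flags could be packed into one integer; the flag names here are invented purely for illustration:

```javascript
// Each flag gets its own power of two, which is to say its own bit.
const BOLD      = 1; // binary 001
const ITALIC    = 2; // binary 010
const UNDERLINE = 4; // binary 100

// Bitwise OR combines flags into a single integer.
let style = BOLD | UNDERLINE; // 5, binary 101

// Bitwise AND tests whether a particular bit is set.
console.log(Boolean(style & BOLD));      // true
console.log(Boolean(style & ITALIC));    // false
console.log(Boolean(style & UNDERLINE)); // true
```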

Where might I see this?

A great example of this in front-end development is the Node.compareDocumentPosition method. This method compares the relative positioning of two nodes on a page and returns a bitmask describing the comparison. There are six possible flags in the result of a.compareDocumentPosition(b):

  • Disconnected - These nodes are not in the same document tree (for example, one node is in a web component’s shadow DOM)
  • Preceding - Node a follows node b in the document tree.
  • Following - Node b follows node a in the document tree.
  • Contains - Node a is a descendant of node b.
  • Contained By - Node a is an ancestor of node b.
  • Implementation Specific - This rarely means anything to us and is an artifact of how the calculation is done within the browser.

The result of our comparison could yield any combination of those six values being true or false: 2⁶, or 64, unique combinations! The way we make sense of this, however, is to assign each value a bit. Since we have six values, we'll need six bits.

Disconnected              1    000001
Preceding                 2    000010
Following                 4    000100
Contains                  8    001000
Contained By              16   010000
Implementation Specific   32   100000

Now our 64 possible combinations can be numbered from 0 (all are false) to 63 (all are true). Of course, not all combinations are actually possible: a node can neither both precede and follow another, nor both contain and be contained by it. Nevertheless, when we examine the returned number bit by bit, we can tell exactly which values are true and which are false.
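To sketch that decoding step, here are the spec-defined flag values mirrored as plain constants (so the example runs outside a browser), applied to a hypothetical result where node b both follows node a and is contained by it:

```javascript
// The same values the DOM spec assigns to Node.DOCUMENT_POSITION_* constants.
const DISCONNECTED = 1, PRECEDING = 2, FOLLOWING = 4,
      CONTAINS = 8, CONTAINED_BY = 16, IMPLEMENTATION_SPECIFIC = 32;

// A hypothetical return value: b follows a AND b is contained by a.
const result = FOLLOWING | CONTAINED_BY; // 20, binary 010100

// Examine the result bit by bit.
console.log(Boolean(result & FOLLOWING));    // true
console.log(Boolean(result & CONTAINED_BY)); // true
console.log(Boolean(result & PRECEDING));    // false
```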

How do I use this?

One great use of this is in the focus-trapping logic in IBM’s Carbon Design System. Since we want to prevent focus from leaving the modal and instead loop it back into the element, we have a focusout event listener on the modal’s container element. When focus leaves an element within the modal, the focusout event bubbles up, and we’re able to see the event’s target element that just lost focus, as well as the event’s relatedTarget element that just gained focus. We can then compare the positioning of the relatedTarget to the modal container, and if the “contains” value is not true, we know we need to force focus back into the modal.

While we could split out the bits and do individual comparisons, JavaScript has bitwise operators designed specifically to compare two bitmasks and yield a third bitmask. These operators will compare each individual bit and then yield a 0 or a 1.

  • & will evaluate to 1 when both compared bits are 1.
  • | will evaluate to 1 when either compared bit is 1.
  • ^ will evaluate to 1 when one, but not both, of the compared bits is 1.

Try comparing 5 and 9 with each operator in this truth table to see bitwise calculations in action:

https://codepen.io/andy-blum/pen/ExEajpX
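For instance, here is what comparing 5 (binary 0101) and 9 (binary 1001) with each operator yields:

```javascript
const a = 0b0101; // 5
const b = 0b1001; // 9

console.log(a & b); // 1  (binary 0001: bits set in both)
console.log(a | b); // 13 (binary 1101: bits set in either)
console.log(a ^ b); // 12 (binary 1100: bits set in exactly one)
```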

Once we understand these comparison operations, we can use them within our code. We’ll start by creating the combination flags PRECEDING and FOLLOWING. These new flags combine the bitmasks provided by the Node object. In our use case, PRECEDING will indicate that the compared node’s tab order should be prior to the current node and FOLLOWING will indicate the opposite. We’ll also create a bitmask WITHIN that will be easier to read in our code later. 

const PRECEDING = Node.DOCUMENT_POSITION_PRECEDING | Node.DOCUMENT_POSITION_CONTAINS;
const FOLLOWING = Node.DOCUMENT_POSITION_FOLLOWING | Node.DOCUMENT_POSITION_CONTAINED_BY;
const WITHIN = Node.DOCUMENT_POSITION_CONTAINED_BY;

Next, within our focusout event listener, we can compare the relative positions of the event’s target, which just lost focus and its relatedTarget, which just gained focus. The code below has been modified slightly from the source to make it easier to read and has comments pointing to the breakdown below.

function handleFocusOut(event) {
  const { target, relatedTarget } = event;
  
  // #1
  const positionToModal = 
    this.compareDocumentPosition(relatedTarget) |
    (this.shadowRoot?.compareDocumentPosition(relatedTarget) || 0);

  // #2
  const positionToPrevious = target.compareDocumentPosition(relatedTarget);

  // #3
  const relatedTargetIsContained = Boolean(positionToModal & WITHIN);
  
  // #4
  if (!relatedTargetIsContained && !(relatedTarget === this)) {
    
    // #5a
    if (positionToPrevious & PRECEDING) {
      // #6a
      tryFocusElems(focusableElements as [HTMLElement], true, this);
    
    // #5b
    } else if (positionToPrevious & FOLLOWING) {
      // #6b
      tryFocusElems(focusableElements as [HTMLElement], false, this);
    }
  }
};

Let’s break it down:

  1. We create a bitmask variable, positionToModal. This is a combination of the comparison between the modal and the relatedTarget as well as the comparison between the modal's shadow root and the relatedTarget. The element we've moved focus to could be in either the regular document or in the shadow DOM, so we want to compile both comparisons together.
  2. We create a bitmask variable, positionToPrevious. This is the comparison of the target and the related target.
  3. We create a boolean variable, relatedTargetIsContained. This compares positionToModal and WITHIN. If the element we focused on is in any way inside our modal, then this variable is true.
  4. We check to see if our relatedTarget is contained within the modal and that our relatedTarget is not the modal itself. If that’s true, then we know our relatedTarget is outside the modal, and we need to redirect focus.
  5. We compare our positionToPrevious bitmask with our PRECEDING and FOLLOWING bitmasks. If they overlap, then we know which way to try to focus, and we use our tryFocusElems function to move focus back into the modal.
  6. The tryFocusElems function systematically attempts to place focus on each element in elems. It can run in forward or reverse order based on the second argument, and if none of the elements provided will hold focus, it falls back to the element provided in the third argument

Conclusion

Bitmasks and bitwise operations are not strategies front-end developers are likely to reach for often, but having a solid foundation of computer science principles can help us to know when they are the right tool to use. Understanding the theory behind how numbering systems work and how computers can compare and manipulate masks can open up new opportunities in our code.
