Aug 26 2021

To date, speed has taken a backseat when it comes to websites. But today, there’s no denying it: fast websites are effective websites. When a website provides users with underwhelming performance, everything from SEO rankings to conversion rates, ecommerce sales, and general user experience takes a hit. So how do you get the faster site you need?

You optimize your site for speed and performance.

That’s a very simple answer to a pretty complicated question, of course. Getting your website to a high-performance tier and keeping it there requires a lot of work. Specifically, it requires monitoring that site on an ongoing basis — and that the data collected is gathered under consistent testing conditions.

This work is inarguably worth doing, but starting the work may seem daunting. Use this guide as a launching point, and begin collecting and monitoring the data necessary to achieve exceptional website performance.

Successful Website Performance Monitoring Requires the Right Tool and the Right Method

When you start digging through website analytics and aren’t happy with what you see, you’ll be anxious to make some changes. It’s important not to rush into website speed optimization without the right tool in hand, though.

There are a few different methods of testing website performance. One is to take a snapshot of how a site performs at any given moment. A tool like Google’s PageSpeed Insights (PSI) is fine for that. However, certain inconsistencies in the tool’s functionality, such as actual server location, mean that it’s not very useful for long-term monitoring. Too many variables exist for meaningful data comparison over time.

Other tools, such as Calibre or SpeedCurve, are much more useful for continuous website performance monitoring. They eliminate inconsistencies with server location and certain network conditions. They also work best for those companies fully committed both to ongoing website performance monitoring and site speed optimization efforts. Not everyone is ready to make that commitment — paying for a service, dedicating time and resources to gathering and reviewing data, and then actually optimizing for speed.

A DIY Approach for Starting Out with Ongoing Performance Monitoring

If you’re ready to start continuously monitoring website performance — and you want more than one-off test results without committing to a more advanced service — consider our recommended DIY approach. Note: this method requires good practical knowledge of spreadsheets.

Choosing the Right Tool for Getting Started with Website Performance Monitoring

One benefit of using PageSpeed Insights and getting that one-off performance evaluation report is the fact that it’s free to use. It’s not the only free tool out there, though. To get up and running with website performance monitoring and speed optimization, we recommend using webpagetest.org (WPT). WPT is also a free tool, but offers the type of control you simply don’t get with a one-off solution like PSI (which, to be clear, does have its uses — just not for ongoing performance monitoring and testing).

Test Your Site While Maintaining Control

Control in this context means eliminating the types of variables that certain tools introduce during testing — and which can mislead you when comparing data from different tests. To present meaningful data for evaluating ongoing website optimization efforts, a good DIY tool must allow you to:

  1. Decide where requests originate from.
  2. Choose the network conditions under which your request is handled.
  3. Select which browsers you want to test for.

Without this type of control ruling out the unpredictable conditions of one-off testing tools, it’s not possible to make meaningful comparisons of data over time. Additionally, you should absolutely look for a tool that produces a Lighthouse report. In fact, if webpagetest.org did not provide a Lighthouse report in its testing, we’d look for an alternative to recommend. These metrics are that important.

Running Your Own Tests

To keep things simple for the purposes of this article, let’s see what it looks like to run the simplest test for our ChromaticHQ.com homepage. We’ll select the Simple Testing tab (direct link to Simple Testing), enter our URL, and enable Run Lighthouse Audit. Notice that Test Configuration is left as the default (Mobile - Fast 3G); more on this later. This configuration will run three test runs against the URL you’ve provided. The data for all three runs will be available once the test is completed, but by default, it will show you the data for the median run (that is, neither the best nor the worst of the three, but the one in the middle; that’s the one we want).

Here’s what our configuration looks like on WPT:

Screen capture of the main WebPageTest interface, depicting a primary textfield to enter a testing URL

Once we’ve filled out this simple form, we can click the Start Test button on the right, which will submit our test request and bring us to this page:

A screen capture of the WebPageTest page shown after a test has been submitted

Keep that page open for a bit and you will eventually see that your tests are running:

A screen capture of the WebPageTest interface, showing the interstitial page displayed while tests are running

And, after a few minutes, your tests will be ready:

A screen capture showing a WebPageTest result, with scores across multiple metrics.

Congratulations! You’ve taken the first step in your performance monitoring journey.

Before we move on to the next step, you should do the following:

  1. Make note of the First Byte metric. The table under the Performance Results (Median Run - SpeedIndex) heading has a First Byte column. Write down the value recorded under that column (0.837s in the above screenshot).
  2. Open your Lighthouse Performance report. You can do this by clicking on the first box near the top right that has the label Lighthouse Perf below it. That link points to the Lighthouse Performance report for the median run by default.
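
If you would rather script these runs than click through the web form, WebPageTest also exposes an API with an npm-based command-line client. The sketch below is illustrative only; the package name, flags, connectivity profile, and API key handling are assumptions to verify against the CLI’s own documentation:

# A rough sketch using the WebPageTest npm CLI (package and flags assumed;
# requires your own WebPageTest API key).
npm install --global webpagetest
webpagetest test https://chromatichq.com --key YOUR_API_KEY \
  --location Dulles:Chrome --connectivity 3GFast --runs 3 --poll 5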

Capturing Results and Visualizing Progress with a Performance Tracking Spreadsheet

Once you have opened your Lighthouse Performance report, you’ll be ready to move on to the next step: adding your results to a spreadsheet so you can compare them over time. Anyone can do this, but it does take a bit of time and effort to set up. Luckily, we’ve created a boilerplate spreadsheet to get you started.

We based this spreadsheet on the tools we at Chromatic have used internally to help inform our performance efforts for our own clients and agency partners. Using this tool, you’ll be able to easily enter your test result data, visualize your progress, and focus on the metrics that will help your team succeed.

The tool is complete with sample anonymized data, production and staging comparisons (should you want to test both), multiple pages, and configurable goals. It’s packed with enough features to kickstart your long-term performance monitoring strategy.

Jan 26 2021

Authoring content in one place and publishing it to multiple locations offers many benefits. Reducing the editorial effort needed to keep content synchronized across multiple locations is often the primary benefit. However, providing a consistent authoring interface, a canonical cross-brand asset library, and increasing monetization of existing content are often ancillary benefits that can be just as important.

We always get excited when we can partner with our clients to build a new publishing experience or improve an existing one, as we love to use technology to solve problems. However, the excitement can soon be tempered by the reality of potential challenges. Through our experience building syndication systems, we have solved many challenges and found some tactical approaches and strategic principles that guide us through the issues that must be considered along the way.

Complexity

Should we even be doing this and is it really worth it? This should always be the first question. Sometimes the shiny new functionality might not actually be the best idea when all factors are considered. Some questions to ask while making this decision:

  • How much content needs to be shared?
  • How many places will it be shared?
  • How often will the content change?
  • How quickly do edits need to be synchronized?
  • How will associated media such as images and video be shared?
  • What are the costs/time considerations with manually keeping the content in sync?
  • What are the costs to build and maintain an automated system and how much complexity does it add for the editorial team and developers?
  • Is there an existing solution or will a custom approach be needed?

Visibility

Depending on the complexity of the system, understanding what content is available and where it has been published is important, but can quickly become difficult to manage. Planning for this up-front will be essential to the long-term success of your system, ensuring a great user experience that scales beyond the proof-of-concept.

Providing contextual data on content edit forms that informs editors where a given piece of content has been syndicated and where the current updates will appear is a great start. Filterable content listing admin pages with batch functionality built in will also be valuable. This allows editors to easily understand and change where content is published at scale.

Syndication

Understanding the expectations around the syndication process is important for selecting the technology used and ensuring the system meets business expectations. If content needs to be live on consuming sites within 2 seconds, that is a much different requirement than 30 seconds, or even 5 minutes. Every layer between the end user and the syndication platform must be evaluated holistically with caching considerations also being taken into account.

SEO

Google isn’t a big fan of duplicate content, yet that is precisely what syndicating content seeks to do. Understanding this risk and mitigating it with a canonical URL and other measures is vitally important and can’t be an afterthought.

The creation and management of URLs is one of the primary challenges of sharing content and should be carefully considered from the start of a project. Some questions to consider:

  • Where are URL aliases generated?
  • What data is used to generate URL aliases?
  • Will the alias remain identical on all sites?
  • If the alias is different, do other sites need to know the alias for cross-site links?
  • Will there be cross-site search and what information will it need to build links?
  • How will sites know what site content lives on canonically and determine the domain?
  • Where will redirects be created and managed?

Editorial Interface

This is where things really get fun. There are many challenges and also many opportunities for efficiency, as reducing the editorial workload is likely what started this whole adventure in the first place. Some questions to consider through the process:

  • What level of control will be needed over the content on the consuming sites?
  • Will additional edits/updates need to be syndicated?
  • Are there any per site customizations? Do those take precedence over updates?
  • Where will curated lists of featured content be created and managed? Will that be syndicated as well?
  • Do editors need previews of the content on one or many of the client sites from the editing interface within the content syndication platform?
  • How will marketing pages built with Gutenberg or Layout Builder be shared and how will designs translate across sites?

Technical

There are myriad technical considerations and approaches depending on the requirements and the technology that is used. One of the first considerations should be to keep things generic and avoid tight couplings between the data models in the syndication platform and the consuming sites. Some field values, such as taxonomy terms and author information, can be mapped to identical fields on the client sites, as that data will likely be needed when querying for lists. However, all other data can often be stored in a JSON field that can be decoded and made available to templates.

This all begins to border on a decoupled approach, which helps set a project up for success if the front end goes fully decoupled later. Or perhaps now is the time to consider going decoupled, while a foundational evaluation of editorial workflows is already in progress.

Sharing is Caring

They say sharing is caring, but only share content if you can take the time to care about all of the details along the way. Thoughtful consideration at each step, from challenging the very need for the functionality all the way to the technical details that set canonical metatags, gives the project its best chance at a successful outcome. There is no easy answer, but hopefully this helps you avoid many of the pitfalls. If you have a project with content syndication coming up or have experience with it, drop us a line on Twitter @ChromaticHQ; we would love to hear your thoughts.

Jan 21 2021

PHPStorm comes with a handy feature called live templates, which helps you avoid repetitive typing.

Let’s say that you are writing SASS and you find yourself repeating a breakpoint over and over:

@include breakpoint($breakpoint-md) {
    // Some CSS attributes.
}

With live templates, you can instead type a shortcode that will generate the breakpoint for you. In this example we’ll use ibk as an abbreviation for include breakpoint, and we’ll also generate a variant that passes $breakpoint-md as an argument to the mixin.

ibk will generate @include breakpoint() with no arguments predefined and position the cursor inside the mixin parameter. ibk:md will generate @include breakpoint($breakpoint-md).

To begin setting up a Live Template, go to File -> Settings -> Editor -> Live Templates

You’ll see the following window with some live templates already available, depending on which plugins you have installed. For this example we will generate a new “Template group” by clicking on the “+” icon:

Live Template configuration interface

PHPStorm is going to prompt for a name for the new group. In this example, we will name the group SASS.

Once you have the SASS template group created, you need to click on the “+” icon again and add the actual live template.

You can now add the abbreviation ibk, a short description, and the code.
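
The template text itself isn’t legible in the screenshots, but based on the behavior described in this post, it would look something like this:

@include breakpoint($breakpoint$) {
    $END$
}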

The cursor will be positioned on $breakpoint$ first and then to $END$.

Define the language as CSS, check the box “Reformat code according to style”, hit “Apply” and “OK” to close the dialog box. The result should look something like this:

Live Template configuration interface

The following shows the Live Template in action:

Live Template in use

If you want to save more time, you can avoid typing the breakpoint variable as well, with a variant of the breakpoint Live Template you’ve created.
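
Since the variant’s template text isn’t legible in the screenshot either, it would simply hard-code the variable, roughly:

@include breakpoint($breakpoint-md) {
    $END$
}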

Live Template creation interface

With this variation, you eliminate the need to explicitly type $breakpoint-md as an argument for the mixin. See the Live Template in action below:

Live Template in use

Wrapping a Selection with Live Templates

If you need to wrap a selection in a breakpoint, you can add a new parameter $SELECTION$ to a live template:

Important: You need to use the keyword $SELECTION$ for this to work.

After editing ibk:md the code should now look like this:

@include breakpoint($breakpoint-md) {
    $SELECTION$$END$
}

You can achieve the following by selecting the code you want to wrap and using the keyboard shortcut Control + Alt + J.

Live Template in use

Live Templates are great for increasing development speed and productivity whenever you must repeat code. Reach out to us at @ChromaticHQ on Twitter and let us know how you have used them and how they have helped improve your development process.

Jan 11 2021

When developing or maintaining a Drupal theme, there is often a need to understand why a theme needed to override a given template. A diff of the template in the active theme compared to the template in the base theme would make this easy to understand. A drush command that found the template files for you and output a diff would solve this quite nicely.

Template Diff

The Template Diff module does just that. It provides a drush command that accepts a template name and will display the diff between two specified themes. If no theme is specified it defaults to comparing the active theme and its base theme.

Examples

Compare the active theme and its base theme:

drush template_diff:show views-view

Compare "foo_theme" vs "bar_theme":

drush template_diff:show views-view foo_theme bar_theme

Compare "some_theme" and its base theme:

drush template_diff:show views-view some_theme

The output will look something like this:

$ drush template_diff:show views-view
 [notice] Comparing chromatic (active theme) and stable (base theme).
- stable
+ chromatic
@@ @@
 {#
 /**
  * @file
- * Theme override for main view template.
+ * Default theme implementation for main view template.
  *
  * Available variables:
  * - attributes: Remaining HTML attributes for the element.
- * - css_name: A CSS-safe version of the view name.
+ * - css_name: A css-safe version of the view name.
  * - css_class: The user-specified classes names, if any.
  * - header: The optional header.
  * - footer: The optional footer.
…

If you have ideas on how to improve this, submit an issue so we can make understanding template overrides even easier.

Oct 26 2020

Managing Drupal configuration files and keeping them in sync with a database requires intense attention to detail and process. The importance of following the correct sequence of events can be documented, but mistakes still happen. Additionally, trying to get our robot friends such as Dependabot to play by the same rules presents a challenge.

The ideal sequence of configuration commands can even vary by where the configuration operation is performed. This message from Drupal Slack illustrates the requirements nicely.

Yes it should be like this:

Locally:

  1. drush updb

  2. drush config:export + commit

Deploy:

  1. drush updb

  2. drush config:import

Missing Config Updates

When Drupal core or a Drupal module releases an update, it can alter the structure or values of configuration. This is fine as long as those changes make it back into the tracked configuration files. If the “Local” process described above is followed, this isn’t an issue because the changes are exported and tracked.

However, if a tool such as Dependabot performs core or contributed module updates, it will not have a database to run updates on, and it will most certainly not be able to export those config changes and commit them. After the Dependabot update is deployed, the active database configuration and the config tracked in code are no longer in sync. Subsequently, the output of drush config:export will now show a diff between the active configuration in the database and the tracked configuration files. This can lead to unintended consequences down the line.

Config Process & Development

Suppose a Dependabot PR that resulted in config changes as described above was just merged. Then a developer grabs a fresh database and the latest code, and they go about their work while making some changes to the site’s configuration locally. They export their changes, but the diff now includes unrelated changes. These are the config changes made in the database by the previously deployed Dependabot update that were never exported and committed.

This pattern will repeat and become harder to untangle until someone decides to include the changes in an unrelated PR or create a separate PR to bring things back into sync. This isn’t fun for anyone. The best solution is to avoid this entirely.

At Chromatic, we have made attempts to monitor release notes for signs that a config change may be included, but naturally some slip through the cracks. The only way to truly avoid this and keep ourselves accountable is to automate the process of checking for config changes on every pull request.

Automate a Configuration Checking Solution

Any code branch should be able to import its configuration into a production database and subsequently export the active configuration without a resulting configuration diff. Thus the solution becomes to run a standard deployment which includes a configuration import, run configuration export (drush config:export), and verify that the output confirms everything is synced.

 [notice] The active configuration is identical to the configuration in the export directory (../drupal-config/sync).

Note: This will likely only be done in pre-production environments, but that is where we want to catch our problems anyways!
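
If your deployment tooling isn’t Ansible, the same check can be expressed in a few lines of shell. This is a minimal sketch, assuming a standard deploy step and a CI job that fails on a non-zero exit code, not a drop-in script:

# Run the standard deployment, which includes a configuration import.
drush deploy
# Export config and fail the build if anything changed.
if ! drush config:export -y 2>&1 | grep -q "The active configuration is identical"; then
  echo "Active configuration is out of sync with the tracked configuration files." >&2
  exit 1
fi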

Our Solution

We have automated our deployments to pre-production environments, such as Tugboat, with a standardized Ansible-based deployment tool. In a recent release we added functionality to conditionally run drush config:export and check the output for the “active configuration is identical” message, and we were in business.

# Identify Drupal configuration changes in core or contrib updates that need
# to be exported and may have been missed.
- name: config structure check
  shell: "{{ deploydrupal_drush_path }} config:export -y"
  args:
    chdir: "{{ deploydrupal_core_path }}/sites/{{ deploydrupal_site_name }}"
  register: deploydrupal_config_check_result
  when:
    - deploydrupal_config_check_structure
  failed_when: "deploydrupal_config_check_structure_string not in deploydrupal_config_check_result.stderr"

Every build now checks for configuration synchronization between our code and database, keeping us accountable as we go.

Checking Configuration Content Changes

The process outlined above will catch changes to configuration schemas, as those changes are made via importing configuration and not blown away by it. However, the potential remains for a config change made via an update hook (run via drush updatedb) to be lost.

A great example of this is an update to the Google Authenticator login module that changed the name of certain configuration keys. The resulting Dependabot PR failed to run in our QA environment, alerting us to the problem. It was resolved by following the correct steps locally and committing the updated configuration changes. Because Google Analytics loads on every page and failed loudly, the issue was caught, but such problems might not always be so easily found. Detecting this problem programmatically is quite simple in theory, but more difficult in practice. The basic process (sketched after this list) is:

  • Ensure all configuration is imported and up to date (drush config:import).
  • Run the update hooks (drush updatedb).
  • Export configuration and check for a diff (drush config:export).
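
In isolation, that check might look like the following sketch (again, not something to drop into a production deployment, for the ordering reasons discussed next):

# Hypothetical pre-production check for config changes introduced by update hooks.
drush config:import -y
drush updatedb -y
if ! drush config:export -y 2>&1 | grep -q "The active configuration is identical"; then
  echo "An update hook changed configuration that has not been exported." >&2
  exit 1
fi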

This is fine until you realize that the drush deploy command and the suggested deployment command order noted above make it clear that we need to run our database updates (drush updatedb) before our configuration import (drush config:import) when deploying code.

Any pre-production deployment process should mirror the sequence of commands run during a production deployment to properly test the changes, which creates a problem. There are various ways around this with enough additional infrastructure and creativity, but based upon our research, none of them could be implemented with just changes to a deployment script alone.

Many updates that result in a change to configuration keys/values will often cause a failed build during the pre-production deployment process, which accomplishes the same end goal of alerting us to the configuration synchronization problem. However, that won’t always be the case, so we will continue to search for a way to fully validate the configuration.

The Value of Configuration Maintenance

The configuration system empowers us to do incredible things. However, it must be used carefully and maintained meticulously if we want it to be the canonical source of truth for our sites. The subtle yet important differences in workflows under different scenarios must be respected by everyone, humans and robots alike. With proper care, well-managed configuration will continue to foster reliable deployments and simplify the development and code review process for everyone.

Oct 26 2020

What does it take to get a Drupal codebase onto a server and make it ready to serve a website? There are more steps than expected, and it is easy to forget the details if you don’t do it frequently. Additionally, many of these same steps need to be performed during each code deployment. Just a few of the steps include:

  • Get the latest code.
  • Run Composer commands.
  • Create a public files directory.
  • Run deployment commands.
  • Build theme assets.
  • Secure file permissions.

Automating the Setup

At Chromatic we love to automate processes like this. Even if the effort requires only one or two manual steps, repeated many times per day those steps add up. The value of automating these tasks is well worth the effort. Our automation journey began with a humble shell script, but we soon realized this was difficult to maintain and that our one-size-fits-all solution didn’t fit the needs of all our clients.

Selecting Ansible as an Automation Tool

Ansible is a flexible tool that we use for tasks as big as provisioning and managing servers and as small as running cron on all our Drupal sites with one command. Some of the key factors that helped us decide to use it for automating deployments were:

  • Ansible is idempotent and allows us to define the desired state, only taking action when necessary to achieve that state as opposed to a script that would just run a specific set of commands.
  • Ansible does not depend on any agents. If you can SSH to a box, Ansible can act on that box.
  • Ansible configuration is stored in YAML, and we like YAML.
  • Ansible has a lot of open-source role support via Ansible Galaxy.

Required Features

Git Operations

The deployment tool needs to support git operations to clone the latest version of your codebase with configurable branches.

Initial Site Deployment

During early development, a site might not have a persistent database. The deployment tool needs to support installing from site configuration using the drush si command.
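
In practice that typically looks something like the command below; the --existing-config option assumes your exported configuration is complete enough to install from:

# Install a fresh site from the exported configuration.
drush site:install --existing-config -y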

Existing Site Deployment

Deploying to an existing site needs to be supported. A variable sites/* folder name should allow for per-site targeting. Standard Drupal deployment commands such as drush deploy can be run as well.

Configuration Checking

Keeping the tracked configuration up to date and in sync with the active config in the database can be hard at times. An optional deployment step is needed that checks if they are in sync and can fail the deployment if they are not.

File Permission Management

The deployment process needs to handle setting the proper permissions for the various files and folders within Drupal such as the settings.php file and the public files directory.

Ansible Setup

Setup requires the addition of several files to your repository and the installation of Ansible on whatever machine you will be triggering the deployment from. This is probably going to be your local environment for development and some other environment for production (e.g. a server of your own, GitHub Actions, Jenkins, etc.). Below are examples of the files with some basic defaults. These can be expanded to accommodate more complex needs. Note that we store these files in an ansible/ directory, so the paths reflect that.

ansible/requirements.yml tells Ansible which roles it needs to install when it runs. In this case, we only need our chromatichq.deploy-drupal role.

---
# Ansible required roles.
- src: chromatichq.deploy-drupal
  version: "2.20"

ansible/hosts.yml defines for Ansible the hosts it is able to connect to.

---
all:
  hosts:
    localhost:
      ansible_connection: local
  children:
    EXAMPLE_SITE_PRODUCTION:
      hosts:
        mysite.domain.com:
          ansible_host: 63.162.105.42

ansible/example-playbook-configuration.yml provides the configuration that allows for customized deployment options to fit your codebase and hosting environment.

---
- hosts: all

  vars:
    deploydrupal_repo: "git@github.com:ChromaticHQ/chromatichq.com.git"

    deploydrupal_checkout_user: "root"
    deploydrupal_apache_user: "www-data"

    deploydrupal_dir: "/var/www"
    deploydrupal_code: false
    deploydrupal_redis_flush: true
    deploydrupal_redis_host: "redis"

    deploydrupal_npm_theme_build: true
    deploydrupal_npm_theme_path: "{{ deploydrupal_dir }}/web/themes/chromatic"
    deploydrupal_npm_theme_run_commands:
      - yarn
      - yarn build:preview

    deploydrupal_config_check_structure: true

  roles:
    - ansible-deploy-drupal

The call to run the playbook will look something like this:

ansible-playbook -v --limit=EXAMPLE_SITE_PRODUCTION -i "/path/to/repo/ansible/hosts.yml" "/path/to/repo/ansible/example-playbook-configuration.yml"

Our Solution

With the considerations above, we created the chromatichq.deploy-drupal Ansible role. It standardizes the entire process, with best practices provided through sensible defaults and many optional settings to fit custom needs. All of the steps are defined in YAML using standard Ansible tools.

Testing Deployments

We are big proponents of using Tugboat to create a testing environment for every pull request. Using the chromatichq.deploy-drupal role allows us to test our standardized deployment process along with any changes made to the site’s codebase. This gives us increased confidence that what we see on Tugboat/QA is what we can expect when deploying to production.

Benefits of Automation & Standardization

Deployment best practices evolve and bugs will be found throughout the life of a website. When that happens, the use of a centralized Ansible role allows us to easily contribute fixes back to a single repository where all of our projects can benefit from them, without maintaining a deployment script in every site repository. We also get a chance to clearly document the what and why of the changes, as the update is no longer just a fix in an application repo; it is often a new release of a tool with proper documentation.

No tool will fix every problem, and our Ansible role for standardizing Drupal deployments is no exception. However, we love having a process that allows us to easily set up standardized deployments with minimal effort to keep our sites deployed correctly and securely. We look forward to hearing how you use the tool, and collaborating on refining and improving the deployment process together.

Oct 16 2020

Composer 2 RC2 is now in the wild and the official release of 2.0 is quickly approaching:

The current plan is to release a 2.0 final before [the] end of October.

This is exciting for anyone who has complained about Composer’s performance in the past as Composer 2 brings significant performance improvements.

While developers will need to run composer self-update to opt-in to 2.0 locally, environments where the installation of Composer is scripted may end up on 2.0 surprisingly quickly after its release. This may prove to be problematic if you are using Composer plugins that need to be upgraded/marked as compatible with 2.0. Below are some options for you to prepare early for Composer 2 or hang back on Composer 1 a bit longer while you resolve any issues.

Remaining on Composer 1 Until You are Ready to Upgrade

If your project is using plugins that are not ready for Composer 2.0, and you want to stay on 1.x until you confirm compatibility, adding the following step after you install Composer will keep you there even after 2.0 becomes the default for new installations.

composer self-update --1

Upgrading to Composer 2 Now

If you have validated that any Composer plugins you are using are compatible with 2.0 and want to test that ahead of its official release, add the following step after you install Composer. This will ensure that you upgrade to 2.0 now, and will have no effect once 2.0 is officially released.

composer self-update --2

Specifying a Composer Version in Composer

If you are using an environment where you use Composer to install Composer (it sounds crazy, I know), such as platform.sh, and you want to actively control the Composer version that is being used, either as an early adopter of 2.0 or to hang back on 1.x, you can specify your version requirement as a PHP build dependency. Refer to your host’s documentation for exact syntax.

dependencies:
    php:
        composer/composer: '^2@RC'

Jul 13 2020

Many of the common website speed problems in Drupal 7 are now a distant memory with Drupal 8 and Drupal 9. Features like Drupal’s built-in Dynamic Page Cache help bring great performance benefits to the two most recent versions. This article will highlight other tactics to assist with getting the best performance out of your Drupal site.

Before we take a look at some of the approaches we can use for optimizing Drupal, let's cover how we can actually test our performance.

One of the most comprehensive and easy-to-use tools is Google’s PageSpeed Insights tool. This tool can very quickly and clearly highlight potential performance improvements that apply to your site.

Chromatic PageSpeed Insights.

While PageSpeed Insights is a great tool to get started, a solid performance improvement strategy includes continuous performance monitoring. For this, we recommend an automated service such as Calibre or SpeedCurve.

Chromatic Calibre performance monitoring.

These services build upon existing technology, such as WebPageTest and Google Chrome Lighthouse, by allowing you to schedule regular tests and easily visualize metrics over time.

With that covered, we can jump into our recommendations for getting the best performance out of your Drupal site.

Caching

Caching is one of the most impactful ways to reduce load time. Drupal 8 comes with caching enabled by default; however, additional or alternative caching implementations may work best for your specific site.

Page Cache

Page caching allows repeated requests for a specific page to be served quickly and efficiently by storing the resulting HTML the first time the page is requested and serving that cached version going forward. This reduces page load times and prevents unnecessary heavy lifting by the web and database servers.

For smaller sites with mostly anonymous traffic, Drupal Core’s Internal Page Cache module should be configured. For sites of this size, it may provide all of the performance gains necessary for your site. Drupal’s Dynamic Page Cache will provide even better results if your site has a mix of anonymous and authenticated traffic as it provides caching of both of those user types.

For more complex sites that serve personalized content to users, the Internal Page Cache module is inadequate because the personalized (or dynamic) parts of the page should not be cached. Our recommendation, in this case, would be to use the Dynamic Page Cache module. This module will still cache the majority of a given page, but also provides the essential ability to turn dynamic parts of the page into placeholders, which can be populated with the correct content for the current user.
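
As a reference point, both core modules can be enabled and the page cache lifetime set from the command line. A sketch, with an arbitrary one-hour max age:

# Enable the core page caching modules.
drush pm:enable page_cache dynamic_page_cache -y
# Cache pages for anonymous users for one hour (value in seconds).
drush config:set system.performance cache.page.max_age 3600 -y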

The Cacheability of render arrays documentation page provides a great breakdown of the thought process behind caching in Drupal. This can help immensely when considering your own caching strategy.

BigPipe

The idea behind BigPipe caching is to separate the personalized or uncacheable portions of a page into smaller chunks which can then be ‘streamed’ into the page, providing a huge improvement to perceived performance for visitors. This can result in a massive reduction in page load times and also means that caching can still be implemented on pages with dynamic content. In fact, it is strongly recommended to use the BigPipe module in conjunction with the Dynamic Page Cache module.

Check out our Don’t break your cache, use BigPipe instead article for implementation details and tips & tricks for getting BigPipe working on your site.

Redis

Redis in-memory caching.

Out of the box, Drupal utilizes the database to cache many objects. While this is a nice feature to have, it can become problematic on high-traffic sites because the database is also handling many other queries related to page requests, and it can become a bottleneck. Using Redis as a drop-in replacement for database caching can yield substantial performance improvements. Redis is an in-memory, key-value data store that is highly optimized for storing, reading, and writing cache data; in addition, it removes the need for the database to handle these requests.

The Redis module provides the necessary features to connect Drupal with a Redis service. Our Configuring Redis Caching with Drupal 8 article also provides a more detailed Redis overview along with some additional implementation tips.
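
Getting the module in place is straightforward; the settings.php wiring (connection details, cache bins) is covered in the linked article. A sketch:

# Add and enable the Redis integration module.
composer require drupal/redis
drush pm:enable redis -y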

Media Optimization

Website Media Optimization.

Image in graphic by Karina Vorozheeva.

Images are critical for a website and are consistently responsible for the largest amount of data transfer when a page is loaded. A website performance audit will often reveal the need to implement or refine the delivery of images throughout your Drupal site.

There are a few optimization strategies that we can use to ensure that our images look great and load quickly.

Image Optimization

A module such as Image Optimize can be used to perfect the balance between quality and performance for images throughout a Drupal site. In addition, the quality for each file type can be optimized individually and new derivatives can be generated, such as WebP Image files.

Image Optimize allows pipelines to be defined for any given requirement, each with a specific set of optimizations. For example, perhaps hero images can be compressed more heavily to reduce file size while still looking great, whereas your blog images need a higher quality setting for maximum impact.
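
Assuming the standard contributed package name, adding the module looks like this; pipelines are then configured in the admin UI:

# Image Optimize (machine name: imageapi_optimize).
composer require drupal/imageapi_optimize
drush pm:enable imageapi_optimize -y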

Responsive Images

In addition to optimizing image files, varying screen sizes and densities should be accommodated. This is often seen as a way to ensure high-end devices render the best possible quality image, but the main goal of responsive images is to allow devices with smaller or lower-resolution screens to download smaller versions of an image where appropriate.

The Responsive Image module is included in Drupal core and allows appropriately sized images to be served to your site visitors by using the HTML5 picture element and theme-specific breakpoints. An alternative to the picture element is to output a simple img tag that leverages srcset to provide responsive image values. Our Responsive Images in Drupal 8 Using "srcset" article provides a comprehensive tutorial for why and how to implement responsive images using srcset in Drupal.

When used in conjunction with well-optimized images, configuring responsive images will result in large performance gains over a typical out of the box configuration.
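
Responsive image styles are driven by the breakpoints your theme declares. A minimal, hypothetical mytheme.breakpoints.yml might look something like this:

mytheme.small:
  label: Small
  mediaQuery: '(min-width: 480px)'
  weight: 0
  group: mytheme
mytheme.medium:
  label: Medium
  mediaQuery: '(min-width: 768px)'
  weight: 1
  group: mytheme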

Lazy Loaded Images

Lazy loading may sound like something that could slow you down; however, this technique actually brings huge performance gains by decreasing page load time.

Lazy loading is when a resource, in this case a potentially large image file, is loaded only when it is actually needed. For a typical web page, an image only needs to be loaded when the user scrolls down to where the image is visible on the page.

There are several popular modules for implementing Lazy Loading in Drupal, including Lazy-load, Image Lazyloader and Blazy.

Browser-level native lazy loading is now supported by Chrome via the new loading attribute. This is great to see; with time it will become more widely supported and will make custom lazy loading code or external libraries unnecessary.
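
Using the attribute requires no module or library at all; in markup you control, it is as simple as the following (the file path here is a placeholder):

<img src="/sites/default/files/example.jpg" alt="Example image" width="800" height="600" loading="lazy">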

Modules & Codebase

Stay up to date

Drupal core and contributed modules are constantly being updated with security, bug, and performance fixes. We recommend staying on top of these ongoing updates.

Uninstall unused modules

Every module that is enabled brings some overhead with it, which in turn can negatively impact the performance of your site. As a result, we recommend keeping the number of enabled modules to a minimum and ensuring that any unused modules are uninstalled before the site is launched.

In addition to ensuring that only essential modules are enabled, any development modules should always be uninstalled in production environments. These development modules might include Devel, Field UI, Menu UI, and Views UI.

404 Error Pages

The Fast 404 module prevents 404 pages from causing unnecessary load on your hosting infrastructure.

In addition to potentially wasting resources delivering an error page, the site should be checked for broken links, images, and other resources to prevent errors in the first place. A popular and powerful tool for crawling your site for broken links is the SEO Spider Tool by Screaming Frog. This tool can check for broken image and file references, in addition to internal and external page links.

Content Delivery Network (CDN)

Fastly and other Content Delivery Networks (CDN) help your website to perform faster by delivering content to your users from a location which is physically closer to them. In addition, a CDN can help to maintain consistent site performance during high traffic periods due to the distributed nature of the network being used.

Another advantage to utilizing a CDN is that it can help to protect your origin server by responding to requests on its behalf. Without a CDN, the origin server would be responsible for handling each and every end-user request, potentially resulting in slow response times and high load on your server.

We love Fastly, which comes built-in with some Drupal hosting providers like Pantheon and Platform.sh.

External Resources (JavaScript, CSS, etc.)

External resources include any asset which is loaded from an externally hosted site or service and typically consists of small snippets of code used for things like tracking, advertising and analytics.

Each of these resources introduces an extra request made by your browser, which in turn increases latency and page load time. Ideally, your site’s reliance on external resources should be kept to a minimum. A good first step when reviewing external libraries is to look at the performance impact of each resource and discuss the perceived benefit with stakeholders to decide whether it is worth continuing to use.

In the cases where these resources are essential, it is worth investigating the implementation of a tag manager to handle tags as an alternative to just placing the separate tags directly in the page source. In addition to being easier to maintain, tag managers will typically load asynchronously which prevents the loading of these resources from blocking other items from loading.

Leveraging Drupal’s Robust Library System

Drupal 8 introduced asset libraries to manage CSS and JavaScript for both themes and modules. Utilizing libraries allows for granular inclusion of assets based on the page being loaded. This is a good thing because it helps decrease page load time and keeps the amount of assets being loaded to a minimum.

base:
  version: 1.x
  css:
    theme:
      build/styles/root.css: { minified: true }
  js:
    build/scripts/root.js: { minified: true }

article-listing:
  version: 1.x
  js:
    build/scripts/03-structures/article/article-feature/article-feature.js: { minified: true }

content:
  version: 1.x
  js:
    build/scripts/content.js: { minified: true }

> Drupal uses a high-level principle: assets (CSS or JS) are still only loaded if you tell Drupal it should load them. Drupal does not load all assets (CSS/JS) on all pages because this is bad for front-end performance.
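
With libraries defined as above, a template only pulls in what it needs. For example, attaching the article-listing library from a Twig template (the mytheme namespace is a placeholder for your theme’s machine name):

{# Attach the assets only where they are actually used. #}
{{ attach_library('mytheme/article-listing') }}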

Database Optimization

Drupal uses the database heavily, and the database can quickly become a bottleneck as traffic to the site increases. Well-configured caching and Redis will help greatly with this; here are a couple of additional database-specific tips.

Database Logging

Production sites should not write logs to the database, as it is an extremely inefficient operation. Using the Syslog core module, it’s possible to send all messages to the system log instead of the database. Because PHP warnings and errors can generate a large volume of log entries, this change can considerably lower the strain on the database server.
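
The switch itself is quick (module machine names: syslog and dblog):

# Send log messages to the system log and stop writing them to the database.
drush pm:enable syslog -y
drush pm:uninstall dblog -y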

Table Maintenance

In the past, we have seen instances where database tables have grown to extreme sizes. Frequently, the excessive number of rows in these tables is caused by caching or by the logging of events and errors, such as 404 errors.

It is important to keep database tables under control, and there are many approaches for doing so; a module like DB Maintenance is one tool that may come in handy.

Hosting / Stack

There are many site-specific needs that will influence a recommendation for the ideal hosting environment, but we typically start with a Platform as a Service (PaaS) provider such as Platform.sh as a default. The following tips are worth referencing when looking for performance and stability improvements to your existing stack.

PHP version

We recommend always using the latest stable version of PHP that your application supports; currently, this is PHP 7.4. Newer versions of PHP provide the latest features and often include performance improvements over older versions, in addition to the latest security fixes.

NGINX

Using NGINX as your web server can improve site performance. NGINX is capable of handling high concurrent traffic whilst avoiding high memory utilization issues which are often a problem with Apache. Drupal specific hosting providers such as Platform.sh use NGINX by default.

DNS Hosting (Anycast)

Typically, the DNS Hosting provided by domain registrars is not focused on providing the best possible performance. The DNS solutions offered by the larger providers such as Amazon Route 53, Cloudflare, and DNS Made Easy work in a similar way to a Content Delivery Network and provide multiple redundant geographic locations. Switching to a premium DNS provider will result in lower latency DNS lookups for users which in turn can help with increasing page load performance.

Fine-Tune CSS and JavaScript Asset Delivery

Aggregating multiple files into a single file can improve page performance by requiring fewer requests from browser to server to load all of the required files; this is especially true when using the HTTP/1.1 protocol. If you are using HTTP/2, it may be beneficial not to aggregate assets, since resources can be transferred in parallel and cached at a more granular level.

Drupal provides CSS and JavaScript aggregation options as part of its Performance Configuration options (/admin/config/development/performance). In addition, the Advanced CSS/JS Aggregation module can be used to further optimize aggregation settings as well as provide some more advanced configuration options related to caching, asset compression, and JavaScript minification.
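
Those same settings live in the system.performance configuration object, so they can also be toggled per environment from the command line, for example:

# Enable CSS and JavaScript aggregation.
drush config:set system.performance css.preprocess 1 -y
drush config:set system.performance js.preprocess 1 -y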

Inline Critical CSS

As with other externally sourced assets, stylesheets can become a render-blocking resource and may result in increased page render time if conditions, such as network performance, are not optimal. Inlining critical styles in the document head and deferring non-critical styles to be loaded later streamlines rendering of the content that’s immediately visible on load, resulting in improved perceived performance.

Inline CSS should be kept to only the styles that apply to “above the fold” markup; otherwise, any performance advantage is negated. Tools such as Critical will extract and inline critical CSS, which is typically a much better option than attempting to perform this process accurately by hand.

If your project is using HTTP/2, critical CSS can be pushed instead of inlined, allowing browsers to cache these critical styles instead of downloading them over and over again on every new page request.

In Summary

There are many speed optimization tactics for fixing website performance problems in Drupal 8/9. Thanks to some of the options we have just reviewed such as the ability to cache pages containing dynamic content, Drupal website performance is better than ever.

The implementation details will vary on a project-by-project basis, as there is no ‘one size fits all’ approach where website performance optimization is concerned; however, we hope that some of the suggestions here will lead you toward a high-performing website. A website performance audit will uncover existing problems and provide prioritized suggestions for improving the performance of your site, and a thorough audit will ensure that time is spent wisely when solving performance issues.

May 26 2020

As longtime members of the Drupal community, Chromatic strives to contribute whenever we can. Some of those contributions are monetary, such as with the ongoing Drupal Cares campaign, but others involve activity directly on drupal.org, including creating/testing patches, maintaining projects, and submitting case studies. For organizations that list themselves in the Drupal Marketplace, these statistics are all inputs into a formula whose output is your organization’s rank on the marketplace pages.

drupalorg-slack app screenshot

To make these numbers more visible to our team and use them as an additional motivation tool to spur contributions, we created a Slack app (drupalorg-slack) that pulls these stats from Drupal.org’s API and announces them in Slack. It supports both sending notifications on demand to users via a Slack “slash command” and posting them on a regular schedule to a channel.

We have open-sourced the app and if you follow the instructions in the README, you can make use of it in your Slack workspace as well.

Mar 13 2020

Applying patches can be dicey when there are untracked changes in the files being patched. Suppose you find a patch on Drupal.org that adds a dependency to a module along with some other changes. It should be pretty simple to apply, right? If you are using Composer to manage your modules and patches it might not be so simple.

Understanding the Problem

When Composer installs a Drupal module, it uses the dist version by default, but this version includes project information appended to the bottom of the *.info.yml file by Drupal’s packaging system.

# Information added by Drupal.org packaging script on 2019-05-03
version: '8.x-1.1'
core: '8.x'
project: 'module_name'
datestamp: 1556870888

When a patch also alters one of those lines near the end of the file, such as when a dependency is added, then the patch will fail to apply.
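
For context, the patch itself is typically declared with the cweagans/composer-patches plugin. A hypothetical entry (the description and URL below are placeholders, not a real issue):

"extra": {
    "patches": {
        "drupal/module_name": {
            "Add new dependency": "https://www.drupal.org/files/issues/example-from-issue-queue.patch"
        }
    }
}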

Preferred Install

Thankfully, Composer allows you to specify the installation method using the preferred-install setting.

There are two ways of downloading a package: source and dist. For stable versions Composer will use the dist by default. The source is a version control repository.

Installations that use the source install avoid the packaging step that alters the *.info.yml file, and thus allow patches to be applied cleanly (assuming there are no other conflicts).

With an understanding of the issue, the solution is quite easy. Simply denote the preferred installation method for the patched module and watch it work!

"config": {
    ...
    "preferred-install": {
        "drupal/module_name": "source",
        "*": "auto"
    }
},

Note: The flags --prefer-dist and --prefer-source are both available for composer install. Be sure to check your deployment and build scripts to prevent unexpected issues in the deployment pipeline.

Other Solutions

Unfortunately, this is the only viable solution at the moment. There is an issue on Drupal.org for this, but a fix does not sound simple or straightforward.

The long term fix would need changes both on the d.o side and in core itself (since core would have to know to look in *.version.yml (or whatever) for the version info, instead of looking in *.info.yml).

Until a solution is agreed upon and implemented, avoiding the issue is the best we can do. Thankfully the workaround can often be temporary, as the custom preferred-install setting override can be reverted as soon as the needed patch makes its way into a release. Then it’s back to clean dist installs and smooth sailing, at least until the next info file patch comes along!

Jan 28 2020

Codebases often have deployment scripts, code style checking scripts, theme building scripts, test running scripts, and specific ways to call them. Remembering where they all live, what arguments they require, and how to use them is hard. Wouldn’t it be nice if there was some sort of script manager to help keep these things straight? Well, there is one, and (if you’re reading our blog) chances are you probably already have it in your codebase.

Composer already manages our PHP dependencies, so why not let it manage our utility scripts too? The scripts section of a Composer file offers a great place to consolidate your scripts and build an easy to use canonical library of useful tools for a project.

The official Composer documentation says it best:

A script, in Composer's terms, can either be a PHP callback (defined as a static method) or any command-line executable command. Scripts are useful for executing a package's custom code or package-specific commands during the Composer execution process.

Note: The Composer documentation is full of much more great information on the details of how the scripts section can be used.

If you used a Composer template to build your composer.json file, you likely found entries under scripts such as pre-install-cmd, post-install-cmd, etc. These “magic” keys are event names that correspond to events during the composer execution process.

"scripts": {
        "drupal-scaffold": "DrupalComposer\\DrupalScaffold\\Plugin::scaffold",
        "pre-install-cmd": [
            "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
        ],
        "pre-update-cmd": [
            "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
        ],
        "post-install-cmd": [
            "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
        ],
        "post-update-cmd": [
            "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
        ]
    },

Creating Custom Commands

Composer also allows for custom events and provides great detail in its documentation. Custom events can be used for just about anything and can be called easily with composer run-script my-event or simply composer my-event.

Shell Scripts/Commands

Composer scripts can be utilized to run anything that is available via the command line. In the example below, we simplify the execution of our code style checks and unit test execution.

"robo": "robo --ansi --load-from $(pwd)/RoboFile.php",
"cs-check": "composer robo job:check-coding-standards",
"phpunit": "composer robo job:run-unit-tests",
"test": [
    "@cs-check",
    "@phpunit"
],
"code-coverage": "scripts/phpunit/code-coverage"

Now we can call our tests simply using composer test.

Note that you can also define events that simply call other events by using the @ syntax as seen in our test event.

No longer do you need to remember the correct directory, command name, and mile long list of arguments needed to call your script, just store that all in a composer event and call it with ease.

PHP Commands

Composer scripts can also call PHP functionality with a tiny bit of additional setup.

First you must inform the autoloader where your class lives.

"autoload": {
    "classmap": [
        "scripts/composer/SiteGenerator.php"
    ]
},

Then create a new event and point it to a fully name-spaced static method, and you are set.

"generate-site": "ExampleModule\\composer\\SiteGenerator::generate",

Listing Available Commands

If you ever forget what scripts are available, composer list displays a handy list of all the available commands.

Available commands:
  about                Shows the short information about Composer.
  archive              Creates an archive of this composer package.
  browse               Opens the package's repository URL or homepage in your browser.
  check-platform-reqs  Check that platform requirements are satisfied.
  clear-cache          Clears composer's internal package cache.
  clearcache           Clears composer's internal package cache.
  code-coverage        Runs the code-coverage script as defined in composer.json.
  config               Sets config options.
  create-project       Creates new project from a package into given directory.
  cs-check             Runs the cs-check script as defined in composer.json.
  ...

Example Use Cases

Each event can execute multiple commands as well. So for example, if you want to put your deployment script directly into composer.json you can.

This is not a complete deployment script, but it shows the flexibility Composer offers.

"deploy": [
  "drush config-import -y",
  "drush cc drush",
  "drush status",
  "drush updatedb -y",
  "drush cr"
],

Now code deploys to a local environment or a production environment can all use composer deploy, and you can have confidence that the same steps are running everywhere.

This can be integrated with front-end commands as well. Back-end developers no longer need to find the theme, navigate to it, find the build commands and run them. They can simply run something akin to composer example-build-theme-event where all theme building steps are handled for them.

Summary

Of course, none of this is revolutionary; there are many other ways to achieve similar results. We are simply calling shell scripts or PHP methods in a fancy way. However, a tool does not need to be revolutionary to be useful. Composer scripts are a great way for a team to better document their scripts and increase their visibility and ease of use, especially if the team is already committed to Composer as part of the project’s tooling.

If knowing is half the battle, then hopefully this helps your team remember where they put their tools and how to use them more easily.
