Nov 12 2021

When your client needs a website migration, talking through the process feels a little like planning a real-life move to a new home. No one feels especially excited about packing up and relocating everything they value, and there’s a lot that can go wrong. You have to account for a lot of details to get the move right. But the benefits of moving your valuables to a better place are what make the entire complicated process worthwhile.

Whether your client’s website migration constitutes a move to another hosting service, an upgraded CMS, or a whole new platform, the process carries an element of risk. You need to manage both the infrastructure of your client’s site and its content with equal levels of care. Otherwise, the migration could introduce broken links, duplicate content, or a whole host of issues. Even worse, your client could lose the hard-won SEO rankings they’ve established over the years.

However, if you plan correctly, a site migration doesn’t just offer the chance to secure a better, more reliable platform. By leveraging a few key steps, you can optimize your client’s re-platformed site and raise their search engine rankings to new heights.

To return to the moving analogy, a website migration is a little like finding the right places for all your furniture in a new home before you get there. With the right amount of planning and know-how, you can ensure your clients see the most benefits from a migration experience.

Two Approaches to Site Migration Carry Different Risk Levels for Clients

Whether your client needs a site migration to update their branding, centralize their digital properties, or escape an outdated CMS, their digital business depends on a successful migration. But before you or your web development partner can start planning, you face a fork in the road.

One way is a seemingly more straightforward path that involves keeping your client’s existing site running while you migrate a duplicate version to its new home. Then, once everything is in the right place, you redirect all its traffic to the new site. This “flip-a-switch” migration is less complex, but it also carries more risk. If any links, tags, or site features are broken, your team must scramble to make repairs to avoid impacting SEO performance. 

Alternatively, you can offer a more iterative approach that prioritizes the parts of the site that are of greatest importance. This kind of piecemeal approach also allows your client to keep publishing without interruption as your team migrates sections of their site. Plus, as the migration progresses, you can check the performance of the site and its SEO ranking to resolve any issues as they arise.

The downside? Iterative migrations are far more complex and expensive to carry out. While larger clients with special publishing needs like a huge media brand will value the flexibility of an iterative approach, smaller firms may find the additional costs a deal-breaker.

Key Checkpoints to Ensure an SEO-Friendly Website Migration

Every business recognizes the value of their site ranking highly on search engines. But the migration process involves so many moving parts that SEO can be overlooked. To retain and even improve search performance, you should factor the following steps into a website migration.

Perform a Site Audit to Establish SEO Goals for the Migration

In planning a client’s website migration, you should clarify their objectives. Are some parts of the site performing exceptionally well? Those pages should be reviewed closely to ensure their rankings are protected. Conversely, legacy pages that are performing poorly or are otherwise out of date may not need to be part of the migration to a new site.

You should evaluate all aspects of your client’s website rankings from the beginning. For one, setting a baseline will help you identify potential issues with their current website that a migration should address. Plus, these benchmarks will provide useful KPIs for your team to measure the new site’s performance against once the migration is complete.

Address Site Performance & Page Experience Prior to Migration to Avoid Negative SEO Implications

In both iterative and “flip-a-switch” migration approaches, you should incorporate best practices to improve SEO and site performance each step of the way. As of 2020, Google began incorporating page experience signals in its search rankings. The faster your client’s migrated site loads, the higher it will rank on results pages. Identifying and addressing areas of the site with slower load times early will help mitigate potential SEO problems later.

As you plan your client’s migration, you can run scripts that search for potential issues such as oversized site images that aren’t responsive to different devices or pages missing meta tag information. Once these issues are identified, you can resolve them before the content is moved to the new site.
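As a sketch of such a script, the following shell function flags exported HTML pages that lack a meta description tag. The function name and directory layout are hypothetical; a real audit would likely use a crawler or CMS report instead:

```shell
# Hypothetical audit sketch: list HTML files that have no meta description.
# The directory argument and function name are illustrative, not a real tool.
audit_meta_descriptions() {
  find "$1" -name '*.html' | while read -r page; do
    if ! grep -qi '<meta name="description"' "$page"; then
      echo "missing meta description: $page"
    fi
  done
}
```

A similar pass could grep for oversized images or missing alt attributes before any content is moved.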

Evaluate Site Architecture and Content Structure

You should thoroughly examine details such as directory structures, URL aliases, and internal links on your client’s current site pages. To prevent potential errors, each internal link should use the full URL for the corresponding page rather than relying on a redirect. Making sure each link will function as expected on the new site allows both users and search engines to find the content they expect on your client’s site.

Additionally, if your client’s migration involves merging multiple sites into one, your team should ensure any duplicate content, such as syndicated features, is tagged with the correct canonical URL. Identifying potential SEO issues like these should be part of your agency’s strategy from the start rather than a plan to put out fires after the migration.

A Right-Fit CMS Is Key to Improved SEO Performance, Editorial Experience, and Website UX

For many clients, an upgraded CMS is a prime motivation for a migration project. Upgrading a website from Drupal 8 to 9 is often relatively straightforward. However, the transition from Drupal 7 to a newer version of the platform requires enough developer labor that it can be as complex an undertaking as switching to WordPress.

To recommend the right CMS for your clients, you should factor in the technical expertise of their content team and the potential platform features that will best serve their digital priorities. A modern CMS won’t just provide an improved editorial experience; it will also allow your clients to manage critical, SEO-enhancing page elements such as titles, URLs, and meta-tag details. Plus, applying semantic markup such as H1 tags along with helpful CMS plugins like Yoast and AccessiBe will encourage your client to incorporate best practices into their CMS. 

Providing a more inclusive, readable, and positive experience for all users should be the goal for every website migration. However, the importance of the implementation process can’t be overstated. You can have all the right tools to boost SEO performance for your client’s website. But if what’s output to the front-end experience isn’t thoughtfully built, the site’s performance and SEO ranking will suffer. 

Identifying potential problems with your client’s content early will ensure any issues impacting their site’s SEO performance won’t follow them to their new home. With proper planning, you can ensure any migration will leave their site — and their business — in a better position than before.

Oct 27 2021

The configuration management system in Drupal allows for configuration to be imported and exported between the file system and the database. This enables the management of configuration with version control and allows configuration to be shared across environments (e.g. dev, staging, prod).

At its core it is quite simple, but modern website development workflows and processes can quickly lead to edge cases which, if not handled carefully, can trip up even the most experienced developer and will surely confuse anyone new to Drupal.

Key Concepts

Configuration Instances

There are two independent and complete instances of the configuration when using Drupal's default configuration tooling.

Note: There are exceptions to this as you begin adding modules such as config_ignore, config_split, etc. to your workflow. It would be impossible to cover all of the possible permutations; however, if they are configured correctly, the complexity they introduce will likely be hidden in the day-to-day workflows described below.

Active Configuration

One instance of the configuration lives in the database. The database configuration is used when building pages and rendering data. This is essentially the only configuration Drupal knows about and is often referred to as the “active state.” The database configuration is altered when updating information within the admin UI, such as a change to the site name.

Codebase Configuration

The other instance of the configuration lives in a folder full of yml files within the codebase. The location of this folder is defined by $settings['config_sync_directory']. These files are typically tracked via version control and deployed with the codebase.

Configuration Syncing

The two instances are independent, but can be synced using the Drupal admin UI or via Drush commands. 

  • Configuration import is the process of syncing the changes from the yml files into the database. This is often performed with drush config:import.
  • Configuration export is the process of syncing the changes from the database to the yml files in the codebase. This is often performed with drush config:export.

Configuration Gone Wrong

With a baseline understanding, let’s explore some common scenarios where improper management of the configuration has unwanted and destructive side-effects. 

Example 1 - Deleted Files

Scenario: You submit your PR and it deletes configuration files. During code review, one of your colleagues asks, "Why is this configuration file being deleted?"

Initial state while working on example-branch.

Code Configuration: Config A, Config B
Active Configuration: Config A, Config B

The latest commits from the main branch are merged into example-branch.

Code Configuration: Config A, Config B, Config D
Active Configuration: Config A, Config B

Configuration is updated in the admin UI.

Code Configuration: Config A, Config B, Config D
Active Configuration: Config A, Config B, Config C

Configuration is exported and the local diff shows the deletion of configuration D.

Code Configuration: Config A, Config B, Config C
Active Configuration: Config A, Config B, Config C

What went wrong?

The active configuration was not synced with the code configuration when the changes to configuration file D were merged in. When the latest commits from the main branch were merged into example-branch, a configuration import should have been performed to ensure that the active configuration in the database precisely mirrors the configuration tracked in code.

Example 2 - Lost Configuration

Scenario: You are configuring a new feature in the Drupal UI and all of your work is wiped out.

Initial state while working on example-branch-one.

Code Configuration: Config A, Config B
Active Configuration: Config A, Config B

Configuration is updated in the admin UI.

Code Configuration: Config A, Config B
Active Configuration: Config A, Config B, Config C

Switch to example-branch-two to continue work on another issue.

Code Configuration: Config A, Config D
Active Configuration: Config A, Config B, Config C

Import configuration and discover that all of the work that went into making the C configuration changes was lost, and the B configuration is missing too.

Code Configuration: Config A, Config D
Active Configuration: Config A, Config D

What went wrong?

Configuration should have been exported before switching branches. It’s best to always export configuration and commit it to the current branch before moving on.

Example 3 - Extra Configuration Files

Scenario: You submit your PR and it includes a bunch of new configuration files. During code review, one of your colleagues asks, “Why is this unrelated configuration in this PR?"

Initial state while working on example-branch-one.

Code Configuration: Config A, Config B
Active Configuration: Config A, Config B

Switch to example-branch-two to continue work on another issue.

Code Configuration: Config A
Active Configuration: Config A, Config B

Additional configuration changes are made in the UI.

Code Configuration: Config A
Active Configuration: Config A, Config B, Config D

Configuration is exported and configuration B is included, but it is unrelated to the current work/branch.

Code Configuration: Config A, Config B, Config D
Active Configuration: Config A, Config B, Config D

What went wrong?

Configuration should have been imported after switching branches. This would have removed configuration B from the active configuration and prevented it from being exported.

Managing Configuration Correctly

Understanding the state of the two configuration instances and how they relate as the code and database change throughout the development process will greatly aid in reducing frustration and producing clean diffs for code review.

Initial state while working on example-branch-one.

Code Configuration: (empty)
Active Configuration: (empty)

Configuration is updated in the admin UI.

Code Configuration: (empty)
Active Configuration: Config A, Config B

Configuration is exported to the file system and committed to example-branch-one.

Code Configuration: Config A, Config B
Active Configuration: Config A, Config B

Switch to example-branch-two to continue work on another issue.

Code Configuration: Config A, Config C
Active Configuration: Config A, Config B

Configuration is imported.

Code Configuration: Config A, Config C
Active Configuration: Config A, Config C

Additional configuration changes are made in the UI.

Code Configuration: Config A, Config C
Active Configuration: Config A, Config C, Config D

Configuration is exported. Only the expected configuration D is included in the diff, and configuration B is safely tracked in the example-branch-one branch.

Code Configuration: Config A, Config C, Config D
Active Configuration: Config A, Config C, Config D

Lessons Learned

Always assume that the codebase configuration and database (active) configuration are not in sync after either of them changes. Once you adopt the mindset of keeping the configuration files and the active configuration in sync, the habits of importing and exporting configuration become second nature.
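That habit can be sketched as a small helper that wraps every branch switch with an export and an import. This is a hedged illustration: the function name, config path, and commit message are our own assumptions, not part of Drush or Git:

```shell
# Hypothetical helper: export active config before leaving a branch, then
# import the destination branch's config so the database matches the code.
safe_branch_switch() {
  branch="$1"
  drush config:export -y &&
  git add config/sync &&
  git commit -m "Export active configuration" &&
  git checkout "$branch" &&
  drush config:import -y
}
```

With a routine like this, the "lost configuration" and "extra configuration files" scenarios above become much harder to stumble into.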

As someone who has done considerable work with configuration, writing this was a great exercise that still tripped me up a few times. Managing state is never easy in programming, especially when you mix version control in with it. However, hopefully these illustrations will help with visualizing the situation the next time you are trying to figure out what configuration is where and what it all means.

Jan 26 2021

Authoring content in one place and publishing it to multiple locations offers many benefits. Reducing the editorial effort needed to keep content synchronized across multiple locations is often the primary benefit. However, providing a consistent authoring interface, a canonical cross-brand asset library, and increasing monetization of existing content are often ancillary benefits that can be just as important.

We always get excited when we can partner with our clients to build a new publishing experience or improve an existing one, as we love to use technology to solve problems. However, the excitement can soon be tempered by the reality of potential challenges. Through our experience building syndication systems, we have solved many challenges and found some tactical approaches and strategic principles that guide us through the issues that must be considered along the way.

Complexity

Should we even be doing this and is it really worth it? This should always be the first question. Sometimes the shiny new functionality might not actually be the best idea when all factors are considered. Some questions to ask while making this decision:

  • How much content needs to be shared?
  • How many places will it be shared?
  • How often will the content change?
  • How quickly do edits need to be synchronized?
  • How will associated media such as images and video be shared?
  • What are the costs/time considerations with manually keeping the content in sync?
  • What are the costs to build and maintain an automated system and how much complexity does it add for the editorial team and developers?
  • Is there an existing solution or will a custom approach be needed?

Visibility

Depending on the complexity of the system, understanding what content is available and where it has been published is important, but can quickly become difficult to manage. Planning for this up-front will be essential to the long-term success of your system, ensuring a great user experience that scales beyond the proof-of-concept.

Providing contextual data on content edit forms that informs editors where a given piece of content has been syndicated and where the current updates will appear is a great start. Filterable content listing admin pages with batch functionality built in will also be valuable, allowing editors to easily understand and change where content is published at scale.

Syndication

Understanding the expectations around the syndication process is important for selecting the technology used and ensuring the system meets business expectations. If content needs to be live on consuming sites within 2 seconds, that is a much different requirement than 30 seconds, or even 5 minutes. Every layer between the end user and the syndication platform must be evaluated holistically with caching considerations also being taken into account.

SEO

Google isn’t a big fan of duplicate content, yet that is precisely what syndicating content seeks to do. Understanding this risk and mitigating it with a canonical URL and other measures is vitally important and can’t be an afterthought.

The creation and management of URLs is one of the primary challenges of sharing content and should be carefully considered from the start of a project. Some questions to consider:

  • Where are URL aliases generated?
  • What data is used to generate URL aliases?
  • Will the alias remain identical on all sites?
  • If the alias is different, do other sites need to know the alias for cross-site links?
  • Will there be cross-site search and what information will it need to build links?
  • How will sites know which site a piece of content canonically lives on and determine its domain?
  • Where will redirects be created and managed?

Editorial Interface

This is where things really get fun. There are many challenges and also many opportunities for efficiency, as reducing the editorial workload is likely what started this whole adventure in the first place. Some questions to consider through the process:

  • What level of control will be needed over the content on the consuming sites?
  • Will additional edits/updates need to be syndicated?
  • Are there any per site customizations? Do those take precedence over updates?
  • Where will curated lists of featured content be created and managed? Will that be syndicated as well?
  • Do editors need previews of the content on one or many of the client sites from the editing interface within the content syndication platform?
  • How will marketing pages built with Gutenberg or Layout Builder be shared and how will designs translate across sites?

Technical

There are a myriad of technical considerations and approaches depending on the requirements and the technology that is used. One of the first considerations should be to keep things generic and avoid tight couplings between the data models in the syndication platform and the consuming sites. Some field values can be mapped to identical fields in the client sites such as taxonomy terms and author information, as that data will likely be needed for querying data for lists. However, all other data can often be stored in a JSON field that can be decoded and made available to templates.

This all begins to border on a decoupled approach, which helps set a project up for success if the front end goes fully decoupled later. Or perhaps now is the time to consider going decoupled while a foundational evaluation of editorial workflows is already in progress.

Sharing is Caring

They say sharing is caring, but only share content if you can take the time to care about all of the details along the way. With thoughtful consideration at every step, from challenging the very need for the functionality all the way to the technical details that set canonical metatags, you can ensure a successful outcome. There is no easy answer, but hopefully this helps you avoid many of the pitfalls. If you have a project with content syndication coming up or have experience with it, drop us a line on Twitter @ChromaticHQ; we would love to hear your thoughts.

Jan 21 2021

PHPStorm comes with a handy feature called live templates, which helps you avoid repetitive typing.

Let’s say that you are writing SASS and you find yourself repeating a breakpoint over and over:

@include breakpoint($breakpoint-md) {
    // Some CSS attributes.
}

With live templates, you can instead type a shortcode that will generate the breakpoint for you. In this example we'll be using ibk as an abbreviation for include breakpoint and we will generate another variant that includes the $breakpoint-md as an argument passed to the mixin.

  • ibk will generate @include breakpoint() with no arguments predefined and position the cursor inside the mixin parameter.
  • ibk:md will generate @include breakpoint($breakpoint-md).

To begin setting up a Live Template, go to File -> Settings -> Editor -> Live Templates.

You’ll see the following window with some live templates already available, depending on which plugins you have installed. For this example we will generate a new “Template group” by clicking on the “+” icon:

Live Template configuration interface

PHPStorm is going to prompt for a name for the new group. In this example, we will name the group SASS.

Once you have the SASS template group created, you need to click on the “+” icon again and add the actual live template.

You can now add the abbreviation ibk, a short description, and the code.
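The template text for ibk might look like the following sketch, where $breakpoint$ is an editable template variable and $END$ marks the final cursor position (the variable name $breakpoint$ is our choice; only $END$ is a predefined keyword):

```scss
@include breakpoint($breakpoint$) {
    $END$
}
```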

The cursor will be positioned on $breakpoint$ first and then on $END$.

Define the language as CSS, check the box “Reformat code according to style”, hit “Apply” and “OK” to close the dialog box. The result should look something like this:

Live Template configuration interface

The following shows the Live Template in action:

Live Template in use

If you want to save more time, you can avoid typing the breakpoint variable as well, with a variant of the breakpoint Live Template you’ve created.

Live Template creation interface

With this variation, you eliminate the need to explicitly type $breakpoint-md as an argument for the mixin. See the Live Template in action below:

Live Template in use

Wrapping a Selection with Live Templates

If you need to wrap a selection in a breakpoint, you can add a new parameter $SELECTION$ to a live template:

Important: You need to use the keyword $SELECTION$ for this to work.

After editing ibk:md the code should now look like this:

@include breakpoint($breakpoint-md) {
    $SELECTION$$END$
}

You can achieve the following by selecting the code you want to wrap and using the keyboard shortcut Control + Alt + J.

Live Template in use

Live Templates are great for increasing development speed and productivity whenever you must repeat code. Reach out to us at @ChromaticHQ on Twitter and let us know how you have used them and how they have helped improve your development process.

Jan 11 2021

When developing or maintaining a Drupal theme, there is often a need to understand why a theme needed to override a given template. A diff of the template in the active theme compared to the template in the base theme would make this easy to understand. A drush command that found the template files for you and output a diff would solve this quite nicely.

Template Diff

The Template Diff module does just that. It provides a drush command that accepts a template name and will display the diff between two specified themes. If no themes are specified, it defaults to comparing the active theme and its base theme.

Examples

Compare the active theme and its base theme:

drush template_diff:show views-view

Compare "foo_theme" vs "bar_theme":

drush template_diff:show views-view foo_theme bar_theme

Compare "some_theme" and its base theme:

drush template_diff:show views-view some_theme

The output will look something like this:

$ drush template_diff:show views-view
 [notice] Comparing chromatic (active theme) and stable (base theme).
- stable
+ chromatic
@@ @@
 {#
 /**
  * @file
- * Theme override for main view template.
+ * Default theme implementation for main view template.
  *
  * Available variables:
  * - attributes: Remaining HTML attributes for the element.
- * - css_name: A CSS-safe version of the view name.
+ * - css_name: A css-safe version of the view name.
  * - css_class: The user-specified classes names, if any.
  * - header: The optional header.
  * - footer: The optional footer.
…

If you have ideas on how to improve this, submit an issue so we can make understanding template overrides even easier.

Oct 26 2020

Managing Drupal configuration files and keeping them in sync with a database requires intense attention to detail and process. The importance of following the correct sequence of events can be documented, but mistakes still happen. Additionally, trying to get our robot friends such as Dependabot to play by the same rules presents a challenge.

The ideal sequence of configuration commands can even vary by where the configuration operation is performed. This message from Drupal Slack illustrates the requirements nicely.

Yes it should be like this:

Locally:

  1. drush updb

  2. drush config:export + commit

Deploy:

  1. drush updb

  2. drush config:import

Missing Config Updates

When Drupal core or a Drupal module releases an update, it can alter the structure or values of configuration. This is fine as long as those changes make it back into the tracked configuration files. If the “Local” process described above is followed, this isn’t an issue because the changes are exported and tracked.

However, if a tool such as Dependabot performs core or contributed module updates, it will not have a database to run updates on, and it will most certainly not be able to export those config changes and commit them. After the Dependabot update is deployed, the active database configuration and the config tracked in code are no longer in sync. Subsequently, the output of drush config:export will now show a diff between the active configuration in the database and the tracked configuration files. This can lead to unintended consequences down the line.

Config Process & Development

Suppose a Dependabot PR that resulted in config changes as described above was just merged. Then a developer grabs a fresh database and the latest code, and they go about their work while making some changes to the site’s configuration locally. They export their changes, but the diff now includes unrelated changes. This is a result of the config changes made in the database from the previously deployed Dependabot update that were never exported and committed.

This pattern will repeat and become harder to untangle until someone decides to include the changes in an unrelated PR or create a separate PR to bring things back into sync. This isn’t fun for anyone. The best solution is to avoid this entirely.

At Chromatic, we have made attempts to monitor release notes for signs that a config change may be included, but naturally some slip through the cracks. The only way to truly avoid this and keep ourselves accountable is to automate the process of checking for config changes on every pull request.

Automate a Configuration Checking Solution

Any code branch should be able to import its configuration into a production database and subsequently export the active configuration without a resulting configuration diff. Thus the solution becomes: run a standard deployment, which includes a configuration import; then run a configuration export (drush config:export); and verify that the output confirms everything is synced.

 [notice] The active configuration is identical to the configuration in the export directory (../drupal-config/sync).

Note: This will likely only be done in pre-production environments, but that is where we want to catch our problems anyway!
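The check itself boils down to string matching on the Drush output. Here is a minimal sketch, assuming a POSIX shell; the function name is our own, not part of any tool:

```shell
# Succeeds only when drush config:export reports no pending changes.
config_is_synced() {
  echo "$1" | grep -q 'The active configuration is identical'
}

# Illustrative usage in a build step:
#   config_is_synced "$(drush config:export -y 2>&1)" || exit 1
```

Matching on the message text rather than the exit code matters here, because config:export succeeds even when it writes out a diff.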

Our Solution

We have automated our deployments to pre-production environments, such as Tugboat, with a standardized Ansible-based deployment tool. In a recent release we added functionality to conditionally run drush config:export and check the output for “active configuration is identical,” and we were in business.

# Identify Drupal configuration changes in core or contrib updates that need
# to be exported and may have been missed.
- name: config structure check
  shell: "{{ deploydrupal_drush_path }} config:export -y"
  args:
    chdir: "{{ deploydrupal_core_path }}/sites/{{ deploydrupal_site_name }}"
  register: deploydrupal_config_check_result
  when:
    - deploydrupal_config_check_structure
  failed_when: "deploydrupal_config_check_structure_string not in deploydrupal_config_check_result.stderr"

Every build now checks for configuration synchronization between our code and database, keeping us accountable as we go.

Checking Configuration Content Changes

The process outlined above will catch changes to configuration schemas, as the changes are made via importing configuration and not blown away by it. However, the potential remains for a config change made in an update hook (run via drush updatedb) to be lost.

A great example of this is an update to the Google Authenticator login module that changed the name of certain configuration keys. The resulting Dependabot PR failed to run in our QA environment, alerting us to the problem. It was resolved by following the correct steps locally and committing the updated configuration changes. Due to Google Analytics loading on every page and failing loudly, it was caught, but the issue might not always be so easily found. Detecting this problem programmatically is quite simple in theory, but more difficult in practice. The basic process is:

  • Ensure all configuration is imported and up to date (drush config:import).
  • Run the update hooks (drush updatedb).
  • Export configuration and check for a diff (drush config:export).
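Sketched in shell, the three steps might be wrapped as follows; the function name is ours, and the drush commands are exactly the ones listed above:

```shell
# Hypothetical check: would running the update hooks leave unexported config?
check_update_hook_drift() {
  drush config:import -y &&
  drush updatedb -y &&
  drush config:export -y 2>&1 | grep -q 'The active configuration is identical'
}
```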

This is fine until you realize that the drush deploy command and the suggested deployment command order noted above make it clear that we need to run our database updates (drush updatedb) before our configuration import (drush config:import) when deploying code.

Any pre-production deployment process should mirror the sequence of commands run during a production deployment to properly test the changes, which creates a problem. There are various ways around this with enough additional infrastructure and creativity, but based upon our research, none of them could be implemented with changes to a deployment script alone.

Many updates that result in a change to configuration keys/values will often cause a failed build during the pre-production deployment process, which accomplishes the same end goal of alerting us to the configuration synchronization problem. However, that won’t always be the case, so we will continue to search for a way to fully validate the configuration.

The Value of Configuration Maintenance

The configuration system empowers us to do incredible things. However, it must be used carefully and maintained meticulously if we want it to be the canonical source of truth for our sites. The subtle yet important differences in workflows under different scenarios must be respected by everyone, humans and robots alike. With proper care, well-managed configuration will continue to foster reliable deployments and simplify the development and code review process for everyone.

Oct 26 2020

What does it take to get a Drupal codebase onto a server and make it ready to serve a website? There are more steps than expected, and it is easy to forget the details if you don’t do it frequently. Additionally, many of these same steps need to be performed during each code deployment. Just a few of the steps include:

  • Get the latest code.
  • Run Composer commands.
  • Create a public files directory.
  • Run deployment commands.
  • Build theme assets.
  • Secure file permissions.
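
Strung together naively, the steps above might look like the shell function below. Every path, branch name, and theme location here is an assumption for illustration, not our actual tooling.

```shell
# Naive manual deployment; paths, branch, and theme location are assumptions.
deploy_site() {
  site_root="$1"
  cd "$site_root" || return 1
  git pull origin main                                            # Get the latest code.
  composer install --no-dev --optimize-autoloader                 # Run Composer commands.
  mkdir -p web/sites/default/files                                # Create a public files directory.
  ( cd web/themes/custom/mytheme && yarn install && yarn build )  # Build theme assets.
  drush deploy -y                                                 # Run deployment commands.
  chmod 444 web/sites/default/settings.php                        # Secure file permissions.
}
```

Repeating this by hand (or maintaining a copy of it per project) is exactly the burden described below.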

Automating the Setup

At Chromatic we love to automate processes like this. Even when a task requires only one or two manual steps, those steps add up when repeated many times per day, and the value of automating them is well worth the effort. Our automation journey began with a humble shell script, but we soon realized it was difficult to maintain and that a one-size-fits-all solution didn't fit the needs of all our clients.

Selecting Ansible as an Automation Tool

Ansible is a flexible tool that we use for tasks ranging from provisioning and managing servers down to running cron on all of our Drupal sites with one command. Some of the key factors that helped us decide to use it for automating deployments were:

  • Ansible is idempotent and allows us to define the desired state, only taking action when necessary to achieve that state as opposed to a script that would just run a specific set of commands.
  • Ansible does not depend on any agents. If you can SSH to a box, Ansible can act on that box.
  • Ansible configuration is stored in YAML, and we like YAML.
  • Ansible has a lot of open-source role support via Ansible Galaxy.

Required Features

Git Operations

The deployment tool needs to support git operations to clone the latest version of your codebase with configurable branches.

Initial Site Deployment

During early development, a site might not have a persistent database. The deployment tool needs to support installing from site configuration using the drush si (site:install) command.

Existing Site Deployment

Deploying to an existing site needs to be supported. A variable sites/* folder name should allow for per-site targeting. Standard Drupal deployment commands such as drush deploy can be run as well.

Configuration Checking

Keeping the tracked configuration up to date and in sync with the active config in the database can be hard at times. An optional deployment step is needed that checks if they are in sync and can fail the deployment if they are not.

File Permission Management

The deployment process needs to handle setting the proper permissions for the various files and folders within Drupal such as the settings.php file and the public files directory.

Ansible Setup

Setup requires adding several files to your repository and installing Ansible on whatever machine will trigger the deployment. This is probably your local environment for development and some other environment for production (e.g. a server of your own, GitHub Actions, Jenkins, etc.). Below are examples of the files with some basic defaults; these can be expanded to accommodate more complex needs. Note that we store these files in an ansible/ directory, so the paths reflect that.

ansible/requirements.yml tells Ansible which roles it needs to install when it runs. In this case, we only need our chromatichq.deploy-drupal role.

---
# Ansible required roles.
- src: chromatichq.deploy-drupal
  version: "2.20"

ansible/hosts.yml defines the hosts that Ansible is able to connect to. Note that named groups such as EXAMPLE_SITE_PRODUCTION belong under a children key rather than being nested inside hosts:

---
all:
  hosts:
    localhost:
      ansible_connection: local
  children:
    EXAMPLE_SITE_PRODUCTION:
      hosts:
        mysite.domain.com:
          ansible_host: 63.162.105.42

ansible/example-playbook-configuration.yml provides the configuration that allows for customized deployment options to fit your codebase and hosting environment.

---
- hosts: all

  vars:
    deploydrupal_repo: "[email protected]:ChromaticHQ/chromatichq.com.git"

    deploydrupal_checkout_user: "root"
    deploydrupal_apache_user: "www-data"

    deploydrupal_dir: "/var/www"
    deploydrupal_code: false
    deploydrupal_redis_flush: true
    deploydrupal_redis_host: "redis"

    deploydrupal_npm_theme_build: true
    deploydrupal_npm_theme_path: "{{ deploydrupal_dir }}/web/themes/chromatic"
    deploydrupal_npm_theme_run_commands:
      - yarn
      - yarn build:preview

    deploydrupal_config_check_structure: true

  roles:
    - chromatichq.deploy-drupal

The call to run the playbook will look something like this:

ansible-playbook -v --limit=EXAMPLE_SITE_PRODUCTION -i "/path/to/repo/ansible/hosts.yml" "/path/to/repo/ansible/example-playbook-configuration.yml"

Our Solution

With the considerations above, we created the chromatichq.deploy-drupal Ansible role. It standardizes the entire process with best practices provided through sensible defaults and many optional configuration options to fit custom needs. All of the steps are defined in YAML using standard Ansible tools.

Testing Deployments

We are big proponents of using Tugboat to create a testing environment for every pull request. Using the chromatichq.deploy-drupal role allows us to test our standardized deployment process along with any changes made to the site’s codebase. This gives us increased confidence that what we see on Tugboat/QA is what we can expect when deploying to production.

Benefits of Automation & Standardization

Deployment best practices evolve and bugs will be found throughout the life of a website. When that happens, a centralized Ansible role lets us contribute fixes back to a single repository where all of our projects benefit from them, without maintaining a deployment script in every site repository. We also get a chance to clearly document the what and why of each change, as the update is no longer just a fix in an application repo but often a new release of a tool with proper documentation.

No tool will fix every problem, and our Ansible role for standardizing Drupal deployments is no exception. However, we love having a process that allows us to easily set up standardized deployments with minimal effort to keep our sites deployed correctly and securely. We look forward to hearing how you use the tool, and collaborating on refining and improving the deployment process together.

Mar 13 2020

Applying patches can be dicey when there are untracked changes in the files being patched. Suppose you find a patch on Drupal.org that adds a dependency to a module along with some other changes. It should be pretty simple to apply, right? If you are using Composer to manage your modules and patches it might not be so simple.

Understanding the Problem

When Composer installs a Drupal module, it uses the dist version by default, but this version includes project information appended to the bottom of the *.info.yml file by Drupal’s packaging script:

# Information added by Drupal.org packaging script on 2019-05-03
version: '8.x-1.1'
core: '8.x'
project: 'module_name'
datestamp: 1556870888

When a patch also alters one of those lines near the end of the file, such as when a dependency is added, then the patch will fail to apply.

Preferred Install

Thankfully, Composer allows you to specify the installation method using the preferred-install setting.

There are two ways of downloading a package: source and dist. For stable versions Composer will use the dist by default. The source is a version control repository.

Installations that use the source install avoid the packaging step that alters the *.info.yml file, and thus allow patches to be applied cleanly (assuming there are no other conflicts).

With an understanding of the issue, the solution is quite easy. Simply denote the preferred installation method for the patched module and watch it work!

"config": {
    ...
    "preferred-install": {
        "drupal/module_name": "source",
        "*": "auto"
    }
},

Note: The flags --prefer-dist and --prefer-source are both available for composer install. Be sure to check your deployment and build scripts to prevent unexpected issues in the deployment pipeline.
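
For context, this setting typically sits alongside a patch declaration such as the one below. The cweagans/composer-patches plugin is assumed here, and the patch description and URL are hypothetical:

```json
{
    "config": {
        "preferred-install": {
            "drupal/module_name": "source",
            "*": "auto"
        }
    },
    "extra": {
        "patches": {
            "drupal/module_name": {
                "Add dependency (hypothetical example)": "https://www.drupal.org/files/issues/example.patch"
            }
        }
    }
}
```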

Other Solutions

Unfortunately, this is the only viable solution at the moment. There is an issue on Drupal.org for this, but a fix does not sound simple or straightforward.

The long term fix would need changes both on the d.o side and in core itself (since core would have to know to look in *.version.yml (or whatever) for the version info, instead of looking in *.info.yml).

Until a solution is agreed upon and implemented, avoiding the issue is the best we can do. Thankfully the workaround can often be temporary, as the custom preferred-install setting override can be reverted as soon as the needed patch makes its way into a release. Then it’s back to clean dist installs and smooth sailing, at least until the next info file patch comes along!

Jan 28 2020

Codebases often have deployment scripts, code style checking scripts, theme building scripts, test running scripts, and specific ways to call them. Remembering where they all live, what arguments they require, and how to use them is hard to keep track of. Wouldn’t it be nice if there was some sort of script manager to help keep these things all straight? Well, there is one, and (if you’re reading our blog) chances are you probably already have it in your codebase.

Composer already manages our PHP dependencies, so why not let it manage our utility scripts too? The scripts section of a Composer file offers a great place to consolidate your scripts and build an easy-to-use, canonical library of useful tools for a project.

The official Composer documentation says it best:

A script, in Composer's terms, can either be a PHP callback (defined as a static method) or any command-line executable command. Scripts are useful for executing a package's custom code or package-specific commands during the Composer execution process.

Note: The Composer documentation is full of much more great information on the details of how the scripts section can be used.

If you used a Composer template to build your composer.json file, you likely found entries under scripts such as pre-install-cmd, post-install-cmd, etc. These “magic” keys are event names that correspond to events during the composer execution process.

"scripts": {
        "drupal-scaffold": "DrupalComposer\\DrupalScaffold\\Plugin::scaffold",
        "pre-install-cmd": [
            "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
        ],
        "pre-update-cmd": [
            "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
        ],
        "post-install-cmd": [
            "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
        ],
        "post-update-cmd": [
            "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
        ]
    },

Creating Custom Commands

Composer also allows for custom events and provides great detail in its documentation. Custom events can be used for just about anything and can be called easily with composer run-script my-event or simply composer my-event.

Shell Scripts/Commands

Composer scripts can be utilized to run anything available via the command line. In the example below, we simplify running our code style checks and unit tests.

"robo": "robo --ansi --load-from $(pwd)/RoboFile.php",
"cs-check": "composer robo job:check-coding-standards",
"phpunit": "composer robo job:run-unit-tests",
"test": [
    "@cs-check",
    "@phpunit"
],
"code-coverage": "scripts/phpunit/code-coverage"

Now we can call our tests simply using composer test.

Note that you can also define events that simply call other events by using the @ syntax as seen in our test event.

No longer do you need to remember the correct directory, the command name, and the mile-long list of arguments needed to call your script. Just store it all in a Composer event and call it with ease.

PHP Commands

Composer scripts can also call PHP functionality with a tiny bit of additional setup.

First you must inform the autoloader where your class lives.

"autoload": {
    "classmap": [
        "scripts/composer/SiteGenerator.php"
    ]
},

Then create a new event, point it to a fully namespaced static method, and you are set.

"generate-site": "ExampleModule\\composer\\SiteGenerator::generate",

Listing Available Commands

If you ever forget what scripts are available, composer list displays a handy list of all the available commands.

Available commands:
  about                Shows the short information about Composer.
  archive              Creates an archive of this composer package.
  browse               Opens the package's repository URL or homepage in your browser.
  check-platform-reqs  Check that platform requirements are satisfied.
  clear-cache          Clears composer's internal package cache.
  clearcache           Clears composer's internal package cache.
  code-coverage        Runs the code-coverage script as defined in composer.json.
  config               Sets config options.
  create-project       Creates new project from a package into given directory.
  cs-check             Runs the cs-check script as defined in composer.json.
  ...

Example Use Cases

Each event can execute multiple commands as well. For example, if you want to put your deployment script directly into composer.json, you can.

This is not a complete deployment script, but it shows the flexibility Composer offers.

"deploy": [
  "drush config-import -y",
  "drush cc drush",
  "drush status",
  "drush updatedb -y",
  "drush cr"
],

Now code deployments to a local environment or a production environment can all use composer deploy, and you can have confidence that the same process is running everywhere.

This can be integrated with front-end commands as well. Back-end developers no longer need to find the theme, navigate to it, find the build commands and run them. They can simply run something akin to composer example-build-theme-event where all theme building steps are handled for them.

Summary

Of course, none of this is revolutionary; there are many other ways to achieve similar results. We are simply calling shell scripts or PHP methods in a fancy way. However, a tool does not need to be revolutionary to be useful. Composer scripts are a great way for a team to better document their scripts and increase their visibility and ease of use, especially if the team is already committed to Composer as part of the project’s tooling.

If knowing is half the battle, then hopefully this helps your team remember where they put their tools and how to use them more easily.
