
11 minute read Published: 14 Sep, 2021 Author: Matt Parker
Drupal Planet, Migrations

This is the sixth in a series of blog posts on writing migrations for contrib modules:

Stay tuned for more in this series!


While migrating off Drupal 7 Core is very easy, there are still many contrib modules without any migrations. Any sites built using a low-code approach likely use a lot of contrib modules, and are likely blocked from migrating because of contrib. But — as of this writing — Drupal 7 still makes up 60% of all Drupal sites, and time is running out to migrate them!

If we are to make Drupal the go-to technology for site builders, we need to remember that migrating contrib is part of the Site Builder experience too. If we make migrating easy, then fewer site builders will put off the upgrade or abandon Drupal. Plus, contributing to migrations gives us the opportunity to gain recognition in the Drupal community with contribution credits.

Problem / motivation

In D7, some modules define their own database table to store data in (if the D7 module implemented hook_schema(), then there’s a pretty good chance it defines its own database table).

In order to fully finish writing migrations for contrib modules that define custom tables, we need to know how to migrate data out of those custom tables.

For example, Environment Indicator version 7.x-2.x stores data about each of its environments in a table named environment_indicator_environment, which has the following structure:

  machine (varchar, length 32, unique constraint): Environment ID
  name (varchar, length 255): Environment label
  envid (serial, unsigned, not null, primary key): Internal ID
  regexurl (varchar, length 255): Regular expression pattern to run on the URL to determine if a user is on the environment
  settings (text, size big): Serialized data

… while the 4.x version of Environment Indicator (for D9) stores this data as configuration entities.

We need to migrate data out of the D7 custom table and into the D9 config entities.
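To make this concrete, a single row in the D7 table might look like the following sketch (all values are illustrative):

```
machine:  production
name:     Production
envid:    17
regexurl: https://prod.example.com
settings: (a PHP-serialized array with keys color, text_color, weight, position, and fixed)
```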

Proposed resolution

You may be wondering “What is a configuration entity? I thought nodes, taxonomy terms and user accounts were entities?” Nodes, taxonomy terms, and user accounts group data with fields, and are now called content entities. Similarly, a configuration entity in D9 groups configuration data with fields. But unlike content entities (whose IDs are usually numbers, e.g.: node 1, etc.), configuration entity IDs are usually machine names (e.g.: production, etc.). Also, while a content entity’s field names usually begin with field_, a configuration entity’s fields usually do not. Due to the similarities between content and configuration entities, they share a lot of code in D9, including the migrate destination plugin entity, which we will use.
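To make this concrete, here is a sketch of what one of these configuration entities could look like when exported as YAML (the file name and values are illustrative, and the export is abridged; the field names match the process mapping we build later in this post):

```yaml
# e.g.: an abridged, illustrative config export for an environment named 'production'
machine: production
name: Production
url: 'https://prod.example.com'
fg_color: '#def456'
bg_color: '#abc123'
```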

But what about migrating out of a custom table? The plugin in the list of core source classes that looks closest to what we want is SqlBase… but SqlBase is marked as “abstract” (meaning that we cannot use it directly), because its query() function is abstract. In plain English, this means that the SqlBase class doesn’t know how to get the data we want out of our custom table! We need to write our own custom source plugin, which extends SqlBase and implements its query() function.

Steps to complete

Let’s start by mapping the destination fields in the D9 configuration entity to the source fields in the D7 custom table, as we did for simple configuration:

  machine (string, no default) ← (copy) ← machine (string)
  description (text, default: '' (empty string)) ← (use default value) ← (doesn’t exist in D7)
  name (string, no default) ← (copy) ← name (string)
  (discarded) ← envid (integer; required in D7)
  url (uri, no default) ← (copy) ← regexurl (string)
  fg_color (string, default: '#D0D0D0') ← (copy) ← settings.text_color (string; a CSS color)
  bg_color (string, default: '#0D0D0D') ← (copy) ← settings.color (string; a CSS color)
  (discarded) ← settings.weight, settings.position, and settings.fixed

Given this information, we can write a migration configuration at migrations/d7_environment_indicator_hostname_environments.yml:

id: d7_environment_indicator_hostname_environments
label: Environment indicator hostname environments
migration_tags:
  - Drupal 7
  - Configuration
source:
  plugin: d7_environment_indicator_hostname_environment
process:
  machine: machine
  name: name
  url: regexurl
  fg_color: text_color
  bg_color: color
  description:
    - plugin: default_value
      default_value: ''
destination:
  plugin: 'entity:environment_indicator'

Note the custom source plugin, d7_environment_indicator_hostname_environment: this plugin doesn’t exist yet — we will write it shortly.

The migration’s process configuration should look familiar: we simply map source fields to destination fields. You’ll notice we’re leaving out the settings. prefix that we used in the mapping table for D7’s settings.text_color and settings.color: because we are writing our own source plugin, we can name the data fields whatever we want.

Finally, we specify the destination plugin entity:environment_indicator. This is the entity migration destination plugin that we mentioned earlier; plus the destination entity ID environment_indicator. We get this ID from the entity type that we’re migrating into — in this case, the configuration entity defined in environment_indicator-4.x’s Drupal\environment_indicator\Entity\EnvironmentIndicator class.

Writing a test

Before we get much further, we should write a test for the migration we just wrote. In tests/src/Kernel/Migrate/d7/MigrateHostnameEnvironmentsTest.php:

<?php

namespace Drupal\Tests\environment_indicator\Kernel\Migrate\d7;

use Drupal\Core\Database\Database;
use Drupal\environment_indicator\Entity\EnvironmentIndicator;
use Drupal\Tests\migrate_drupal\Kernel\d7\MigrateDrupal7TestBase;

/**
 * Tests migration of environment_indicator hostname environments.
 *
 * @group environment_indicator
 */
class MigrateHostnameEnvironmentsTest extends MigrateDrupal7TestBase {

  /** {@inheritdoc} */
  protected static $modules = ['environment_indicator'];

  /** {@inheritdoc} */
  protected function setUp(): void {
    parent::setUp();

    // Create the environment_indicator_environment table in the D7 database.
    // The schema definition here was copied from version 7.x-2.9.
    Database::getConnection('default', 'migrate')
      ->schema()
      ->createTable('environment_indicator_environment', [
        'fields' => [
          'machine' => [
            'type' => 'varchar',
            'length' => '32',
            'description' => 'Unique ID for environments.',
          ],
          'name' => [
            'type' => 'varchar',
            'length' => '255',
            'description' => 'Name for the environments.',
          ],
          'envid' => [
            'type' => 'serial',
            'unsigned' => TRUE,
            'not null' => TRUE,
            'description' => 'Primary ID field for the table. Not used for anything except internal lookups.',
            'no export' => TRUE,
          ],
          'regexurl' => [
            'type' => 'varchar',
            'length' => '255',
            'description' => 'A regular expression to match against the url.',
          ],
          'settings' => [
            'type' => 'text',
            'size' => 'big',
            'serialize' => TRUE,
            'description' => 'Serialized array with the configuration for the environment.',
          ],
        ],
        'primary key' => ['envid'],
        'unique keys' => [
          'name' => ['machine'],
        ],
      ]);

    $this->setUpD7EnableExtension('module', 'environment_indicator', 7202, 0);
  }

  /** Simulate enabling an extension in the D7 database. */
  protected function setUpD7EnableExtension($type, $extensionName, $schemaVersion, $weight) {
    $extensionName = strval($extensionName);
    Database::getConnection('default', 'migrate')
      ->upsert('system')
      ->key('filename')
      ->fields([
        'filename' => sprintf('sites/all/modules/%s/%s.module', $extensionName, $extensionName),
        'name' => $extensionName,
        'type' => strval($type),
        'owner' => '',
        'status' => 1,
        'bootstrap' => 0,
        'schema_version' => intval($schemaVersion),
        'weight' => intval($weight),
      ])
      ->execute();
  }

  /** Tests migrating hostname environments. */
  public function testHostnameEnvironmentMigration() {
    // Fixtures we can verify.
    $machine = 'production';
    $name = 'Production';
    $url = 'https://prod.example.com';
    $bgColor = '#abc123';
    $fgColor = '#def456';

    // Fixtures that won't be migrated.
    $envid = 17;

    // Set up the D7 environment and run the migration.
    $this->setUpD7HostnameEnvironment($machine, $name, $envid, $url, $bgColor, $fgColor, 11, 'top', FALSE);
    $this->executeMigrations(['d7_environment_indicator_hostname_environments']);

    // Load the D9 environment indicator and verify against the fixtures.
    $env = EnvironmentIndicator::load($machine);
    $this->assertInstanceOf('Drupal\environment_indicator\Entity\EnvironmentIndicator', $env);
    $this->assertSame($env->label(), $name);
    $this->assertSame($env->getUrl(), $url);
    $this->assertSame($env->getBgColor(), $bgColor);
    $this->assertSame($env->getFgColor(), $fgColor);
  }

  /** Add a D7 hostname environment to be migrated. */
  protected function setUpD7HostnameEnvironment($machine, $name, $envid, $regexUrl, $bgColor, $textColor, $weight, $position, $fixed) {
    $this->assertIsString($machine, 'Machine name must be a string.');
    $this->assertIsString($name, 'Name must be a string.');
    $this->assertIsInt($envid, 'Envid must be an integer.');
    $this->assertIsString($regexUrl, 'RegexURL must be a string.');

    $settings = [
      'color' => strval($bgColor),
      'text_color' => strval($textColor),
      'weight' => strval($weight),
      'position' => strval($position),
      'fixed' => boolval($fixed),
    ];

    Database::getConnection('default', 'migrate')
      ->upsert('environment_indicator_environment')
      ->key('envid')
      ->fields(['machine', 'name', 'envid', 'regexurl', 'settings'])
      ->values([
        'machine' => $machine,
        'name' => $name,
        'envid' => $envid,
        'regexurl' => $regexUrl,
        'settings' => serialize($settings),
      ])
      ->execute();
  }

}

You’ll notice a few new things in this test. First, we use Drupal’s Upsert query to insert or update a row in the database: an upsert ensures that the row we’re testing matches what we expect, without having to check whether a row with the same key already exists. Although we don’t take advantage of it here, this is useful when randomizing test fixture data. The equivalent raw SQL for Drupal’s Upsert varies by database backend: it becomes INSERT ... ON DUPLICATE KEY UPDATE in MySQL, and INSERT ... ON CONFLICT (...) DO UPDATE in PostgreSQL and SQLite.
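As a sketch (abbreviated, and not necessarily byte-for-byte what Drupal’s query builder emits), the upsert of the environment row corresponds to SQL along these lines:

```sql
-- MySQL:
INSERT INTO environment_indicator_environment (envid, machine, ...)
VALUES (17, 'production', ...)
ON DUPLICATE KEY UPDATE machine = VALUES(machine), ...;

-- PostgreSQL and SQLite:
INSERT INTO environment_indicator_environment (envid, machine, ...)
VALUES (17, 'production', ...)
ON CONFLICT (envid) DO UPDATE SET machine = excluded.machine, ...;
```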

Also new in this test is a setUp() function — if you recall from PHPUnit’s documentation on writing tests (which was extra reading in part 3 of this blog series), setUp() is run before every test function. In this case, we use it to set up a database schema which we copied from the 7.x-2.9 version of Environment Indicator, i.e.: the version that we’re migrating from. We also have to give the D7 module a row in D7’s system table (which we do in our test’s setUpD7EnableExtension() function).

The test itself (testHostnameEnvironmentMigration()) should look pretty familiar by now: we set up fixtures (using the setUpD7HostnameEnvironment() function to clean things up a bit), run the migration, and test that the fixture data was migrated to the destinations that we expected.

Writing a custom source plugin

Finally, we need to write a custom source plugin… in src/Plugin/migrate/source/d7/D7HostnameEnvironment.php:

<?php

namespace Drupal\environment_indicator\Plugin\migrate\source\d7;

use Drupal\migrate\Row;
use Drupal\migrate_drupal\Plugin\migrate\source\DrupalSqlBase;

/**
 * Drupal 7 Environment Indicator Hostname Environment source from database.
 *
 * @MigrateSource(
 *   id = "d7_environment_indicator_hostname_environment",
 *   source_module = "environment_indicator"
 * )
 */
class D7HostnameEnvironment extends DrupalSqlBase {

  /** {@inheritdoc} */
  public function fields() {
    return [
      'machine' => $this->t('Unique ID for environments.'),
      'name' => $this->t('Name for the environments.'),
      'envid' => $this->t('Primary ID field for the table. Not used for anything except internal lookups.'),
      'regexurl' => $this->t('A regular expression to match against the url.'),
      'settings' => $this->t('Serialized array with the configuration for the environment.'),
      'text_color' => $this->t('The text color of the environment indicator.'),
      'color' => $this->t('The background color of the environment indicator.'),
    ];
  }

  /** {@inheritdoc} */
  public function getIds() {
    $ids['envid']['type'] = 'integer';
    return $ids;
  }

  /** {@inheritdoc} */
  public function query() {
    return $this->select('environment_indicator_environment', 'eie')
      ->fields('eie', ['machine', 'name', 'envid', 'regexurl', 'settings']);
  }

  /** {@inheritdoc} */
  public function prepareRow(Row $row) {
    $settings = unserialize($row->getSourceProperty('settings'));

    $row->setSourceProperty('text_color', $settings['text_color']);
    $row->setSourceProperty('color', $settings['color']);

    return parent::prepareRow($row);
  }

}

We declare the source plugin ID in the @MigrateSource annotation — this has to match the source plugin ID that we reference in the migration (migrations/d7_environment_indicator_hostname_environments.yml).

Our custom source plugin class extends DrupalSqlBase, which in turn extends the SqlBase class we found earlier when we were looking for source plugins (DrupalSqlBase adds a few Drupal-specific checks and logic).

In the fields() function, we declare which data fields we are going to pass from this custom source plugin to the process part of the migration (this is where we declare text_color and color without the settings prefix). We declare settings and envid even though our migration doesn’t use them, because we need to handle these fields internally in this class.

In the getIds() function, we return the field envid and its type. Drupal 9’s migration subsystem uses the fields that you declare in getIds() to understand which data has been migrated and which data still needs to be migrated (the data returned by this function is also used when rolling back and resuming migrations).

In the query() function, we return a simple Drupal Select query to get data out of the D7 table.

The prepareRow() function runs on each result from the query declared in query(). Here we perform some post-processing: in this case, unserializing the data in the settings column and using it to populate the (unprefixed) text_color and color fields we declared in fields().

Next steps

When you’re writing a custom source plugin, it might be a good idea to make all of the D7 data fields available (provided that it doesn’t add too much additional complexity): another module might extend the one you’re working on, and might want to migrate data from the fields you decided to ignore.

For a more complete example, check out my patch to migrate configuration from environment_manager, which also includes a custom process plugin.

Starting next week, I’ll be taking a bit of a break from this blog series — new posts won’t be coming out as often — but I hope to eventually explore custom process and destination plugins, how to migrate content, and how to migrate data from custom field types defined by a module.

The article Easy commit credits with migrations, part 6: Migrating data from a custom table first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!


6 minute read Published: 7 Sep, 2021 Author: Matt Parker
Drupal Planet, Migrations

This is the fifth in a series of blog posts on writing migrations for contrib modules:

Stay tuned for more in this series!



Problem / motivation

In the last blog post, we walked through the process of creating a simple configuration migration — but I noted that, even after you’ve built the migration, when you get to the “What will be upgraded?” step in the migration wizard, the module will still show up in the list of “Modules that will not be upgraded”. This happens because core’s Migrate Drupal UI has no way of knowing whether you’ve written all the migrations that you intended to write!

If you look closely at the “What will be upgraded?” step, you’ll see there is a row for each module that has stored data on the D7 site — that is to say, D7 modules which do not store data are not listed; and D9 modules are only mentioned if they declare a migration for the data in one of those D7 modules.

Also, to date, this blog series has assumed that you are migrating to D9 from an older D7 version of the same module — but that doesn’t necessarily need to be the case: for example, the D9 Address module didn’t exist in D7: its predecessor module was named AddressField. Address module migrations would be written to migrate data from the AddressField module.

Recall that the main goal of this blog series is to improve the upgrade experience for Site Builders… as a Site Builder facing an upgrade, I want as many of my D7 modules as possible to be (accurately) accounted for in the “What will be upgraded?” step of the migration wizard, so that I know how much manual migration I need to do after running the migration wizard.

Proposed resolution

In Drupal 8.8, the migration team introduced a way for modules to declare their upgrade status. The status determines whether the “What will be upgraded?” report will list a D7 module in the list of “Module(s) that will be upgraded” or “Module(s) that will not be upgraded”.

A migration status looks like…

# In migrations/state/D9_DESTINATION_MODULE.migrate_drupal.yml
finished:
  6:
    d6_source_module_1: D9_DESTINATION_MODULE
  7:
    d7_source_module_2: D9_DESTINATION_MODULE
    d7_source_module_3:
      - D9_DESTINATION_MODULE
      - other_d9_destination_module
not_finished:
  7:
    d7_source_module_4: D9_DESTINATION_MODULE

You can see from this example that:

  1. You declare migrations as either finished or not_finished.

    In the “What will be upgraded?” report, a source module that does not have a migration declared for it — or whose migration is declared as not_finished — will appear in the “Module(s) that will not be upgraded” list.

    If a migration is declared as finished, then the module will appear in the “Module(s) that will be upgraded” list.

  2. You declare migration statuses for D6 and D7 modules separately.

    This allows you to tackle D6 and D7 migrations separately.

  3. You can declare migrations from one or more source modules to one or more destination modules.

    For example, core’s telephone module declares that it can migrate content from both the D7 Phone module and the D7 Telephone module.

    Unfortunately, I’m not aware of an example where more than one D9 destination module is defined for a D7 source module.
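From memory (verify against core/modules/telephone/migrations/state/telephone.migrate_drupal.yml before relying on it), the relevant part of telephone’s state file looks like:

```yaml
finished:
  7:
    phone: telephone
    telephone: telephone
```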

  4. You declare migrations as finished or not_finished for the module as a whole.

    For example, this means that if a D7 module stores both content AND configuration, and you’ve only written a migration for the configuration, then the module’s status is not_finished. Only once you’ve also written the migration for the content can you declare the status as finished.

Steps to complete

Let’s try to follow the principles of test driven development (TDD) by writing a test before we write the code to make that test pass. Put the following template at your module’s tests/src/Kernel/Migrate/d7/ValidateMigrationStateTest.php:

<?php

namespace Drupal\Tests\MODULE_NAME\Kernel\Migrate\d7;

use Drupal\Tests\migrate_drupal\Kernel\d7\MigrateDrupal7TestBase;
use Drupal\Tests\migrate_drupal\Traits\ValidateMigrationStateTestTrait;

/**
 * Tests that the MODULE_NAME module has a declared migration status.
 *
 * ValidateMigrationStateTestTrait::testMigrationState() will succeed if the
 * modules enabled in \Drupal\Tests\KernelTestBase::bootKernel() have a valid
 * migration status (i.e.: finished or not_finished); but will fail if they do
 * not have a declared migration status.
 *
 * @group MODULE_NAME
 */
class ValidateMigrationStateTest extends MigrateDrupal7TestBase {

  use ValidateMigrationStateTestTrait;

  /**
   * {@inheritdoc}
   */
  protected static $modules = ['MODULE_NAME'];

}

The test inherits from MigrateDrupal7TestBase, which automatically sets up a migration; and the test includes code from ValidateMigrationStateTestTrait — which has a public function testMigrationState() — so the migration state is automatically tested if you just fill in the MODULE_NAME.

Since we haven’t declared a state yet, if you run this test, it will fail.

Now, let’s write a migration state! At migrations/state/MODULE_NAME.migrate_drupal.yml, declare the module’s migrations as not_finished for now (assuming the D7 and D9 versions of the module share a machine name):

not_finished:
  7:
    MODULE_NAME: MODULE_NAME

Now, when you run the test, it will pass, because the module has declared a status (even though the status is not_finished).

Once you’re confident that you’ve written migrations for all the data that your D7 module can store, you can change that not_finished to finished.
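At that point, the state file for a module whose D7 and D9 versions share the same machine name (MODULE_NAME remains a placeholder) would simply become:

```yaml
finished:
  7:
    MODULE_NAME: MODULE_NAME
```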

Next steps

If you’ve already contributed some migrations, you can update those contributions to declare the migration status for that module. Remember, only declare the status as finished if you’ve written migrations for all the data stored by the D7 module that can be stored by the D9 module.

The article Easy commit credits with migrations, part 5: Declaring a module’s migration status first appeared on the Consensus Enterprises blog.



14 minute read Published: 31 Aug, 2021 Author: Matt Parker
Drupal Planet, Migrations

This is the fourth in a series of blog posts on writing migrations for contrib modules:

Stay tuned for more in this series!



Problem / motivation

In my experience as a consultant, clients who are willing to be early adopters of Drupal 7 to 9 migrations tend to want to make a bunch of other changes to their site at the same time… so configuration has often been overlooked in favour of setting new config from scratch on the D9 site. But from an end-user-of-Drupal’s standpoint, when the budget is tight, and/or Drupal 7 already functions the way you want it, it makes more sense to spend your time and money on verifying the site’s content, and updating the website’s theme!

As a Site Builder migrating a site from Drupal 7 to Drupal 9, I want as much of my Drupal 7 configuration to be migrated as possible, so that I can spend my time on the theme and content of the site.

Proposed resolution

Define a migration for simple configuration from Drupal 7 to Drupal 9.

As mentioned briefly in the last post, migrations are defined by YAML files inside a module’s migrations/ directory that look something like…

id: a_unique_migration_id
label: A Human-Friendly Name
migration_tags:
  - Drupal 7
  - A Migration Tag
source:
  plugin: # a @MigrateSource plugin id
  # some config for that @MigrateSource plugin
process:
  # some process config
destination:
  plugin: # a @MigrateDestination plugin id
  # some config for that @MigrateDestination plugin

As you can probably guess, id, label, and migration_tags are metadata.

Each migration definition includes a source plugin and its configuration, which states where to find data in the Drupal 7 source database. Each migration also defines a destination plugin and its configuration, which tells Drupal 9 where to store the migrated data. Each migration also contains a number of process instructions, which describe how to build the destination data, usually by taking data out of the source.

Steps to complete

Before we can write a simple config migration, we need to understand how config is stored in both systems; and do a bit of planning.

How Drupal 9 config works

In Drupal 9, the standard way to handle configuration is to store it as configuration entities, using the configuration management API.

Most configuration migrations into D9 use the config destination plugin, which uses the configuration management API to write the data. You configure the config destination plugin by specifying the machine name of the configuration entity that you want to build from the migrated data. If we look at the example migration we wrote tests for in the last blog post (i.e.: migrating config for the Environment Indicator module), you can see in its destination section…

destination:
  plugin: config
  config_name: environment_indicator.settings

… that the migration is going to be building the config object named environment_indicator.settings.

Note that each migration has one destination plugin; and the config destination plugin only lets you specify one config entity. To start a configuration migration, I usually look at the Drupal 9 module’s code for configuration objects. If there is more than one, I start with the one containing general configuration settings.

Once I’ve chosen a destination configuration object to focus on, I look at its definition in the module’s config/schema/MODULE_NAME.schema.yml file and where the config is being used (because the schema file isn’t always kept up-to-date). I start an inventory of the fields in that config object, their data type, and their default values from config/install/*.yml. A spreadsheet is a great tool for this inventory (just be aware that it can be helpful to show the spreadsheet to the community).
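For reference, a simple-config schema entry usually looks something like the following sketch (modelled loosely on environment_indicator’s settings; the real types and labels in the module’s config/schema/*.schema.yml may differ):

```yaml
environment_indicator.settings:
  type: config_object
  label: 'Environment indicator settings'
  mapping:
    toolbar_integration:
      type: sequence
      label: 'Toolbar integration'
      sequence:
        type: string
    favicon:
      type: boolean
      label: 'Favicon overlay'
```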

How Drupal 7 config works

In Drupal 7, the standard way to handle configuration was to store it in Drupal 7’s variable table in the database; and interact with it using the variable_get(), variable_set() and variable_del() functions.

My next step in writing a configuration migration is to search the D7 module’s code for the string variable_, examine the different variable names, and update my inventory with the D7 variable names and data types (to determine a variable’s data type, you may have to look at how the D7 code uses it). Some modules construct their variable names by concatenating strings and variables, so one string match (for variable_) may correspond with a handful of possible variables. If the way that variable names are constructed is particularly convoluted, it can be helpful to install the module on your D7 site, configure it, and see which variables are added to the variable table in the database.

When writing our migration definition for Drupal 7 variables, we can use the variable source plugin to pull data from the D7 variables table. You configure the variable source plugin by specifying a bunch of variable names to read data from; and optionally, specify which D7 module you’re migrating from (which is useful when you’re migrating config from a bunch of D7 modules into one D9 module).

If we look at the example migration we wrote tests for in the last blog post, you can see in its source section…

source:
  plugin: variable
  variables:
    - environment_indicator_integration
    - environment_indicator_favicon_overlay
  source_module: environment_indicator

… that the migration is going to copy data out of the environment_indicator_integration and environment_indicator_favicon_overlay variables.

Mapping out the migration

At this point, your inventory should contain the names, data-types, and default values for a bunch of D9 config object fields; plus the names and data-types for a bunch of D7 variables.

The next step is to process the inventory: for each D9 config object field, see if you can find a corresponding D7 variable, and mark the relationship in the inventory. It is always worth comparing how a config variable is used in both versions of the module, just in case it is unrelated but happened to be given a similar name. You should expect to find D7 config which does not have corresponding config in D9 and vice-versa. If you see D9 config that is related to the D7 config, but isn’t an exact copy (e.g.: a single value in D7 is the first value in an array in D9), add a note… we’ll talk about this shortly.

When you are done, your inventory might look like this…

  toolbar_integration (array, default: []) ← (copy) ← environment_indicator_integration (array)
  favicon (boolean, default: FALSE) ← (copy) ← environment_indicator_favicon_overlay (boolean)

At this point, you have enough information to start writing the migration test, which we covered in the previous blog post.

Migration process configuration

Now that we know how to migrate the data, we can write the process part of the migration configuration. Each instruction in the process section is a mapping (i.e.: hash, dictionary, object) whose name is the destination field. Inside the mapping is a list of migrate process plugins, to be run in order from first to last. The value after the final process plugin has run gets inserted into the destination field.

To get data from the source, you use the get migrate process plugin, which you configure by specifying which source fields to use…

  toolbar_integration:  # i.e.: the destination field in the 'environment_indicator.settings' config object
    - plugin: get
      source:
        - environment_indicator_integration  # i.e.: the source field
  favicon:
    - plugin: get
      source:
        - environment_indicator_favicon_overlay

… each of these mappings copies the data in the named D7 variable (e.g.: environment_indicator_favicon_overlay), performs no other processing on it (i.e.: because there are no other instructions), and inserts it into the corresponding field (e.g.: favicon) in the environment_indicator.settings config object.

Since copying data from a source field to a destination field without any processing is so common, there is a short-hand for this particular case. For example,

  favicon:
    - plugin: get
      source:
        - environment_indicator_favicon_overlay

… is equivalent to…

  favicon: environment_indicator_favicon_overlay

After converting the plugin: get lines to the shorthand, your migration config should look identical to the sample one that I gave in the previous blog post. Replace the migration given last time with the one you just finished building (don’t forget to set the migration id — it must match the filename and the id used in the $this->executeMigrations(['...']); line in your test). You can verify this is the case by running the test again.

Multiple process plugins

If you do need to modify the D7 data before it gets saved to D9, it is possible to add additional process plugins. For example, the following configuration would get the data stored in the source’s my_d7_label field, URL-encode it, convert that URL-encoded data to a machine name, then store the resulting data to dest_field

  dest_field:
    - plugin: get
      source:
        - my_d7_label
    - plugin: urlencode
    - plugin: machine_name

… put another way, given the data A name in source field my_d7_label, the data written to destination field dest_field would be a_20name (i.e.: A name -> A%20name -> a_20name).

If you are performing other processing steps, Core and many Contrib process plugins will allow you to take a shortcut by replacing the stand-alone get step by specifying a source in the first processing step. For example,

  dest_field:
    - plugin: get
      source:
        - my_d7_label
    - plugin: urlencode
    - plugin: machine_name

… is equivalent to…

  dest_field:
    - plugin: urlencode
      source: my_d7_label
    - plugin: machine_name

… but you may find it easier to keep the stand-alone get step until you’ve completed the whole migration and you are certain that it works.

When you write tests for a migration that involves process steps which modify data, the data you add in your test fixtures will be different from the data you verify at the end of the test — explicitly calling this out in a comment can be helpful to other people reading the test (or yourself 6 months later, when you’ve forgotten why).

Default values

Once a Drupal 7 module has been ported to D9 for the first time, that module’s D7 and D9 codebases diverge. Features added to the D9 version aren’t always backported to the D7 version for various reasons. As a result, when preparing your inventory, it’s not unusual to find that the D9 config object has fields for configuration which doesn’t exist in D7 (note the converse — where D7 variables have no D9 equivalent — is possible too, albeit less common).

When you’re writing a migration for the first time, it’s easy to focus on the config that you can migrate from D7, and ignore the D9 config which has no D7 equivalent. But if you don’t specify a value for those fields, the config destination plugin will set them to NULL when it constructs the config object from the migrated data.

But, many modules assume that those configuration object fields will be set to their default configuration (i.e.: from config/install/*.yml) — not NULL — which can lead to bugs, errors, warnings, and crashes later on.
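For example, a module’s install-time defaults live in a file like config/install/MODULE_NAME.settings.yml. A minimal sketch (d9_only_feature is a hypothetical setting name used for illustration):

```yaml
# config/install/MODULE_NAME.settings.yml (hypothetical module)
d9_only_feature: 'some_default_value'
```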

The solution is to specify default values for that configuration in the process section of the migration definition, using the default_value process plugin.

For example:

  d9_only_feature:
    - plugin: default_value
      default_value: 'some_default_value'
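The default_value plugin can also serve as a fallback at the end of a pipeline: if the value produced by the preceding step is empty, the default is substituted instead (set strict: true to substitute only when the value is NULL). A sketch, with hypothetical field and variable names:

```yaml
  some_d9_field:
    - plugin: get
      source: some_d7_variable_that_may_be_missing
    - plugin: default_value
      default_value: 'fallback_value'
```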

When you specify default values, you should still test them, by verifying them when you verify the migrated data. I tend to separate these into their own section with a comment, so that I don’t get confused about why those verification lines don’t have a corresponding fixture…

// Verify the fixtures data is now present in the destination site.
$this->assertSame(['toolbar' => 'toolbar'], $this->config('environment_indicator.settings')->get('toolbar_integration'));
$this->assertSame(TRUE, $this->config('environment_indicator.settings')->get('favicon'));

// Verify the settings with no source-site equivalent are set to their default
// values in the destination site.
$this->assertSame('some_default_value', $this->config('environment_indicator.settings')->get('d9_only_feature'));

Putting it all together

  1. Make sure your Drupal 9 environment is set up as described in the last blog post:

    1. Clone Drupal core, run composer install, set up the site.
    2. Find a migration issue and module to work on.
    3. Clone the module to modules/, and switch to the branch in the migration issue’s Version field if necessary.
    4. If the migration issue is using an issue fork, then switch to the issue fork using the instructions in the issue.
    5. If the migration issue is using patches, download the most recent patch, then apply and commit it to a branch named after the issue ID and the comment number where the patch was uploaded (we’ll call this $FIRST_BRANCH below).
    6. If the migration issue is using patches, then create a second branch named after the issue ID and number of comments in the issue plus 1; and apply the patch, but don’t commit it yet.
  2. Create the migration inventory.

  3. Write the migration test.

  4. Write the migration itself, running tests frequently.

  5. Spin up your D7 site, install the D7 version of the module, and run a manual test, as we did in part 1 of this series.

    The automated tests only test for very specific problems on a simulated “clean” environment — which makes them great for catching regressions — but not very good for catching problems you weren’t specifically testing for (that is to say, the things likely to crop up in the real world).

    Note that Drupal 9 core’s Migrate Drupal UI module has no way of knowing if you’ve written all the migrations that you intended to write for this module. So, the module you’ve written the migration for will still show up in the list of “Modules that will not be upgraded” for now — we’ll fix that in the next blog post. Don’t worry though, your migration will still run.

  6. When you’re satisfied, stage all the changes to the module (i.e.: git add .), and commit your changes. In the commit message, describe what you did. The commit message will be visible to other members of the community.

  7. If the migration issue is using an issue fork, then push your changes to the issue fork, and leave a comment in the issue describing what you did.

  8. If the migration issue is using patches, then:

    1. Generate the patch with git format-patch 8.x-2.x (where 8.x-2.x is the branch specified in the “Version” field of the issue with the patch).

      Generating a patch in this way adds some metadata which will help avoid merge conflicts in the future.

      The patch will appear in the current directory, and will be named something like 0001-YOUR-COMMIT-MESSAGE.patch.

    2. Rename the patch according to Drupal.org’s conventions for naming patches.

      I like to move the patch somewhere that I can easily find it (e.g.: my Desktop folder) at the same time that I’m renaming it.

    3. Generate an interdiff between your current patch and the previous one with git diff $FIRST_BRANCH > interdiff.txt (where $FIRST_BRANCH is the branch with the previous patch applied and committed).

      I like to move the interdiff somewhere that I can easily find it (e.g.: my Desktop folder).

    4. Start a new comment, upload the patch and interdiff, and describe what you did in the Comment.

      If you set the issue status to “Needs review”, then automated tests will run on your patch — but once they pass, change the issue status back to “Needs work”, because your migration won’t be finished until you’ve verified there’s nothing else to migrate, and indicated that to the Migrate Drupal UI module.

If you’ve been following along with our example to migrate configuration for the Environment Indicator module, please be aware that there’s already a migration to do that in issue #3198995 — so please do not create a new issue, and please do not leave patches in that issue.

Next steps

At this point, you should have all the tools that you need to contribute patches which migrate simple configuration from the Drupal 7 version of a module to the Drupal 9 version of the module, so try it out!

Next, we will talk about how to tell Drupal core’s Migrate Drupal UI module that you’ve written all the migrations that you intended to write, which will move the module from “Modules that will not be upgraded” to “Modules that will be upgraded” in the migration wizard.

We’ll also talk about how to migrate more complex configuration and content in future posts in this series.

The article Easy commit credits with migrations, part 4: Migrating D7 variables first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!


11 minute read Published: 24 Aug, 2021 Author: Matt Parker
Drupal Planet , Migrations

This is the third in a series of blog posts on writing migrations for contrib modules:

Stay tuned for more in this series!



Problem / motivation

In the last blog post, we tested migration patches by manually creating content in the D7 site, and manually verifying that the content we created was migrated to the new site.

But entering test data, running the migration, and verifying the test data by hand is tedious and error-prone, especially if we want to be able to perform the exact same tests a few months later to ensure that recent changes to the module haven’t caused a regression by breaking the migration!

Being able to quickly run a migration is also quite useful when writing a migration from scratch (a topic we will cover in future blog posts), because you can get continuous feedback on whether your changes were effective (i.e.: you can do test driven development (TDD) — a style of programming where you (1) write a (failing) test, (2) write operational code so the test passes, and (3) refactor… and you repeat that cycle until you’ve solved the problem).

Proposed resolution

Let’s automate running the migration: automation will ensure that the test is performed the same way next time.

We will do so by writing PHPUnit tests. PHPUnit is an automated testing tool used in Drupal core. Because Drupal’s PHPUnit tests run in an isolated environment, this will save us time reverting the database before each migration test.

As an added bonus, Drupal CI — Drupal.org’s testing infrastructure — can be configured to run tests when patches and/or merge requests are posted to the module’s issue queue, to remind other contributors if the change they are proposing would break migrations in some way.

What do these tests look like?

Migration tests typically follow a pattern:

  1. Set up the migration source database,
  2. Fill the migration source database with data to migrate (“set up Test Fixtures”),
  3. Run the migration (“run the System Under Test”), and,
  4. Verify the migration destination database to see if the test fixtures were migrated successfully.

You might notice that we’ve been following this pattern in our manual tests.

PHPUnit tests themselves are expressed as PHP code. Note that this is different from Behat behavioural tests (where tests are expressed in the Gherkin language), or visual regression tests (where — depending on your testing tool — tests could be expressed as JavaScript code, as a list of URLs to compare, etc.).

Drupal’s convention is to put D7 migration tests into a module’s tests/src/Kernel/Migrate/d7/ folder. You’ll find many Core modules with migration tests in this location (Core’s ban and telephone modules are good places to start). But, most Core tests set up their test fixtures in a completely different file than the test itself, which can be confusing. In this blog post, I’ll walk you through writing tests that look a bit more like the steps we’ve been doing manually.

Steps to complete

Automated migration tests don’t strictly require a Drupal 7 site at all, because the D7 testing tools in Core’s Migrate Drupal module know how to set up something that looks just enough like D7 to make the tests work.

In order to run PHPUnit, you will need to set up the Drupal 9 site a bit differently than you may be accustomed to — the composer create-project commands (or the tar/zip files) you normally use when creating a Drupal site will not install the tools we need for running tests. We should clone Drupal core from source if we want to use PHPUnit.

If we are going to write tests, we should seriously consider sharing them with the community, either by pushing the tests to an Issue fork, or by generating a patch and interdiff that includes them. While we won’t actually generate a patch this week, the instructions below will get you to set up your environment as if you were going to generate a patch.

Setting up

  1. Clone Drupal core’s 9.2.x branch, set up your development environment on the repository (note there is no web/ or html/ folder in this setup), and run composer install.
  2. Find a contrib module that has a migration patch (as described in part 2 of this blog series). As before, read through the issue with the patch in detail.
  3. Clone the 8.x version of the module into your D9 site, as described in part 2.
  4. Apply the migrations patch to its own branch of the 8.x module and commit the contents of the patch to the branch; or check out the Issue fork, as described in part 2.
  5. If the issue is using patches, then before you continue, you should create a second branch to add your tests in — the changes in this second branch will become your patch; and running git diff between the first and second branches will become your interdiff. To do this:
    1. Check out the branch from the issue “Version” field again (e.g.: git checkout 8.x-2.x)

    2. Create a new branch to put your work in. I name this branch after the issue ID and the number of comments in the issue plus one.

      For example, if the issue number is 123456, and there are currently 8 comments in the issue (i.e.: the comment number of the most recent comment is #8), then I would name my branch 123456-9.

    3. Apply the patch again — but this time, don’t commit the changes yet (you need to add your tests first).

Finding a migration to test

Before we write a test, we need to take a closer look at the migration that we want to test. Recall from the migration patches that migrations are defined by YAML files inside the module’s migrations/ directory. These files have roughly the following structure…

# In a file named migrations/MIGRATION_NAME.yml...
id: MIGRATION_NAME
label: # a human-friendly name
migration_tags:
  - Drupal 7
  # possibly more tags
source:
  plugin: # a @MigrateSource plugin id
  # some config for that @MigrateSource plugin
process:
  # some process config
destination:
  plugin: # a @MigrateDestination plugin id
  # some config for that @MigrateDestination plugin

Right now, we only need to know the MIGRATION_NAME from the id line for one of the Drupal 7 migrations. If you find several migrations in the migrations/ folder, I’d suggest starting with a configuration migration, because those are usually the simplest.

Writing a test and running it

  1. Create a folder for the tests: mkdir -p tests/src/Kernel/Migrate/d7

  2. Using your preferred text editor, create a PHP file in that folder, tests/src/Kernel/Migrate/d7/MigrateTest.php, and edit it as follows, replacing MODULE_NAME with the machine name of the module; and MIGRATION_NAME with the migration name you found in the migrations/MIGRATION_NAME.yml file you’re going to test…

    <?php

    namespace Drupal\Tests\MODULE_NAME\Kernel\Migrate\d7;

    use Drupal\Tests\migrate_drupal\Kernel\d7\MigrateDrupal7TestBase;

    /**
     * Test the MIGRATION_NAME migration.
     *
     * @group MODULE_NAME
     */
    class MigrateTest extends MigrateDrupal7TestBase {

      /**
       * {@inheritdoc}
       */
      protected static $modules = ['MODULE_NAME'];

      /**
       * Test the MIGRATION_NAME migration.
       */
      public function testMigration() {
        // TODO: Set up fixtures in the source database.

        // Run the migration.
        $this->executeMigrations(['MIGRATION_NAME']);

        // TODO: Verify the fixtures data is now present in the destination site.

        // TODO: Remove this comment and the $this->assertTrue(TRUE); line after
        // it once you've added at least one other assertion:
        $this->assertTrue(TRUE);
      }

    }
  3. Let’s run the test: php core/scripts/run-tests.sh --sqlite /tmp/test.sqlite --file modules/MODULE_NAME/tests/src/Kernel/Migrate/d7/MigrateTest.php

    This assumes php is in your shell’s $PATH, you’ve changed directories to the path containing Drupal 9’s index.php, you can write temporary files to /tmp/, and you installed the module you’re patching to modules/MODULE_NAME.

    If you’re using Lando or Ddev, you will probably need to lando ssh -s appserver or ddev ssh -s web before running the line above.

    If all goes well, you should see output like…

    Drupal test run

    Tests to be run:
      - Drupal\Tests\MODULE_NAME\Kernel\Migrate\d7\MigrateTest

    Test run started:
      Tuesday, August 24, 2021 - 13:00

    Test summary

    Drupal\Tests\MODULE_NAME\Kernel\Migrate\d7\MigrateTest   1 passes

    Test run duration: 5 sec

But the test isn’t very useful yet. Exactly how to fill in the TODOs we’ve left in there depends on the specific module you’re working on (i.e.: the data it stored in D7, and how that data maps to D9).

A real example

For now, let’s look at a real-world example: migrating the configuration for the Environment Indicator module (note there’s already a migration to do that in issue #3198995 — please do not create a new issue, and please do not leave patches in that issue).

To keep this blog post (relatively) short, I will provide a sample migration definition to migrate two pieces of configuration in environment_indicator. We will discuss how to find data to migrate and how to write migration definitions in future blog posts in this series.

Looking at the code in the latest D7 release, I see 2 pieces of config to migrate: environment_indicator_integration, and environment_indicator_favicon_overlay. Suppose that someone has written the following migration definition at migrations/d7_environment_indicator_settings.yml to migrate those 2 pieces of config:

id: d7_environment_indicator_settings
label: Environment indicator settings
migration_tags:
  - Drupal 7
  - Configuration
source:
  plugin: variable
  variables:
    - environment_indicator_integration
    - environment_indicator_favicon_overlay
  source_module: environment_indicator
process:
  toolbar_integration: environment_indicator_integration
  favicon: environment_indicator_favicon_overlay
destination:
  plugin: config
  config_name: environment_indicator.settings

You can see here that the MIGRATION_NAME in our template can be filled in with d7_environment_indicator_settings.

So let’s start by copying the migration test template above into the file tests/src/Kernel/Migrate/d7/MigrateTest.php, and replacing MIGRATION_NAME with d7_environment_indicator_settings.

Now, since these two pieces of config were stored in the variable table in D7, we will start by inserting those variables into the variable table through the migrate database connection (i.e.: the source database)…

// TODO: Set up fixtures in the source database.
\Drupal\Core\Database\Database::getConnection('default', 'migrate')
  ->insert('variable')
  ->fields([
    'name' => 'environment_indicator_integration',
    'value' => serialize(['toolbar' => 'toolbar']),
  ])
  ->execute();
\Drupal\Core\Database\Database::getConnection('default', 'migrate')
  ->insert('variable')
  ->fields([
    'name' => 'environment_indicator_favicon_overlay',
    'value' => serialize(TRUE),
  ])
  ->execute();

Looking at the D9 version of environment_indicator, I can see global config is stored in the environment_indicator.settings config object; and there are two global settings in that object — toolbar_integration and favicon — whose behaviour matches the D7 variables we found. So let’s test the config after the migration:

// TODO: Verify the fixtures data is now present in the destination site.
$this->assertSame(['toolbar' => 'toolbar'], $this->config('environment_indicator.settings')->get('toolbar_integration'));
$this->assertSame(TRUE,  $this->config('environment_indicator.settings')->get('favicon'));

Now let’s run the migration test that we’ve been filling in…

$ php core/scripts/run-tests.sh --sqlite /tmp/test.sqlite --file modules/environment_indicator/tests/src/Kernel/Migrate/d7/MigrateTest.php

Drupal test run

Tests to be run:
  - Drupal\Tests\environment_indicator\Kernel\Migrate\d7\MigrateTest

Test run started:
  Tuesday, August 24, 2021 - 13:05

Test summary

Drupal\Tests\environment_indicator\Kernel\Migrate\d7\MigrateTest   1 passes

Test run duration: 5 sec

… great!

Let’s clean up a bit by deleting the dummy assertion at the end and its comment (since we’ve added other assertions); and removing the remaining TODOs (since they are done). We can also add a use statement for Drupal\Core\Database\Database and modify the ::getConnection() lines accordingly. Now the full test looks like:

<?php

namespace Drupal\Tests\environment_indicator\Kernel\Migrate\d7;

use Drupal\Core\Database\Database;
use Drupal\Tests\migrate_drupal\Kernel\d7\MigrateDrupal7TestBase;

/**
 * Test the d7_environment_indicator_settings migration.
 *
 * @group environment_indicator
 */
class MigrateTest extends MigrateDrupal7TestBase {

  /**
   * {@inheritdoc}
   */
  protected static $modules = ['environment_indicator'];

  /**
   * Test the d7_environment_indicator_settings migration.
   */
  public function testMigration() {
    // Set up fixtures in the source database.
    Database::getConnection('default', 'migrate')
      ->insert('variable')
      ->fields([
        'name' => 'environment_indicator_integration',
        'value' => serialize(['toolbar' => 'toolbar']),
      ])
      ->execute();
    Database::getConnection('default', 'migrate')
      ->insert('variable')
      ->fields([
        'name' => 'environment_indicator_favicon_overlay',
        'value' => serialize(TRUE),
      ])
      ->execute();

    // Run the migration.
    $this->executeMigrations(['d7_environment_indicator_settings']);

    // Verify the fixtures data is now present in the destination site.
    $this->assertSame(['toolbar' => 'toolbar'], $this->config('environment_indicator.settings')->get('toolbar_integration'));
    $this->assertSame(TRUE, $this->config('environment_indicator.settings')->get('favicon'));
  }

}


Congratulations, you’ve written your first automated Migration test!

Next steps

In the next blog post, we’ll talk about migrating simple configuration (i.e.: D7 variables to D9 config objects).

In the meantime, you could try refactoring the tests/src/Kernel/Migrate/d7/MigrateTest.php test we built in this blog post. Some ideas:

  1. Try splitting the Database::getConnection(...)->...->execute() statements into a helper function,
  2. Try randomizing the fixtures data that you insert,
  3. Try making two test methods, one for environment_indicator_favicon_overlay, where you test both the TRUE and FALSE states; and one for environment_indicator_integration.

If this is your first time writing automated tests, you might be interested in reading PHPUnit’s documentation on writing tests. PHPUnit’s assertions reference can also be pretty handy to refer to when writing tests.

If you have a lot of time, some optional, longer reads are:

The article Easy commit credits with migrations, part 3: Automated tests first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!


10 minute read Published: 17 Aug, 2021 Author: Matt Parker
Drupal Planet , Migrations , Drupal

This is the second in a series of blog posts on writing migrations for contrib modules:

Stay tuned for more in this series!



Problem / motivation

As a maintainer of several modules, it really helps me when other members of the community review and test patches, and if they work, mark them with the issue status RTBC (“Reviewed and Tested by the Community”)!

Proposed resolution

One of the easiest ways that you can contribute is by testing migration patches as thoroughly as you can, reviewing the code, and marking the issue as RTBC if everything checks out.

Aside: Engaging with the community

Any time you’re engaging with the Drupal community, and especially in the issue queue, it is worth keeping a few things in mind:

  • The Drupal code of conduct — “be considerate; be respectful; be collaborative; when we disagree, we consult others; when we are unsure, we ask for help; and; step down considerately”;
  • The Drupal.org issue etiquette — “dos” and “dont’s” for making issues flow smoother; and;
  • The strategic initiative that you are working towards — in this case, improving the migration experience for Site Builders.

Steps to complete

These steps assume that you have followed the Steps to complete in part 1 of this blog series at least once.

  1. First, we will need to find a contrib module with a migration patch in “Needs review” status. Read the issue with the patch in detail. Doing so will provide you with some insight into the intended scope of the patch, and also what to test. I suggest taking notes.

    For your convenience, here is a link to a search for the word migration across all projects, filtered to issues in “Needs review” status (but since this blog series is about contrib migrations, you can ignore the results for the project “Drupal core”)1.

    For example, let’s suppose that I found issue #3024040 in this list.

  2. Next, install the latest 7.x release of the contrib module you chose into the Drupal 7 site you set up in part 1 of this blog series (it is reasonable to assume that Site Builders looking to migrate are running the latest release of a contrib module).

    If you haven’t worked much with Drupal 7, the recommended place to install contrib modules is inside sites/all/modules/.

    Using our example issue #3024040; because it is a patch for the Tablefield module, I would install tablefield-7.x-3.6 into my D7 test site (because that was the latest recommended D7 version of the module at time-of-writing).

  3. Then, git clone the contrib module you chose into the web/modules/ folder of the Drupal 9 site you set up in part 1 of this blog series.

    You need to clone the branch specified in the “Version” field of the issue with the patch.

    For your convenience, Drupal.org can generate a git clone command for you to copy-paste: go to the module’s project page, and look at the top for a “Version control” tab… click that tab, choose the “Branch to work from”, and click the “Show” button.

    Using our example issue #3024040, the “Version” field in that issue’s metadata shows 8.x-2.x-dev, i.e. the 8.x-2.x Git branch. If we then click the module name (“Tablefield”) in the issue’s breadcrumb bar, then its Version control tab, then choose 8.x-2.x, and click “Show”, Drupal.org gives us the command git clone --branch '8.x-2.x' https://git.drupalcode.org/project/tablefield.git

  4. Next, we want to apply the most-recent patch in the issue — but before we do that, we should create a branch to apply the patch on (creating a branch will make it easier to generate interdiff files if we need to submit our own patch).

    If the issue is using an Issue fork instead of patches, then click “Show commands”, follow the instructions to “Add & fetch this issue fork’s repository”, then follow the instruction to “Check out this branch”, and skip ahead to the next step — the rest of the instructions in this step are for issues using patches.

    If the issue is using patches, then I usually create a branch named after the issue ID and comment number that I got the patch from. In our example issue #3024040, the most recent patch at time-of-writing is in comment #8, so I would name the branch 3024040-8, i.e.: I would run git checkout -b 3024040-8

    Now we can apply the patch.

    Important note: if you’re following along with the example in issue #3024040, be aware that at some point in the future, the maintainers of the Tablefield module will likely accept and commit the patch — trying to apply the patch after it has been committed will fail.

    If the patch applies successfully, commit the changes in the patch to the new branch. There’s no need to come up with a fancy commit message because we won’t be pushing it anywhere: I use branch name as the commit message (e.g.: git commit -m "3024040-8")

  5. Now, we run through essentially the same process we used in part 1 of this blog series to test the migration. That is to say:

    1. (Re-)install the D7 site using the Standard install profile.
    2. (Re-)install the D9 site using the Standard install profile.
    3. On the D7 site, install the Tablefield module, and set it up (i.e.: add a Tablefield to a node type). Then, create some Tablefield nodes as test migration content.
    4. On the D9 site, install the core Migrate, Migrate Drupal, and Migrate Drupal UI modules; and also install the Tablefield module.
    5. Make a database backup of the D9 site (so you can easily re-run the migration).
    6. On the D9 site, run through the migration wizard at /upgrade.
    7. When the upgrade is complete, check the migrated Tablefield nodes to ensure they contain the test migration content you set up on the D7 site.
  6. If the test migration content you set up on the D7 site did not correctly migrate onto the D9 site, see the “What to do if something goes wrong” section below.

  7. If the migration appeared to go correctly, then read the patch in more detail.

    Future blog posts in this series should make the patch easier to understand, but even now, you can probably get a vague sense of what is being migrated, and how it is being done.

    In particular, if you notice that the patch migrates some things that you did not test, it would be worth reverting to the database backup you made, and trying the migration again, so you can test those new things.

    If you find coding style issues in a contrib patch, I would refrain from pointing them out — let the module maintainer do that if they feel strongly enough about it! Many Contrib maintainers have their own style, or don’t feel strongly about coding style: the coding standards used for Drupal Core are only suggestions for Drupal Contrib. Furthermore, some module maintainers will accept the patch but fix the style issues in the patch when they commit it (this is what I do for modules that I maintain).

    Remember the strategic initiative we are working towards: we want to improve the experience for Site Builders — attaining coding standards perfection will delay the patch and prevent it from helping the Site Builders who need it!

  8. Finally, if you are satisfied with the patch after reading it and testing the migration, then it is time to add a comment to the issue:

    1. In the Issue metadata, set the “Status” field to Reviewed and tested by the community, and make sure the “Assigned” field is set to Unassigned.
    2. Don’t forget to “Attribute this contribution”.
    3. In the “Comment” field, clearly state that it worked for you, and describe what you tested.
    4. Finally, click “Save” to post your comment.

Note that the patch to Tablefield in issue #3024040 is just used as an example — please do not leave comments in that issue unless you have something specific and constructive to add.

What to do if something goes wrong

If the test migration doesn’t go the way you expect, this may not necessarily indicate a problem with the patch! For example:

  1. The D9 version of the module may operate or store data in a different way than the D7 version does;
  2. The D9 version of the module may have fewer or different features from the D7 version;
  3. The issue that you got the patch from intentionally leaves certain migrations out-of-scope; or,
  4. Your expectations might be wrong (this happens to me a lot!).

So, before leaving a comment in the issue, take some time to:

  1. Read the issue in depth (including earlier patch versions and interdiffs, if applicable), to understand what is, and is not, in scope;
  2. Skim the D7 and the D9 versions of the module’s code, to understand the differences between the two versions of the module and how they work; and,
  3. Read the patch, to understand what it is trying to migrate and to try to pinpoint the problem.

If you think that you can pinpoint the problem, then it’s worth posting your own comment on the issue. In your comment:

  1. Describe the steps you took,
  2. Describe how to create the content and/or configuration which did not migrate properly in D7,
  3. Explain what you expected the migrated content and/or configuration to look like in D9, and,
  4. Explain what you think the problem is.

Be aware that the patch author and/or module maintainer may be okay with things not working perfectly! Recall the Drupal code of conduct: be respectful (of the module maintainer’s decisions), and step down (i.e.: back off) considerately.

Next steps

As mentioned earlier, the best way to find Migration issues that need review is to search for them. As you may know, marking your own patches RTBC is discouraged, so you’ll probably run across patches that I’ve written floating out there in the issue queues!

If you’re reading through an issue, and you find it confusing to keep track of everything that changed, other people probably find it confusing too! You can help move the issue forward by simply updating the issue summary. But be aware that, like coding standards, following Core’s issue summary template is just a suggestion in Contrib.

In the next blog post, we’ll talk about converting some of the testing that you’re doing manually right now into automated tests.

  1. I’ve proposed officially adopting the migrate issue tag for contrib migration issues, but this needs to be approved by the Drupal.org administrators, so don’t tag issues with it for now. I’ll update this blog post if this proposal is accepted. ↩︎

The article Easy commit credits with migrations, part 2: Can we get an RTBC? first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Aug 10 2021

8 minute read Published: 10 Aug, 2021 Author: Matt Parker
Drupal Planet , Migrations , Drupal

This is the first in a series of blog posts on writing migrations for contrib modules:

Stay tuned for more in this series!


Besides helping small teams do big things at Consensus Enterprises during the week, I also work part-time for a small family business. Naturally, I built the business’ website in Drupal (initially Drupal 6, now Drupal 7), and, like many other small sites, I took a low-code approach by assembling its functionality using configuration and 63 contrib projects. To make the best use of my budget, I focused my custom development efforts on making a unique theme.

But, 8 years on, the website’s theme is looking a bit dated, Drupal 7 is quickly reaching the end of free support, and the small family business I built it for doesn’t have a budget for paid Drupal 7 support. I want to upgrade the site to Drupal 9.

Happily, Drupal 8 and 9 have a simple Drupal-to-Drupal migration wizard, and the community has a fantastic Core migration team, so migrating from D7 Core to D8+ Core is easy and has a great user experience! (seriously — the past and present Core migration team deserves a lot of credit!)

Problem / motivation

While migrating Drupal 6 or 7 Core to Drupal 8 or 9 Core is extremely easy, there are still many contrib modules without any migrations. Any sites built using a low-code approach likely use a lot of contrib modules — and are therefore blocked from migrating because of contrib!

For my small business website, when I went through the migration wizard, and reached the “What will be upgraded?” step, I saw 104 “Modules that will not be upgraded”, compared with only 32 “Modules that will be upgraded”1!

Honestly, I felt pretty discouraged when I saw that, even though I have the knowledge and experience to write the migrations myself!

I don’t think it would be too much of a stretch to imagine that other Site Builders faced with the same situation might consider putting off the upgrade, or abandoning Drupal for a closed hosting platform.

As of this writing, the official Usage statistics for Drupal Core show that there are around 600,000 other Drupal 7 sites — Drupal 7 still powers about 60% of all Drupal sites! (This fact should, perhaps, garner more attention than it does.)

Anyway, during the April 2021 DrupalCon, when Dries said that we need to go back to our Site Builder roots and make Drupal the go-to technology for site builders, he hit the nail on the head… Migrating contrib is part of the Site Builder experience too!

How can I help? Why should I help?

Hopefully, your thought is, “I work with Drupal too, and I want to help the Drupal community achieve its strategic initiatives — Can I help?” The answer is, “Yes!”

It is actually pretty easy to write migrations! This makes them a great way for you and/or your employer to gain recognition in the Drupal community through contribution credits. Plus, you’ll get some valuable experience with both Drupal 7 and 8.

Proposed resolution

In this blog series, I will walk you through the ways that I contribute to improve the migration experience for Site Builders, so that you can do those things too! Hopefully, by combining our efforts, you and I can make things easier for anyone who needs to migrate a Drupal 7 site to Drupal 8 or 9 (myself included), and help the Drupal community to achieve our strategic initiatives!

This blog series will be geared towards people who:

  1. know how to download and install Drupal 8 or 9 and contrib modules on their workstation (with or without composer),
  2. know how to apply a patch,
  3. are comfortable doing a little bit of development work (mainly writing YAML files),
  4. are self-motivated to read publicly-available documentation and code,
  5. want to help, and,
  6. don’t mind engaging with the Drupal community.

Steps to complete

Let’s start off this blog series with something easy: setting up test sites and running a simple migration of Drupal core.

Setting up your test sites

First, you need to set up a Drupal 7 site to be your migration source. If you haven’t set up Drupal 7 before, it’s pretty easy: download the tarball or zip from the Core project page, extract it into a folder, set your HTTP Server (ideally Apache) to serve the folder containing index.php (there is no web/ or html/ folder in D7), and visit the site in a web browser to begin the installation process (which looks a lot like the Drupal 8 or 9 install process). Install the D7 site using the Standard install profile in the first step.
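
If you’re serving the D7 folder with Apache 2.4, a minimal virtual host might look like the following sketch (the server name and paths are placeholders for your own setup):

```apacheconf
<VirtualHost *:80>
    ServerName d7-migration-source.localhost
    # D7 has no web/ or html/ subfolder: serve the extracted tarball directory itself
    DocumentRoot /var/www/drupal-7

    <Directory /var/www/drupal-7>
        # Let Drupal's bundled .htaccess handle clean URLs and access rules
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

This is only a local test site, so there’s no need for TLS or any production hardening here.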

You’ll also need to set up a Drupal 9 test site to be your migration destination. This blog series assumes you already know how to do this. Install the D9 site using the Standard install profile in the first step.

A note about Drupal 6

Drupal 6 has much less usage than Drupal 7 (about 17,000 sites, or about 1.7% of Drupal’s total market share, as of this writing), and it requires a version of PHP between 4.3.5 and 5.3.

You can certainly test migrations with Drupal 6 if you want, but be aware that it has already reached end-of-life and has known security vulnerabilities (so only install it where it can be accessed by people you trust, i.e.: not visible to the Internet), and while its install process is similar to D7’s, its PHP version requirements are not, and switching PHP versions will make it annoying to work with.

Preparing data and config to migrate

Once you’ve got the Drupal 7 site set up, enable some modules (core modules for now - we’ll talk about contrib modules in a future post), configure them, and create a small amount of content. Keep track of what you’re doing, so you can see how it gets migrated later.

Running the migration

Here’s where it gets interesting! On your Drupal 9 site,

  1. At /admin/modules, install core’s Migrate (migrate), Migrate Drupal (migrate_drupal), and Migrate Drupal UI (migrate_drupal_ui) modules (in the “Migration” group).
    Make sure any other modules that you installed/enabled on the Drupal 7 site (i.e.: when you were creating data to migrate) are also installed/enabled on the D9 site.
  2. If you wish, make a database backup of the D9 site, so that you can easily re-run the migration process (at the time of writing, rollbacks were not yet supported through the user interface, although you could run them from the command line).
  3. Start the migration wizard by going to /upgrade:
    1. Read the first page and click Continue
    2. On the second page, select Drupal 7 as the version of Drupal you’re migrating from, choose the database type (note MariaDB is roughly equivalent to MySQL), and enter the D7 site’s database connection information (i.e.: so the D9 site can connect to the D7 site’s database directly). Set up the Source files section if applicable, and click Review upgrade.
      • If you’re using DDEV, make sure both the D7 and D9 projects are running; then the “Database host” should be ddev-D7_PROJECT_NAME-db, and the “Port number” should be 3306
      • If you’re using Lando, make sure both the D7 and D9 apps are running; then the “Database host” should be database.D7_APP_NAME.internal, and the “Port number” should be 3306
    3. If you see an “Upgrade analysis report”, read it and click I acknowledge... (multilingual migrations can be flakey)
    4. Read the “What will be upgraded?” report, and click Perform upgrade
      • If you enabled some modules on D7, but you didn’t enable the corresponding modules in D9 before starting the upgrade process, they will appear as “Modules that will not be upgraded” here.
      • Contrib modules without migrations — what this blog series is intended to help change — will appear as “Modules that will not be upgraded” in this step of the migration wizard.
    5. When the upgrade is complete, you’ll be returned to the site’s front page.
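
The database connection details the wizard asks for are the same ones the D7 site itself uses, so if in doubt, check the `$databases` array in the D7 codebase’s sites/default/settings.php. For example (all values here are placeholders):

```php
<?php
// sites/default/settings.php on the Drupal 7 site.
$databases = array(
  'default' => array(
    'default' => array(
      'driver' => 'mysql',
      'database' => 'drupal7',   // "Database name" in the wizard
      'username' => 'drupal7',   // "Username"
      'password' => 'secret',    // "Password"
      'host' => 'localhost',     // "Database host"
      'port' => '3306',          // "Port number"
      'prefix' => '',
    ),
  ),
);
```

Remember that the D9 site connects to this database directly, so the host and port must be reachable from the D9 site’s environment, not just from your browser.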

Congratulations, you’re done: you can now explore the D9 site and see the users, content, configuration, etc. that was migrated from the D7 site!

If you go back to /upgrade, you’ll see “An upgrade has already been performed on this site”, and a button to “Import new configuration and content from the old site”, i.e.: things that had changed on the D7 site since the last migration.

Next steps

If you’d like to test out some already-working contrib migrations, try out the Recipe module.

In the next blog post, we’ll talk about reviewing migration patches.

One last thing: a Proposal for a migration issue tag

To close off this blog post, I’d like to propose that the Drupal.org issue tag maintainers add an official migration issue tag to the list of official issue tags.

A migration tag would make it a lot easier for Site Builders to find patches for their modules; and for contributors to write and review those patches.

You can weigh in on this proposal in issue #3227012

  1. I thought you said “63 contrib projects” earlier! Where did 104 modules come from? Recall that a project can have sub-modules/sub-themes - turns out 25 were sub-modules. 17 more were Features (i.e.: what we used before the Configuration Management Initiative). Another 5 were custom modules - I’ve gotta write migrations for those ones myself. ↩︎

The article Easy commit credits with migrations, part 1: Migrating Drupal Core first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Apr 21 2021

6 minute read Published: 21 Apr, 2021 Author: Christopher Gervais
Drupal , Drupal Planet , DevOps

Introduction to the Introduction

Over the last few years we’ve built lots of Drupal 8 sites, and some Drupal 9 ones too, both for our clients and for ourselves. As such, we’ve taken a keen interest in (read: faced many challenges with) the Configuration Management subsystem. This was a major new component in Drupal 8, and so, while it’s functional, it isn’t yet mature. Of course, the vibrant Drupal developer community jumped in to smooth the rough edges and fill the gaps, in what has since become known as CMI 2.0.

At Consensus, we tend to work on fairly large, complex Drupal projects that often require significant custom development. As such, we’ve adopted fairly rigorous software engineering processes, such as standardized local development environments, CI-enabled Test Driven Development, Continuous Delivery, etc.

However, we struggled to find a workflow that leveraged the powerful CMI system in core, while not being a pain for developers.

Configuration Management in D8+

The core CMI workflow assumes you are transferring configuration between multiple instances (Dev, Stage, Prod) of a single site install, and that this configuration is periodically synchronized as a lump. For a number of reasons, this wouldn’t work for us.

As a result, we went back to our old standby, the venerable Features module, which worked reasonably well. Unfortunately, we found that it would sometimes handle dependencies between configuration objects poorly. On more than one occasion, this led to time-consuming debugging cycles.

So we switched to using Config Profile instead. However, reverting config changes was still manual, so we started using Config Update and the related Update Helper.

The Update Helper module, “offers supporting functionalities to make configuration updates easier.” Basically, when preparing for a release, Update Helper generates a special file, a “configuration update definition” (CUD). The CUD contains two values for each changed config. The first is the “current” value for a given configuration, as of the most recent release. The second is the new value to which you want to set that config.

These values are captured by first rolling back to the most recent release, then installing the site, so that the value is in active config. Then you check out your latest commit, so that the new values are available on the filesystem. Update Helper can then generate its CUD, as well as hook_update() implementations to help deploy the new or changed config.

This process turned out to be error-prone, and difficult to automate reliably.

We explored other efforts too, like Config Split and Config Filter which allow for finer-grained manipulation of “sets” of config. Other projects, like Config Distro, are focused on “packaging” groups of features such that they can be dropped in to any given site easily (kind of like Features…)

A simple, reliable method to deploy new or updated configuration remained elusive.

The underlying problem

Note that all the tools mentioned above work very well during initial project development, prior to production release. However, once you need to deploy config changes to systems in production, Update Helper or similar tools and processes are required, along with all the overhead that implies.

At this point, it’s worth reminding ourselves that Drupal 7 and earlier versions did not clearly distinguish between content and config. They all just lived in the site’s database, after all. As such, whatever configuration was on the production site was generally considered canonical.

It’s tempting to make small changes directly in production, since they don’t seem to warrant a full release, and all the configuration deployment overhead that entails. This, in turn, requires additional discipline to reproduce those changes in the codebase.

Of course, that isn’t the only reason for configuration drift. Well-meaning administrators cannot easily distinguish between configs that are required for the proper operation of the site, and those that have more cosmetic effects.

Facing these challenges, we’d regularly note how much easier all of this would be if only we could make production configuration read-only.

A new approach

With some reluctance and much consideration, we decided to try an entirely new approach. We built Config Enforce (and its close companion Config Enforce Devel) to solve the two key problems we were running into:

  1. Developers needed an easy way to get from “I made a bunch of config-related changes in the Admin UI of my local site instance” to “I can identify the relevant config objects/entities which have changed, get them into my git repository, and push them upstream for deployment”.
  2. Operations needed an easy way to deploy changes in configuration, and ideally not have to worry too much about the previously-inevitable “drift” in the production-environment configuration, which often resulted in tedious and painful merging of configs, or worse yet, inadvertent clobbering of changes.

Config Enforce has two “modes” of operation: with config_enforce_devel enabled, you are in “development mode”. You can quickly designate config objects you want to enforce (usually inline on the configuration page where you manipulate the configuration object itself), and then changes you make are immediately written to the file system.

This mode leverages Config Devel to effectively bypass the active configuration storage in the database, writing the config files into target extensions you select. Each target extension builds up a “registry” of enforced configuration objects, keeping track of their location and enforcement level. This eases the development workflow by making it easy to identify which configuration objects you’ve changed without having to explicitly config-export and then identify all and only the relevant .yml files to commit.

In production mode, you enable only the config_enforce module, which leverages the same “registry” configuration that config_enforce_devel has written into your target extensions, and performs the “enforcement” component. This means that, for any enforced configuration objects, we block any changes from being made via the UI or optionally even API calls directly. In turn, the enforced configuration settings on the file system within target extensions become authoritative, being pulled in to override whatever is in active configuration whenever a cache rebuild is triggered.

This means that deployment of configuration changes becomes trivial: commit and push new enforced configuration files in your target extensions, pull those into the new (e.g., Prod) environment, and clear caches. Config Enforce will check all enforced configuration settings for changes on the file system, and immediately load them into active configuration on the site.

This workflow requires some adjustment in how we think about configuration management, but we think it has promise. Especially if you are building Drupal distributions or complex Drupal systems that require repeatable builds and comprehensive testing in CI, you should give Config Enforce a try and see what you think. Feedback is always welcome!

We’ve scratched our own itch, and so far have found it useful and productive. We are pleased to make it available to the Drupal community as another in the arena of ideas surrounding CMI 2.0.

The article Introducing Config Enforce first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Jan 27 2021

11 minute read Published: 27 Jan, 2021 Author: Seonaid Lee
DevOps , Drupal Planet

A lot of potential clients come to us with straightforward and small projects and ask, “Well, can you do Kubernetes?” And we say, “Well, we can, but you don’t need it.”

But they’re afraid that they’ll be missing out on something if we don’t add Kubernetes to the stack. So this is a post to tell you why we probably won’t be recommending Kubernetes.

This post is going to look at three perspectives on this question… First, I’ll consider the technical aspects, specifically what problems Kubernetes is really good for, compared with what problems most smaller software projects actually have.

Then I’ll talk about the psychology of “shiny problems.” (Yes, I’m looking at you, Ace Developer. I promise there are other shiny problems in the project you’re working on.)

And last but not least, we’ll consider the business problem of over-engineering, and what gets lost in the process.

Kubernetes solves specific problems

First off, (I’m sorry to draw your attention to this, but): You probably don’t have the problems that Kubernetes solves. Kubernetes lives at the container orchestration level. It shines in its ability to spin up and down stateless servers as needed for load balancing unpredictable or pulsed loads, especially from a large user base. Large, by the way, is not 10,000… it is millions or hundreds of millions.

Especially for the kind of internal custom services that a lot of our clients require, it is overkill. Many purpose-built sites are unlikely to have more than dozens or hundreds of users at a time, and traditional monolithic architectures will be responsive enough.

Kubernetes is designed to solve the problem of horizontal scalability, by making multiple copies of whichever services are most stressed, routing requests to minimize latency, and then being able to turn those machines back off when they are no longer needed. Even if you hope to someday have those problems, we suggest that you should hold off on adding Kubernetes to your stack until you get there, because the added technical overhead of container orchestration is expensive, in both time and dollars.

(It costs more to build, which delays your time to market, which delays your time to revenue, even if you aren’t paying yourself to build it.)

Which does lead to the question, “Why does everybody want to use this technology, anyway?” For that, we’ll have to take a step back and look at…

The Rise of the Twelve-Factor App

With the shift to the cloud and the desire for highly scalable applications, a new software architecture has arisen that has a strong separation between a system’s code and its data.

This approach treats processes as stateless and independent, and externalizes the database as a separate “backing service.” The stateless processes are isolated as microservices, which are each maintained, tested, and deployed as separate code bases.

This microservices approach decomposes the software into a group of related but separate apps, each of which is responsible for one particular part of the application.

Designing according to this architectural approach is non-trivial, and the overhead associated with maintaining the separate code bases, and particularly in coordinating among them is significant. Additionally, each app requires its own separate datastore, and maintaining synchronization in production introduces another level of complexity. Furthermore, extracting relevant queries from distributed systems of data is more challenging than simply writing a well-crafted SQL statement.

Each of these layers of complexity adds to the cost of not only the initial development, but also the difficulty of maintenance. Even Chris Richardson, in Microservices Patterns, recommends starting with a monolithic architecture for new software to allow rapid iteration in the early stages. (https://livebook.manning.com/book/microservices-patterns/chapter-1/174)

For many of the same reasons, you probably don’t need complex layers of data handling either. Redis, for example, is for persisting rapidly changing data in a quickly accessible form. It’s not suitable for a long-standing database with well-established relations, it costs more to run data in memory than to store it on disk, and it’s more difficult to build.

When you are getting started, a SQL back end with a single codebase will probably solve most of your problems, without the overhead of Kubernetes (or any of the more exotic data stores). If you’re still not convinced, let’s take a brief detour and consider the lifecycle of a typical application.

Development, CI, and Upgrades (Oh My!)

Most applications have the following characteristics:

  • Predictable load
  • Few (fewer than millions of) users
  • Predictable hours of use (open hours of the business, daily batch processing cron job at 3 AM, etc.)
  • Clear options for maintenance windows
  • Tight connection between the content layer and the presentation layer

Contrast this with the primary assumptions in the twelve-factor app approach.


The goal of moving to stateless servers is focused on different things:

  • Zero downtime
  • Rapid development
  • Scalability

This approach arose from the needs of large consumer-facing applications like Flickr, Twitter, Netflix, and Instagram. These need to be always-on for hundreds of millions or billions of users, and have no option for things like maintenance mode.

Development and Operations have different goals

When we apply the Dev-Ops calculus to smaller projects, though, there is an emphasis on Dev that comes at the expense of Ops.

Even though we may include continuous integration, automated testing and continuous deployment (and we strongly recommend these be included!), the design and implementation of the codebase and dependency management often focuses on “getting new developers up and running” with a simple “bundle install” (or “build install” etc.)

This is explicitly stated as a goal in the twelve-factor list.

This brings in several tradeoffs and issues in the long-term stability of the system; in particular, the focus on rapid development comes at a cost for operations and upgrades. The goal is to ship things and get them standing up from a cold start quickly… which is the easy part. The more difficult part of operations – the part you probably can’t escape, because you probably aren’t Netflix or Flickr or Instagram – is the maintenance of long-standing systems with live data.

Version Upgrades

Upgrades of conventional implementations proceed thusly:

  1. Copy everything to a staging server
  2. Perform the upgrade on the staging server
  3. If everything works, port it over to the production environment

There are time delays in this process: for large sites it can take hours to replicate a production database to staging, and if you want a safe upgrade, you need to put the site into maintenance mode to prevent the databases from diverging. The staging environment, no matter how carefully you set it up, is rarely an exact mirror of production; the connections to external services, passwords, and private keys for example, should not be shared. Generally, after the testing is complete in the staging environment, the same sequence of scripts is deployed in production. Even after extensive testing, it may prove necessary to roll back the production environment, database and all. Without the use of a maintenance freeze, this can result in data loss.

This sort of upgrade between versions is significantly easier in monolithic environments.

But isn’t Kubernetes supposed to make that easier?

It’s tempting to point to Kubernetes’ rolling updates and the ability to connect multiple microservices to different pods of the database running the different versions… but in content-focused environments, the trade-off for zero downtime is an additional layer of complexity required to protect against potential data loss.

Kubernetes and other 12-factor systems resolve the issue of data protection by sharding and mirroring the data across multiple stores. The database is separate from the application, and upgrades and rollbacks proceed separately. This is a strength for continuous delivery, but it comes at a cost: data that is produced in a blue environment during a blue-green deployment may simply be lost if it proves necessary to roll back schema changes. Additionally, if there are breaking changes to the schema and the microservices wind up attached to a non-backward compatible version, they can throw errors to the end-user (this is probably preferable to data loss.)
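
For context, the rolling-update behaviour referred to above is configured on a standard Kubernetes Deployment. A minimal sketch (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the replica count
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:2.0   # the new version, rolled out pod by pod
```

Note that this only rolls the stateless pods; coordinating a matching (and reversible) database schema change, as described above, is entirely up to you.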

For data persistence, the data needs to be stored in volumes external to the K8s cluster, and orchestrating multiple versions of the code base and database simultaneously requires significant knowledge and organization.

A deployment plan for such a system will need to include plans for having multiple versions of the code live on different servers at the same time, each of which connects to its associated database until the upgrade is complete and determined to be stable. It can be done, but even Kubernetes experts point out that this process is challenging to oversee.

When we are moving things into production, we need to have an operations team that knows how to respond when something fails. No matter how much testing you have done, sometimes a Big Hairy Bug gets into production, and you need to have enough control of your system to be able to fix it. Kubernetes, sad to say, makes this harder instead of easier for stateful applications.

So let’s consider what it means to have a stateful application.

When the Data is Intrinsic to the Application

A content management system by its nature is stateful. A stateful application has a lot of data that makes up a large fraction of “what it is.” State can also include cache information, which is volatile, but the data is part and parcel of what we are doing. Databases and the application layer are frequently tightly integrated, and it’s not meaningful to ship a new build without simultaneously applying schema updates. The data itself is the point of the application.

Drupal (for example) contains both content and configuration in the database, but there is additional information contained in the file structure. These, in combination, make up the state of the system… and the application is essentially meaningless without it. Also, as in most enterprise-focused applications, this data is not flat but is highly structured. The relationships are defined by both the database schema and the application code. It is not the kind of system that lends itself to scaling through the use of stateless containers.

In other words: by their very nature, Drupal applications lack the strict separation between database and code that makes Kubernetes an appropriate solution.

Shiny Problems

One of the things that (we) engineers fall into is a desire to solve interesting problems. Kubernetes, as one of the newest and most current technologies, is the “Shiny” technology towards which our minds bend.

But it is complex, has a steep learning curve, and is not the first choice when deploying stateful applications. This means that a lot of the problems you’re going to have to solve are going to be related to the containers and Kubernetes/deployment layer of the application, which will reduce the amount of time and energy you have to solve the problems at the data model and the application layer. We’ve never built a piece of software that didn’t have some interesting challenges; we promise they are available where you are working.

Also, those problems are probably what your company’s revenues rely on, so you should solve them first.

To Sum Up: Over-engineering isn’t free

As I hope I’ve convinced you, the use of heavier technologies than you need burns through your resources and has the potential to jeopardize your project. The desire to architect for the application you hope to have (rather than the one you do) can get your business into trouble. You will need more specialized developers, more complex deployment plans, additional architectural meetings and more coordination among the components.

When you choose technologies that are overpowered (in case you need them at some undefined point in the future), you front-load your costs and increase the risk that you won’t make it to revenue/profitability.

We get it. We love good tech as much as the next person.

But wow! Superfast!

The fact is, though, most projects don’t need response times measured in the millisecond range. They just need to be fast enough to keep users from wandering away from the keyboard while their query loads. (Or they need a reasonable queuing system, batch processing, and notification options.)

And even if you do need millisecond response times but you don’t have millions of users, Kubernetes will still introduce more problems than it solves.

Performance challenges like these are tough, but generally need to be solved by painstaking, time-consuming, unpredictable trial and error. And the more subcomponents your application is distributed or sharded into, the harder (more time-consuming, more unpredictable, by orders of magnitude!) that trial and error gets.

But what if we’re the Next Big Thing?

Most sites are relatively small and relatively stable and will do quite well on a properly-sized VM with a well-maintained code base and a standard SQL server. Minimizing your technological requirements to those that are necessary to solve the problems at hand allows you to focus on your business priorities, leaving the complexity associated with containerization and the maintenance of external stateful information to a future iteration.

Leave the “How are we going to scale?” problem until you actually get there, and you increase the chances that this will eventually be the problem you have.

The article Kubernetes Won’t Save You first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Mar 12 2020

2 minute read Published: 12 Mar, 2020 Author: Colan Schwartz
Drupal Planet , Composer , Aegir , DevOps , Automation , Drupal

Best practices for building Web sites in the Drupal framework (for major versions 8 and above) dictate that codebases should be built with the Composer package manager for PHP. That is, the code repository for any sites relying on it should not contain any upstream code; it should only contain a manifest (such as a composer.json file) with instructions for assembling it.
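As an illustration, such a repository would carry little more than a manifest along these lines (a minimal sketch; the package names, version constraints, and installer-paths layout are typical of the drupal-composer project template, not prescriptive):

```json
{
  "name": "example/site",
  "type": "project",
  "require": {
    "drupal/core-recommended": "^8.9",
    "drupal/admin_toolbar": "^2.0",
    "drush/drush": "^10.0"
  },
  "extra": {
    "installer-paths": {
      "web/core": ["type:drupal-core"],
      "web/modules/contrib/{$name}": ["type:drupal-module"]
    }
  }
}
```

Running `composer install` against this manifest reassembles the full codebase, so none of the upstream code needs to live in Git.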

However, there are some prominent Drupal hosting companies that don’t support Composer natively. That is, after receiving updates to Composer-controlled Git repositories, they don’t automatically rebuild the codebase, which should result in changes to the deployed code.

If you’re hosting your site(s) at one of these companies, and you have this problem, why not consider the obvious alternative?

Aegir, the one-and-only open-source hosting system for Drupal that’s been around for over 10 years, has had native Composer support for over 2 years. That is, on each and every platform deployment (“platform” is Aegir-speak for a Drupal codebase), Aegir reassembles the upstream code assets by running the following automatically:

composer create-project --no-dev --no-interaction --no-progress

As a result, any sites created on that platform (or migrated/upgraded to it) will have access to all of the assets built by Composer.

Additionally, Aegir now ships with the Aegir Deploy module, which enhances the platform creation process. It allows for the following types of deployment:

  • Classic/None/Manual/Unmanaged
  • Drush Makefile deployment
  • Pure Git
  • Composer deployment from a Git repository
  • Composer deployment from a Packagist repository

For more information, please read the Deployment Strategies section of the documentation.

If you’d like to get started with Aegir, the best option would be to spin up an Aegir Development VM, which allows you to run it easily, play with it, and get familiar with the concepts. Naturally, reading the documentation helps with this too.

Afterwards, review the installation guide for more permanent options, and take advantage of our Ansible roles. We have a policy role that configures the main role using our favoured approach.

For help, contact the community, or get in touch with us directly. We provide the following Aegir services:

  • Installation & maintenance in corporate/enterprise (or other) environments
  • Architectural and technical support
  • Hosting guidance
  • Coaching
  • Audits
  • Upgrades
  • Conversion to best practices

The article Does your Drupal hosting company lack native Composer support? first appeared on the Consensus Enterprises blog.


Nov 15 2019

1 minute read Published: 15 Nov, 2019 Author: Colan Schwartz
Drupal Planet , SaaS , OpenSaaS , DevOps , Aegir , OpenStack , Presentations

On Friday, June 14th, I presented this session at Drupal North 2019. That’s the annual gathering of the Drupal community in Ontario and Quebec, in Canada.

As I realized I hadn’t yet posted this information, I’m doing so now.

Session information:

Are you (considering) building a SaaS product on Drupal or running a Drupal hosting company? Have you done it already? Come share your experiences and learn from others.

Among other things, we’ll be discussing:

…and any other related topics that come up.

A video recording of my presentation is available on:

My slides (with clickable links) are available on our presentations site.

The article Drupal North 2019: Drupal SaaS: Building software as a service on Drupal first appeared on the Consensus Enterprises blog.


Nov 07 2019

6 minute read Published: 7 Nov, 2019 Author: Derek Laventure
Drupal Planet , Drupal , Lando , Drumkit

Over the last 2 or 3 years, the Drupal community has been converging around a solid set of Docker-based workflows to manage local development environments, and there are a number of worthy tools that make life easier.

My personal favourite is Lando, not only because of the Star Wars geekery, but also because it makes easy things easy and hard things possible (a lot like Drupal). I appreciate that a “standard” Lando config file is only a few lines long, but that it’s relatively easy to configure and customize a much more complex setup by simply adding the appropriate lines to the config.

In this post I want to focus on an additional tool I’ve come to lean on heavily that complements Lando quite nicely, and that ultimately boils down to good ol’ fashioned Makefiles. Last summer at DrupalNorth I gave a talk that was primarily about the benefits of Lando, and I only mentioned Drumkit in passing. Here I want to illustrate in more detail how and why this collection of Makefile tools is a valuable addition to my localdev toolbox.

The key benefits provided by adding a Drumkit environment are:

  • consistent make-based workflow to tie various dev tasks together
  • ease onboarding of new devs (make help)
  • make multistep tasks easier (make tests)
  • make tasks in Lando or CI environment the same (i.e. make install && make tests)

Drumkit is not just for Drupal!

This example uses Drumkit for a Drupal 8 localdev environment, but there’s no reason you couldn’t use it for other purposes (and in fact, we at Consensus have lately been doing just that).

Basic Setup

As an example, suppose you’re setting up a new D8 project from scratch. Following this slide from my Lando talk, you would do the basic Lando D8 project steps:

  1. Create codebase with Composer (composer create-project drupal-composer/drupal-project:8.x-dev code --stability dev --no-interaction)
  2. Initialize Git repository (git init etc.)
  3. Initialize Lando (lando init)

For now, leave out the lando start step, which we’ll let Drumkit handle momentarily. We should also customize the .lando.yml a little with custom database credentials, which we’ll tell Drumkit about later. Append the following to your .lando.yml:

      user: chewie_dbuser
      password: chewie_dbpass
      database: chewie_db
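Those lines belong under the database service definition; in context, a minimal .lando.yml might look something like this (the recipe and the `creds` override key are assumptions here, so check them against the Lando documentation for your version):

```yaml
name: chewie
recipe: drupal8
config:
  webroot: web
services:
  database:
    creds:
      user: chewie_dbuser
      password: chewie_dbpass
      database: chewie_db
```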

Add Drumkit

To insert Drumkit into this setup, we add it as a git submodule to our project using the helper install.sh script, and bootstrap Drumkit:

wget -O - https://gitlab.com/consensus.enterprises/drumkit/raw/master/scripts/install.sh | /bin/bash
. d  # Use 'source d' if you're not using Bash

The install script checks that you are in the root of a git repository, and pulls in Drumkit as a submodule, then initializes a top-level Makefile for you.

Finally, we initialize the Drumkit environment by sourcing the d script (itself a symlink to .mk/drumkit) into our shell.

Drumkit modifies the (shell) environment!

Note that Drumkit will modify your PATH and BIN_PATH variables to add the project-specific .mk/.local/bin directory, which is where Drumkit installs any tools you request (e.g. with make selenium). This means if you have multiple Drumkit-enabled projects on the go, you’re best to work on them in separate shell instances, to keep these environment variables distinct.
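In shell terms, the relevant part of that bootstrap does something roughly like this (a simplified sketch; the real script does considerably more):

```shell
# Prepend the project-local tool directory to the search path,
# so project-specific tools win over system-wide ones.
BIN_PATH="$PWD/.mk/.local/bin"
PATH="$BIN_PATH:$PATH"
export BIN_PATH PATH
```

Because these are per-shell exports, each shell instance carries its own project's tool directory.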

Note that you can take advantage of this environment-specific setup to customize the bootstrap script to (for example) inject project credentials for external services into the shell environment. Typically we would achieve this by creating a scripts/bootstrap.sh that in turn calls the main .mk/drumkit, and re-point the d symlink there.
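A minimal sketch of such a wrapper, assuming a placeholder variable name and token (both hypothetical), might be:

```shell
#!/bin/bash
# scripts/bootstrap.sh (hypothetical wrapper): inject project-specific
# credentials into the shell environment, then delegate to the stock
# Drumkit bootstrap script.
export EXAMPLE_SERVICE_TOKEN="${EXAMPLE_SERVICE_TOKEN:-changeme}"

# Hand off to the real Drumkit bootstrap, if it is present.
if [ -f .mk/drumkit ]; then
  . .mk/drumkit
fi
```

With the d symlink re-pointed at this script, `. d` picks up the credentials and the standard Drumkit environment in one step.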

Set up your kit

Because we’re using Composer to manage our codebase, we also add a COMPOSER_CACHE_DIR environment variable, using the standard .env file, which Drumkit’s stock bootstrap script will pull into your environment:

echo "COMPOSER_CACHE_DIR=tmp/composer-cache/" >> .env
. d # Bootstrap Drumkit again to have this take effect

From here, we can start customizing for Drupal-specific dev with Lando. First, we make a place in our repo for some Makefile snippets to be included:

mkdir -p scripts/makefiles
echo "include scripts/makefiles/*.mk" >> Makefile

Now we can start creating make targets for our project (click the links below to see the file contents in an example Chewie project). For modularity, we create a series of “snippet” makefiles to provide the targets mentioned above:

NB You’ll need to customize the variables.mk file with the DB credentials you set above in your .lando.yml as well as your site name, admin user/password, install profile, etc.
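As an illustration, a variables.mk along these lines would tie the Make targets back to the Lando config above (the variable names are hypothetical; use whatever your snippet makefiles actually reference):

```make
# scripts/makefiles/variables.mk (hypothetical sketch).
# These must match the creds appended to .lando.yml earlier.
DB_USER     = chewie_dbuser
DB_PASS     = chewie_dbpass
DB_NAME     = chewie_db

# Site installation settings.
SITE_NAME   = Chewie
PROFILE     = standard
ADMIN_USER  = admin
ADMIN_PASS  = admin
```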

Now our initial workflow to setup the project looks like this:

git clone --recursive 
. d # or "source d" if you're not using Bash
make start
make build
make install

This will get a new developer up and running quickly, and can be customized to add whatever project-specific steps are needed along the way.

But wait: it gets even better! If I want to make things really easy on fellow developers (or even just myself), I can consolidate common steps into a single target within the top-level Makefile. For example, append the make all target to your Makefile:

.PHONY: all
all:
	@$(MAKE-QUIET) start
	@$(MAKE-QUIET) build
	@$(MAKE-QUIET) install

Now, the above workflow for a developer getting bootstrapped into the project simplifies down to this:

git clone --recursive 
. d
make all

Customize your kit

At this point, you can start adding your own project-specific targets to make common workflow tasks easier. For example, on a recent migration project I was working on, we had a custom Features module (ingredients) that needed to be enabled, and a corresponding migration module (ingredients_migrate) that needed to be enabled before migrations could run.

I created the following make targets to facilitate that workflow:
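A sketch of what those targets might look like (the target names and drush invocations here are assumptions; migrate:import comes from the migrate_tools project):

```make
.PHONY: enable-migrate migrate

# Enable the custom Features module and its companion migration module.
enable-migrate:
	lando drush en -y ingredients ingredients_migrate

# Run all migrations (requires migrate_tools to be installed).
migrate: enable-migrate
	lando drush migrate:import --all
```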

We often take this further, adding a make tests target to setup and run our test suite, for example. This in turn allows us to automate the build/install/test process within our CI environment, which can call exactly the same make targets as we do locally.

Ultimately, Drumkit is a very simple idea: superimpose a modular Makefile-driven system on top of Lando to provide some syntactic sugar that eases developer workflow, makes consistent targets that CI can use, and consolidates multi-step tasks into a single command.

There’s lots more that Drumkit can do, and plenty of ideas we have yet to implement, so if you like this idea, feel free to jump in and contribute!

The article Lando and Drumkit for Drupal 8 Localdev first appeared on the Consensus Enterprises blog.


Oct 25 2019

1 minute read Published: 24 Oct, 2019 Author: Christopher Gervais
Drupal Planet , Automation , DevOps , Ansible , OpenStack , Presentations

On Friday, October 18th, I presented at DrupalCamp Ottawa 2019. That’s the annual gathering of the Drupal community in Ottawa, Ontario, Canada.

Session information:

Ever heard of infrastructure-as-code? The idea is basically to use tools like Ansible or Terraform to manage the composition and operation of your cloud systems. This allows infrastructure to be treated just like any other software system. The code can be committed into Git which allows auditability, and reproducibility. It can therefore be tested and integrated into full continuous delivery processes.

Ansible provides tonnes of cloud management modules, from simple Linodes or Digital Ocean Droplets through globe-spanning AWS networks. Ansible also strives for simplicity, resulting in playbooks that are essentially self-documenting.

In this session, we will:

  • explore the principles of infrastructure-as-code and how to operationalize them;
  • introduce Ansible and its cloud modules;
  • build a full OpenStack cloud infrastructure end-to-end from scratch.

A video recording of my presentation is available on YouTube.


My presentation slidedeck can be downloaded here: Automate All the Things!

The article DrupalCamp Ottawa 2019: Automate All the Things first appeared on the Consensus Enterprises blog.


Oct 25 2019

5 minute read Published: 24 Oct, 2019 Author: Colan Schwartz
Drupal Planet , Semantic Web

As a content management framework, Drupal provides strong support for its taxonomical subsystem for classifying data. It would be great if such data could be exposed via the Simple Knowledge Organization System (SKOS) standard for publishing vocabularies as linked data. As Drupal becomes used more and more as a back-end data store (due to features such as built-in support for JSON:API), presenting this data in standard ways becomes especially important.

So is this actually possible now? If not, what remains to be done?

Drupal’s history

First, let’s explore some of Drupal core’s history as it relates to the Semantic Web and Web services formats, also useful for future reference. This is basically the backstory that makes all of this possible.

1. REST support was added to Views

This was implemented in the (now closed) issues:

2. Non-Schema.org namespace mappings were removed (including contrib’s UI support) in Drupal 8

Here’s the change notice:

And a follow-up issue requesting support for additional namespaces:

3. The community chose to replace JSON-LD with HAL in Drupal 8

Here’s an article with the details:

Taxonomy Screenshot

Multiple Components

As this is really a two-part issue, adding machine-readable metadata and then making machine-readable data available, I’ll split the discussion into two sections.

Adding machine-readable metadata

While there’s an RDF UI module that enables one to specify mappings between Drupal entities and their fields with RDF types and properties, it only supports Schema.org via RDFa (not JSON-LD).

As explained very well in Create SEO Juice From JSON LD Structured Data in Drupal, a better solution is to use the framework provided by the Metatag module (used by modules such as AGLS Metadata). The article introduces the Schema.org Metatag module, which uses the Metatag UI to allow users to map Drupal data to Schema.org, and exposes it via JSON-LD.

So one solution would be to:

  1. Clone Schema.org Metatag, calling the new module SKOS Metatag.
  2. Replace all of the Schema.org specifics with SKOS.
  3. Rejoice.

But after taking some time to process all of the above information, I believe we should be able to use the knowledge of the vocabulary hierarchy to add the SKOS metadata. We probably don’t need any admin UI at all for configuring mappings.

Assuming that’s true, we can instead create a SKOS module that doesn’t depend on Metatag, but Metatag may still be useful given that it already supports Views.

Making the machine-readable data available

Exposing the site’s data can be done best through Views. I wouldn’t recommend doing this any other way, e.g. accessing nodes (Drupal-speak for records) directly, or through any default taxonomy links for listing all of a vocabulary’s terms. (These actually are Views, but their default set-ups are missing configuration.) A good recipe for getting this up & running, for both the list and individual items, is available at Your First RESTful View in Drupal 8.

To actually access the data from elsewhere, you need to be aware of the recent API change “To access REST export views, one now MUST specify a ?_format=… query string”, which explains why some consumers broke.
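Concretely, a consumer now has to append the format to the view's path explicitly (the hostname and endpoint below are hypothetical):

```shell
# Build the request URL for a REST export view; without the
# ?_format=json query string, the request is rejected after this change.
BASE="https://example.com"
ENDPOINT="/api/vocabulary/terms"   # hypothetical REST export view path
URL="${BASE}${ENDPOINT}?_format=json"
echo "$URL"
# → https://example.com/api/vocabulary/terms?_format=json
```

Fetching it is then just `curl -s "$URL"`.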

The JSON-LD format is, however, not supported in Core by default. There is some code in a couple of sandboxes, which may or may not work, that will need to be ported to the official module, brought up-to-date, and have a release (ideally stable) cut. See the issue JSON-LD REST Services: Port to Drupal 8 for details.

Now, the Metatag solution I proposed in the previous section may work with Views natively, already exposing data as JSON-LD. If that’s the case, this JSON-LD port may not be necessary, but this remains to be seen. Also, accessing the records directly (without Views) may work as well, but this also remains to be seen after that solution is developed.


Clearly, there’s more work to be done. While the ultimate goal hasn’t been achieved yet, at least we have a couple of paths forward.

That’s as far as I got with pure research. Due to priorities shifting on the client project, I didn’t get a chance to learn more by reviewing the code and testing it to see what does and doesn’t work, which would be the next logical step.

If you’ve got a project that could make use of any of this, please reach out. We’d love to help move this technology further along and get it implemented.


General information

Contributed modules that probably aren’t helpful (but could be)

Questions about importing SKOS data (not exporting it)

The article Exposing Drupal’s Taxonomy Data on the Semantic Web first appeared on the Consensus Enterprises blog.


Oct 08 2019

9 minute read Published: 8 Oct, 2019 Author: Derek Laventure
Drupal Planet , Drupal , OpenSocial

In Drupal 7, hook_update()/hook_install() were well-established mechanisms for manipulating the database when installing a new site or updating an existing one. Most of these routines ended up directly running SQL against the database, where all kinds of state, configuration, and content data lived. This worked reasonably well if you were careful and had a good knowledge of how the database schema fit together, but things tended to get complicated.

With the maturing of Features module, we were able to move some of this into configuration settings via the ctools-style export files, making the drush feature-revert command part of standard workflow for deploying new features and updates to an existing site.

In Drupal 8, we’ve made huge strides in the direction of Object Orientation, and started to separate Configuration/State, Content Structure, and Content itself. The config/install directory is often all that’s needed in terms of setting up a contributed or custom module to work out of the box, and with the D8 version of Features, the same is often true of updates that involve straightforward updates to configuration .yml files.

It turns out that both hook_update() and hook_install() are still valuable tools in our box, however, so I decided to compile some of the more complicated D8 scenarios I’ve run across recently.

Drupal 8 Update basics

The hook_update_N API docs reveal that this function operates more or less as before, with some excellent guidelines for how to approach the body of the function’s implementation. The Introduction to update API handbook page provides some more detail and offers some more guidance around the kinds of updates to handle, naming conventions, and adding unit tests to the your update routines.

The sub-pages of that Handbook section have some excellent examples covering the basics:

All of these provided a valuable basis on which to write my own real-life update hooks, but I found I still had to combine various pieces and search through code to properly write these myself.


We recently launched our first complex platform based on Drupal 8 and the excellent OpenSocial, albeit heavily modified to suit the particular requirements of the project. The sub-profile required more extensive customization than simply extending the parent profile’s functionality (as discussed here). Instead, we needed to integrate new functionality into that provided by the upstream distribution, and this often resulted in tricky interactions between the two.

Particularly with a complex site with many moving parts, we take the approach of treating the site as a system or platform, installing and reinstalling regularly via a custom installation profile and set of feature modules. This allows us to integrate:

  • a CI system to build the system repeatedly, proving that everything works
  • a Behat test suite to validate the behaviour of the platform matches the requirements

In the context of a sub-profile of OpenSocial, this became complicated when the configuration we wanted to customize actually lived in feature modules from the upstream profile, and there was no easy way to just override them in our own modules’ config/install directories.

We developed a technique of overriding entire feature modules within our own codebase, effectively forking the upstream versions, so that we could then modify the installed configuration and other functionality (in Block Plugins, for example). The trouble with this approach is that you have to manage the divergence upstream, incorporating new improvements and fixes manually (and with care).

Thus, in cases where there were only a handful of configuration items to correct, we began using hook_install() routines to adjust the upstream-installed config later in the install process, to end up with the setup we were after.

Adjust order of user/register form elements

We make use of entity_legal for Terms of Service, Privacy Policy, and User Guidelines documents. Our installation profile’s feature modules create the 3 entity legal types, but we needed to be able to tweak the order of the form elements on the user/register page, which is a core entity_form_display created for the user entity.

To achieve this using YAML files in the config/install directory per usual seemed tricky or impossible, so I wrote some code to run near the end of the installation process, after the new legal_entity types were created and the core user.register form display was set. This code simply loads up the configuration in question, makes some alterations to it, and then re-saves:

/**
 * Implements hook_install().
 */
function example_install() {
  _example_install_adjust_legal_doc_weights();
}

/**
 * Adjust weights of legal docs in user/register form.
 */
function example_update_8001() {
  _example_install_adjust_legal_doc_weights();
}

/**
 * Ensure the field weights on the user register form put legal docs at the bottom.
 */
function _example_install_adjust_legal_doc_weights() {
  $config = \Drupal::getContainer()->get('config.factory')->getEditable('core.entity_form_display.user.user.register');
  $content = $config->get('content');

  $content['private_messages']['weight'] = 0;
  $content['account']['weight'] = 1;
  $content['google_analytics']['weight'] = 2;
  $content['path']['weight'] = 3;
  $content['legal_terms_of_service']['weight'] = 4;
  $content['legal_privacy_policy']['weight'] = 5;
  $content['legal_user_guidelines']['weight'] = 6;
  $config->set('content', $content)->save();
}
Modify views configuration managed by upstream (or core)

A slightly more complicated situation is to alter a views configuration that is managed by an upstream feature module during the installation process. This is not an ideal solution, but currently it’s quite challenging to properly “override” configuration that’s managed by a “parent” installation profile within your own custom sub-profile (although Config Actions appears to be a promising solution to this).

As such, this was the best solution I could come up with: essentially, run some code very nearly at the end of the installation process (an installation profile task after all the contrib and feature modules and related configuration are installed), that again loads up the views configuration, changes the key items needed, and then re-saves it.

In this case, we wanted to add a custom text header to a number of views, as well as switch the pager type from the default “mini” type to “full”. This required some thorough digging into the Views API and related code, to determine how to adjust the “handlers” programmatically.

This helper function lives in the example.profile code itself, and is called via a new installation task wrapper function, which passes in the view IDs that need to be altered. Here again, we can write trivial hook_update() implementations that call this same wrapper function to update existing site instances.

/**
 * Helper to update views config to add header and set pager.
 */
function _settlement_install_activity_view_header($view_id) {
  # First grab the view and handler types.
  $view = Views::getView($view_id);
  $types = $view->getHandlerTypes();

  # Get the header handlers, and add our new one.
  $headers = $view->getHandlers('header', 'default');

  $custom_header = array(
    'id' => 'area_text_custom',
    'table' => 'views',
    'field' => 'area_text_custom',
    'relationship' => 'none',
    'group_type' => 'group',
    'admin_label' => '',
    'empty' => '1',
    'content' => '<h2>Latest Activity</h2>',
    'plugin_id' => 'text_custom',
    'weight' => -1,
  );
  array_unshift($headers, $custom_header);

  # Add the list of headers back in the right order.
  $view->displayHandlers->get('default')->setOption($types['header']['plural'], $headers);

  # Set the pager type to 'full'.
  $pager = $view->getDisplay()->getOption('pager');
  $pager['type'] = 'full';
  $view->display_handler->setOption('pager', $pager);

  $view->save();
}

Of particular note here is the ordering of the Header components on the views. There was an existing Header on most of the views, and the new “Latest Activity” one needed to appear above the existing one. Initially I had tried creating the new custom element and calling ViewExecutable::setHandler method instead of the more complicated $view->displayHandlers->get('default')->setOption() construction, which would work, but consistently added the components in the wrong order. I finally found that I had to pull out a full array of handlers using getHandlers(), then array_unshift() the new component onto the front of the array, then put the whole array back in the configuration, to set the order correctly.
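With the helper in place, the corresponding update hooks stay trivial; a hypothetical one (the function name and view IDs here are invented for illustration) would just loop over the affected views:

```php
/**
 * Add the custom header and full pager to existing activity views.
 */
function example_update_8002() {
  foreach (['activity_stream', 'community_activities'] as $view_id) {
    _settlement_install_activity_view_header($view_id);
  }
}
```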

Re-customize custom block from upstream profile

In most cases we’ve been able to use Simple Block module to provide “custom” blocks as configuration, rather than the core “custom” block types, which are treated as content. However, in one case we inherited a custom block type that had relevant fields like an image and call-to-action links and text.

Here again, the upstream OpenSocial modules create and install the block configs, and we didn’t want to fork/override the entire module just to make a small adjustment to the images and text/links. I came up with the following code block to effectively alter the block later in the installation process:

First, the helper function (called from the hook_install() of a late-stage feature module in our sub-profile), sets up the basic data elements needed, in order to make it easy to adjust the details later (and re-call this helper in a hook_update(), for example):

function _example_update_an_homepage_block() {

  ## Edit $data array elements to update in future ##

  $data = array();
  $data['filename'] = 'bkgd-banner--front.png'; # Lives in the images/ folder of example module
  $data['textblock'] = '<p>Example.org is a community of practice site.</p>
<p>Sign up now to learn, share, connect and collaborate with leaders and those in related fields.</p>';
  $data['cta1'] = array(
    'url' => '/user/register',
    'text' => 'Get Started',
  );
  $data['cta2'] = array(
    'url' => '/about',
    'text' => 'More about the Community',
  );

  ## DO NOT EDIT BELOW THIS LINE! ##
  ##################################

The rest of the function does the heavy lifting:

  # This code cobbled together from `social_core.install` and
  # `social_demo/src/DemoSystem.php`.
  // This uuid can be used like this since it's defined
  // in the code as well (@see social_core.install).
  $block = \Drupal::entityTypeManager()->getStorage('block_content')->loadByProperties(['uuid' => '8bb9d4bb-f182-4afc-b138-8a4b802824e4']);
  $block = current($block);

  if ($block instanceof \Drupal\block_content\Entity\BlockContent) {
    # Setup the image file.
    $fid = _example_setup_an_homepage_image($data['filename']);

    $block->field_text_block = [
      'value' => $data['textblock'],
      'format' => 'full_html',
    ];

    // Insert image file in the hero image field.
    $block_image = [
      'target_id' => $fid,
      'alt' => "Anonymous front page image homepage",
    ];
    $block->field_hero_image = $block_image;

    // Set the links.
    $action_links = [
      [
        'uri' => 'internal:' . $data['cta1']['url'],
        'title' => $data['cta1']['text'],
      ],
      [
        'uri' => 'internal:' . $data['cta2']['url'],
        'title' => $data['cta2']['text'],
      ],
    ];

    $itemList = new \Drupal\Core\Field\FieldItemList($block->field_call_to_action_link->getFieldDefinition());
    $itemList->setValue($action_links);
    $block->field_call_to_action_link = $itemList;

    $block->save();
  }
}


The image helper function prepares the image field:

function _example_setup_an_homepage_image($filename) {

  // TODO: use a better image from the theme.
  // Block image.
  $path = drupal_get_path('module', 'example');
  $image_path = $path . DIRECTORY_SEPARATOR . 'images' . DIRECTORY_SEPARATOR . $filename;
  $uri = file_unmanaged_copy($image_path, 'public://' . $filename, FILE_EXISTS_REPLACE);

  $media = \Drupal\file\Entity\File::create([
    'langcode' => 'en',
    'uid' => 1,
    'status' => 1,
    'uri' => $uri,
  ]);
  $media->save();

  $fid = $media->id();

  // Apply image cropping.
  $data = [
    'x' => 600,
    'y' => 245,
    'width' => 1200,
    'height' => 490,
  ];
  $crop_type = \Drupal::entityTypeManager()
    ->getStorage('crop_type')
    ->load('hero'); // Crop type machine name assumed; adjust to your config.
  if (!empty($crop_type) && $crop_type instanceof CropType) {
    $image_widget_crop_manager = \Drupal::service('image_widget_crop.manager');
    $image_widget_crop_manager->applyCrop($data, [
      'file-uri' => $uri,
      'file-id' => $fid,
    ], $crop_type);
  }

  return $fid;
}


As with most things I’ve encountered with Drupal 8 so far, the Update system is both familiar and new in certain respects. Hopefully these concrete examples are instructive to understand how to adapt older techniques to the new way of managing install and update tasks.

The article Drupal 8 hook_update() Tricks first appeared on the Consensus Enterprises blog.


Sep 24 2019

4 minute read Published: 24 Sep, 2019 Author: Colan Schwartz
Drupal Planet , Aegir , DevOps

Aegir is often seen as a stand-alone application lifecycle management (ALM) system for hosting and managing Drupal sites. In the enterprise context, however, it’s necessary to provide multiple deployment environments for quality assurance (QA), development or other purposes. Aegir makes this trivial: sites can be copied from one environment to another in a point-and-click fashion from the Web front-end, eliminating the need for command-line DevOps tasks.

Setting up the environments

An Aegir instance needs to be installed in each environment. We would typically have three (3) of them:

  • Development (Dev): While generally reserved for integration testing, it is sometimes also used for development (e.g. when local environments cannot be used by developers or there are a small number of them).
  • Staging: Used for QA purposes. Designed to be a virtual clone of Production to ensure that tagged releases operate the same way as they would there, before being made live.
  • Production (Prod): The live environment visible to the public or the target audience, and the authoritative source for data.

(While outside the scope of this article, local development environments can be set up as well. See Try Aegir now with the new Dev VM for details.)

To install Aegir in each of these, follow the installation instructions. For larger deployments, common architectures for Staging and Prod would include features such as:

  • Separate Web and database servers
  • Multiple Web and database servers
  • Load balancers
  • Caching/HTTPS proxies
  • Separate partitions for (external) storage of:
    • The Aegir file system (/var/aegir)
    • Site backups (/var/aegir/backups)
    • Database storage (/var/lib/mysql)
  • etc.

As these are all out of scope for the purposes of this article, I’ll save these discussions for the future, and assume we’re working with default installations.

Allowing the environments to communicate

To enable inter-environment communication, we must perform the following series of tasks on each Aegir VM as part of the initial set-up, which only needs to be done once.

Back-end set-up

The back-ends of each instance must be able to communicate. For that we use the secure SSH protocol. As stated on Wikipedia:

SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall to a virtual machine.

Steps to enable SSH communication:

  1. SSH into the VM.
    • ssh ENVIRONMENT.aegir.example.com
  2. Become the Aegir user.
    • sudo -sHu aegir
  3. Generate an SSH key. (If you’ve done this already to access a private Git repository, you can skip this step.)
    • ssh-keygen -t rsa -b 4096 -C "ORGANIZATION Aegir ENVIRONMENT"
  4. For every other environment from where you’d like to fetch sites:
    1. Add the generated public key (~/.ssh/id_rsa.pub) to the whitelist for the Aegir user on the other VM so that the original instance can connect to this target.
      • ssh OTHER_ENVIRONMENT.aegir.example.com
      • sudo -sHu aegir
      • vi ~/.ssh/authorized_keys
      • exit
    2. Back on the original VM, allow connections to the target VM.
      • sudo -sHu aegir
      • ssh OTHER_ENVIRONMENT.aegir.example.com
      • Answer affirmatively when asked to confirm the host (after verifying the fingerprint, etc.).
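Put together, the key-exchange steps above can be sketched non-interactively as follows. The comment string, local file paths, and empty passphrase are illustrative assumptions; on real VMs you would run this as the aegir user and edit ~/.ssh/authorized_keys on the target.

```shell
# Generate the key pair (empty passphrase and a local path for illustration).
ssh-keygen -t rsa -b 4096 -C "Example Org Aegir dev" -N "" -f ./aegir_dev_key

# On the target VM, the public key is appended to the aegir user's
# whitelist; a local file stands in for ~/.ssh/authorized_keys here.
cat ./aegir_dev_key.pub >> ./authorized_keys
grep "Example Org Aegir dev" ./authorized_keys
```

The first interactive ssh to the target then only needs to confirm the host fingerprint.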

Front-end set-up

These steps will tell Aegir about the other Aegir servers whose sites can be imported.

  1. On Aegir’s front-end Web UI, the “hostmaster” site, enable remote site imports by navigating to Administration » Hosting » Advanced, and check the Remote import box. Save the form. (This enables the Aegir Hosting Remote Import module.)
  2. For every other server you’d like to add, do the following:
    1. Navigate to the Servers tab, and click on the Add server link.
    2. For the Server hostname, enter the hostname of the other Aegir server (e.g. staging.aegir.example.com)
    3. Click the Remote import vertical tab, check Remote hostmaster, and then enter aegir for the Remote user.
    4. For the Human-readable name, you can enter something like Foo's Staging Aegir (assuming the Staging instance).
    5. You can generally ignore the IP addresses section.
    6. Hit the Save button.
    7. Wait for the server verification to complete successfully.

All of the one-time command-line tasks are now done. You or your users can now use the Web UI to shuffle site data between environments.

Select remote site to import

Deploying sites from one environment to another

Whenever necessary, this point-and-click process can be used to deploy sites from one Aegir environment to another. It’s actually a pull method as the destination Aegir instance imports a site from the source.

Reasons to do this include:

  • The initial deployment of a development site from Dev to Prod.
  • Refreshing Dev and Staging sites from Prod.


  1. If you’d like to install the site onto a new platform that’s not yet available, create the platform first.
  2. Navigate to the Servers tab.
  3. Click on the server hosting the site you’d like to import.
  4. Click on the Import remote sites link.
  5. Follow the prompts.
  6. Wait for the batch job, Import and Verify tasks to complete.
  7. Enable the imported site by hitting the Run button on the Enable task.
  8. The imported site is now ready for use!

The article Aegir DevOps: Deployment Workflows for Drupal Sites first appeared on the Consensus Enterprises blog.


Sep 09 2019

3 minute read Published: 9 Sep, 2019 Author: Colan Schwartz
Drupal Planet , Aegir , DevOps

Have you been looking for a self-hosted solution for hosting and managing Drupal sites? Would you like to be able to upgrade all of your sites at once with a single button click? Are you tired of dealing with all of the proprietary Drupal hosting providers that won’t let you customize your set-up? Wouldn’t it be nice if all of your sites had free automatically-updating HTTPS certificates? You probably know that Aegir can do all of this, but it’s now trivial to set up a temporary trial instance to see how it works.

The new Aegir Development VM makes this possible.


Throughout Aegir’s history, we’ve had several projects striving to achieve the same goal. They’re listed in the Contributed Projects section of the documentation.

Aegir Up

Aegir Up was based on a VirtualBox virtual machine (VM), managed by Vagrant and provisioned with Puppet. It was superseded by Valkyrie (see below).

Aegir Development Environment

Aegir Development Environment took a completely different approach using Docker. It assembles all of the services (each one in a container, e.g. the MySQL database) into a system managed by Docker Compose. While this is a novel approach, it’s not necessary to have multiple containers to get a basic Aegir instance up and running.

Valkyrie

Valkyrie was similar to Aegir Up, but provisioning moved from Puppet to Ansible. Valkyrie also made extensive use of custom Drush commands to simplify development.

Its focus was more on developing Drupal sites than on developing Aegir. Now that we have Lando, it’s no longer necessary to include this type of functionality.

It was superseded by the now current Aegir Development VM.

Aegir Development VM

Like Valkyrie, the Aegir Development VM is based on a VirtualBox VM (but that’s not the only option; see below) managed with Vagrant and provisioned with Ansible. However, it doesn’t rely on custom Drush commands.


Customizable configuration

The Aegir Development VM configuration is very easy to customize as Ansible variables are used throughout.

For example, if you’d like to use Nginx instead of Apache, simply replace:

    aegir_http_service_type: apache

with:

    aegir_http_service_type: nginx

…or override using the command line.
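For scripted provisioning, the same switch can be sketched against a local variables file. The vars.yml filename is an assumption for illustration; at provision time the override is typically passed with ansible-playbook's -e/--extra-vars flag instead.

```shell
# Write the default, then flip it to Nginx (vars.yml is an assumed filename).
printf 'aegir_http_service_type: apache\n' > vars.yml
sed -i 's/aegir_http_service_type: apache/aegir_http_service_type: nginx/' vars.yml
cat vars.yml
```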

You can also install and enable additional Aegir modules from the available set.

Support for remote VMs

For those folks with older hardware who are unable to spare extra gigabytes (GB) for VMs, it’s possible to set up the VM remotely.

While the default amount of RAM necessary is 1 GB, 2 GB would be better for any serious work, and 4 GB is necessary if creating platforms directly from Packagist.

Support for DigitalOcean is included, but other IaaS providers (e.g. OpenStack) can be added later. Patches welcome!

Fully qualified domain name (FQDN) not required

While Aegir can quickly be installed with a small number of commands in the Quick Start Guide, that process requires an FQDN, usually something like aegir.example.com (which requires global DNS configuration). That is not the case with the Dev VM, which assumes aegir.local by default.
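Without global DNS, one way to make aegir.local resolvable from your workstation is a hosts-file entry. A minimal sketch follows; the IP address is an example, and the edit is shown against a copy so it doesn't touch the real /etc/hosts.

```shell
# Append a hosts entry mapping aegir.local to the VM's address
# (192.168.56.10 is an example; use the IP your VM reports).
cp /etc/hosts ./hosts.example
printf '192.168.56.10 aegir.local\n' >> ./hosts.example
grep aegir.local ./hosts.example
```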

Simplified development

You can use it for Aegir development as well as trying Aegir!

Unlike the default set-up provisioned by the Quick Start Guide, which would require additional configuration, the individual components (e.g. Hosting, Provision, etc.) are cloned repositories, making it easy to create patches (and, for module maintainers, to push changes upstream).


We’ve recently updated the project so that an up-to-date VM is being used, and it’s now ready for general use. Please go ahead and try it.

If you run into any problems, feel free to create issues on the issue board and/or submit merge requests.

The article Try Aegir now with the new Dev VM first appeared on the Consensus Enterprises blog.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web