Aug 12 2019

Sean is a strong believer in the open source community at large and in the idea that working collaboratively is the best way to create awesome projects. His community work includes maintaining and building the BADCamp website, as well as helping to maintain Docksal, a tool for managing development environments.

Aug 05 2019

I catch up with Brendan Blaine, a developer for the Drupal Association, to find out what it takes to run events.drupal.org, why the conferences run so smoothly, and why you should always remember to use a coaster.

It's really exciting to be a part of the team that puts on DrupalCon.

Jul 22 2019

Can't make a Drupal camp? Kevin Thull has you covered! Kevin donates his time recording sessions at most North American Drupal camps. I find out why, and what food to bribe him with.

Jul 18 2019

Mike and Matt gather a fleet of Lullabots to talk the ins and outs of continuous integration (CI) in 2019.


...the efficiency gains that you get in day to day development from a finely tuned continuous integration setup outweighs the downsides of downtime, or even maintenance, of something like Jenkins... — Andrew Berry

This Episode's Guests

Andrew Berry


Andrew Berry is an architect and developer who works at the intersection of business and technology.

Sally Young


Senior Technical Architect working across the full stack and specialising in decoupled architectures. Core JavaScript maintainer for Drupal, as well as leading the JavaScript Modernization Initiative.

James Sansbury


An experienced Drupal developer and architect himself, James manages Lullabot's back-end development team. He lives near Atlanta, GA and enjoys fishing, hiking, and camping.

Jul 17 2019

The Drupal community maintains its own set of evergreen coding standards that differ from those of the broader PHP community (e.g., PSR-2). You're encouraged to pore over the standards line by line and memorize each one for perfect real-time compliance, but for those with better things to do, fear not! The standards will come to you. When starting new projects, just add a Composer dependency on vijaycs85/drupal-quality-checker. This tool uses GrumPHP to add a pre-commit hook that checks new code for violations of the Drupal coding standards and outputs helpful information to aid the developer in resolving each one.

What is GrumPHP?

GrumPHP is a clever tool that automatically adds git pre-commit and commit-msg hooks to your local environment to check coding standards before committing. Since git hooks depend on your local git repository configuration and are not part of the codebase, GrumPHP uses Composer to modify the .git/hooks/pre-commit file on your disk.
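
Under the hood, GrumPHP reads its task list from a grumphp.yml file in the project root. The drupal-quality-checker package ships its own configuration, but a minimal, illustrative sketch (option names reflect GrumPHP releases current at the time of writing) looks something like this:

parameters:
  git_dir: .
  bin_dir: vendor/bin
  tasks:
    phpcs:
      standard: Drupal
      triggered_by: [php, module, inc, install, theme]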

Let’s review some of the “gotchas” you might run into.

Grrr, I can’t commit my WIP code because it violates Drupal coding standards!

When warranted, use the --no-verify (-n) flag to skip the checks.

git commit -n

But don’t forget to go back and fix them later! Use the --amend flag to “re-do” the commit (before pushing) to trigger GrumPHP to check for errors.

git commit --amend

I can't commit commented-out code, seriously?

Generally speaking, don't commit commented-out code. Instead, rely on the powerful mechanisms provided by git for pulling lines from the VCS history. "What about new code that I have yet to commit but want to keep around for later?" Well, go ahead and commit it (with --no-verify, to skip the checks), then revert the commit (git revert). And voila, now it's available to you in the VCS history.
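
For instance, a rough sketch of that workflow (the commit message is just a placeholder):

# Commit the snippet, skipping the GrumPHP checks.
git commit --no-verify -m "WIP: keep experimental snippet around"

# Immediately revert it so the working tree stays clean; the snippet
# remains reachable in the history.
git revert --no-edit HEAD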

Sometimes, though, this is just not practical or desirable, and there remain legitimate cases for committing commented-out code. What are your options?

  1. Use @code ... @endcode tags in docblock comments. When adding code samples to function, file, or class docblocks, using the @code ... @endcode tags prevents PHPCS from throwing a warning (see the example after this list).
  2. Prevent coding standards checks on specific lines, code blocks, or entire files. See the next section for details.
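
Here is a brief, illustrative docblock using the @code ... @endcode tags (the class and method are just the examples from this post):

/**
 * Demonstrates a code sample inside a docblock.
 *
 * @code
 * $honeyBadger = new HoneyBadger();
 * $honeyBadger->cares();
 * @endcode
 */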

But really, deeply, truly, the standards are wrong here, I swear!

If you disagree with or don’t want to follow the standards for whatever reason, here’s how you can whitelist specific lines and files.

Whitelist a single line:

<?php
// @codingStandardsIgnoreLine
$honeyBadger = new HoneyBadger();


<?php
$honeyBadger = new HoneyBadger(); // @codingStandardsIgnoreLine

Whitelist a code block:

<?php
// @codingStandardsIgnoreStart
$honeyBadger = new HoneyBadger();
if ($honeyBadger->cares()) throw new Exception();
// @codingStandardsIgnoreEnd

Whitelist an entire file:

<?php
// @codingStandardsIgnoreFile
class HoneyBadger {
  public function cares() { return FALSE; }
}

OMG, so many errors

If you write a lot of code/make many changes before committing, you may end up with a miles-long list of violations to fix—a real pain when you thought you were done and ready to push! So rather than depending on this git pre-commit check, make your life easier by taking care of the issues as you go. And no, you don’t need to memorize all the Drupal coding standards by heart; there are plugins for every major code editor to help you out. They provide real-time, inline feedback as soon as you write code that doesn’t pass muster; much easier than bouncing back and forth between the error list in your git client and the code, hunting for files and line numbers. Furthermore, your editor can generate the full list of violations and link you straight to the offending lines.

To get started, follow the steps in this Drupal.org documentation page: Installing Coder Sniffer. When the documentation talks about installing PHPCS and the Drupal standards globally, you can instead point it to your project-local version, if you wish:

  • PHP_CodeSniffer command: /path/to/repo/vendor/bin/phpcs
  • Drupal coding standard “sniffs”: /path/to/repo/vendor/drupal/coder/coder_sniffer/Drupal
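
For example, you can register the project-local sniffs with PHPCS so that --standard=Drupal works on the command line and in your editor (paths are illustrative):

/path/to/repo/vendor/bin/phpcs --config-set installed_paths /path/to/repo/vendor/drupal/coder/coder_sniffer
/path/to/repo/vendor/bin/phpcs -i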

GrumPHP doesn't do anything?

If GrumPHP shows all green checkmarks when you know there are violations, you may have an issue with the configuration, or your project may not be compatible with GrumPHP. For example, certain project structure hierarchies are not yet supported. Good luck with that! Just kidding, here’s some help.

Furthermore, note that the GrumPHP pre-commit script only checks your added/changed files for violations, not all the code in your project. Generally, this is a good thing, as you don’t want to fix unrelated code in your pull request. But if you do want to run a complete check of all the code in the repository or a specific directory, you can use your code editor to generate a report or run one of these command-line tools:

php path/to/repo/vendor/bin/grumphp run

If you need to customize the list of files to include or ignore:

path/to/repo/vendor/bin/phpcs \
  --standard=Drupal           \
  --ignore=*.css              \
  web/modules/custom          \
  web/themes

Extra Credit

The tools described so far help prevent accidentally committing code that violates Drupal standards, but they don’t prevent it entirely. To do that, you’ll want to add a test to your Continuous Integration (CI) workflow that flags pull requests containing code that violates a standard.

CI tooling is outside the scope of this post, but lots of good resources exist. Check out this excellent blog post to help you get started: Continuous Integration for Drupal 8 with CircleCI.

Conclusion

Are you ready to get started? It’s as simple as running this command:

composer require vijaycs85/drupal-quality-checker

There’s also a newer kid on the block: drupol/drupal-conventions. I can’t vouch for it as I’ve not yet tried it on a project, but it looks promising and includes a “fixer” config to automatically fix small/unambiguous issues for you.

composer require drupol/drupal-conventions

I hope this has been a helpful post to all. If you have suggestions, improvements, or corrections, please comment below or @ me on drupal.org, thanks!

Jul 08 2019

Having worked the last five years as a Drupal freelancer and been involved in the Drupal community as both a Sprint Mentor and a member of the Drupal Community Working Group, Rachel now dedicates all of her time to her new role as the Drupal Association’s Community Liaison.

Jul 03 2019

The information that follows is not meant to diagnose or treat specific medical issues regarding the hands but rather is a list of habits that have kept me healthy and able to continue doing the things I love with either greatly reduced or no pain. I am not a doctor. Proceed with caution and common sense.

I grew up playing video games. Many, many hours were spent leveling up, collecting every collectible, wringing every last hour I could out of each game I got ahold of. When I got to high school, I started getting interested in music. First was the drums, then piano, and finally the guitar. My first career was as a guitar teacher. I would play guitar and teach throughout the day, then unwind by playing video games in the evening. Then I started learning to build websites and write code. I eventually shifted careers into web development, got back to playing guitar strictly for fun, and continued to play video games when I made the time. This is when things started to fall apart.

With the exception of time spent walking, running, or sleeping, I was using my hands all day long. Even while watching television, I’d have my guitar in my hands running exercises over and over. Without giving myself regular breaks, my hands began hurting and it became difficult to continue doing the things I wanted and needed to do in my life. I saw a hand specialist who suggested I either wear braces all the time or stop doing certain activities. Neither of these was really an option. I needed to figure out how to continue to do the things I enjoyed doing without hurting myself.

Below are 4 habits I’ve formed in the years since my pain was at its worst that have allowed me to work a full-time computer job, practice guitar daily for upwards of two hours, and play video games as long as I’d like with little or no pain.

Habit #1: Stretch

I’ll start with the habit that has brought me the most benefit, which is stretching. I begin every single day, even non-working days, by going through a specific series of stretches that takes about 5 minutes to do. After seeing that doctor, buying books on hand problems, reading many articles, and wearing hand braces, I stumbled upon a video on YouTube from a man who’d had even more serious hand pain than I did. His short video details his experience and demonstrates the series of stretches that were recommended to him by a physical therapist. To his series of stretches, I’ve added a standing side bend because it just feels good.

I can’t stress enough just how important this routine has been to my hand health. I strongly suggest you give it a go.

Habit #2: Move

The advent of activity trackers and the proliferation of standing desks reveal that keeping your body moving throughout the day is becoming a priority for many people. My doctor suggested I stop doing the activities that were causing me pain. Instead, I’ve made an effort to do those activities for shorter stretches of time. If you’re working at a computer for 8 hours a day, that’s fine so long as you take breaks frequently and change things up just as often.

For me this means using my laptop with an external monitor at my desk, both sitting and standing, or sitting on a cushion on the floor with the laptop on a low stool, or using the laptop on the kitchen table. Variety is an important part of this, not the individual tools. I personally don’t like external keyboards and mice. The way my Mac is laid out is the most comfortable for me. But I can still become fatigued if I stay long enough in the same position. My watch reminds me to move each hour if I haven’t. A calendar alert or Pomodoro timer would work just as well. Don’t get too mired in the numbers or any kind of “system” if you don’t need to. Just move!

Habit #3: Move More

Habit #2 is all about changing up how you’re working throughout the day, but it’s important to take breaks from working entirely to give your hands a break and your body some exercise. This can take many forms. My go-tos are taking walks and meditating.

Walking is just plain wonderful. It gets all your limbs moving, it’s basically stress-free on your body, gets the heart rate up, and gets your mind working. I can’t count the number of times I’ve been banging my head against a problem for hours only to have the answer pop into my head once I was out walking. There’s something about stepping away from a problem for a while that allows one’s mind to process it more clearly. Even when I don’t come up with the actual solution to a problem, I invariably come up with at least something new to try.

Meditating is also just plain wonderful. Have a seat, close your eyes, focus your attention on your breathing, and try to relax tense parts of your face and body. Unlike walking, the goal here isn’t necessarily to have eureka moments. Quite the opposite; it’s to allow your mind to rest, improve your patience and focus, and allow you to think more clearly when you need to. The topic of meditation is worthy of an article of its own, but it’s easy enough to try it out now. You’ll be surprised at how much more productive (and happier) you can be after just a few minutes of “doing nothing.”

Habit #4: Strengthen

If you went through the stretches detailed in the video above you may have noticed that many of them stretch large muscles in the forearms, arms, and shoulders rather than the small muscles of the wrists and hands. You can think of the body as being structured much like a pyramid. Our large bones and muscles support successively smaller ones as our limbs extend out from the torso. When we perform fine motions with our fingers and hands repeatedly, we put undue stress on muscles that aren’t strong enough to handle them. The solution to this problem is to strengthen the larger muscles that support our hands going up our arms to provide a sturdier base for these more intricate motions.

Include exercises in your weekly routine that target your upper body, like pull-ups, push-ups, and kettlebell swings. It isn’t important to be able to do large numbers of repetitions, especially if you’re new to upper body workouts. Focus instead on getting the motion right and not doing too much. A little strength training goes a long way and for our purposes we’re looking only to build muscle, not to become a bodybuilder or fundamentally change our overall body composition.

Conclusion

Serious hand problems are serious and you should seek the help of a professional hand doctor and/or physical therapist to diagnose and treat your specific issue. But if you haven’t quite developed a serious condition and want to practice habits that will allow you to live a life free of hand pain, I highly recommend you give the ideas above a try. They cost only your time and time spent working to prevent injury and pain is always time well spent.

Jun 19 2019

Over the past few months, while working on migrations to Drupal 8, researching best practices, and contributing to core and contributed modules, I discovered that there are several tools available in core and contrib, plus a myriad of how-to articles. To save you the trouble of pulling it all together yourself, I offer a comprehensive overview of the Migrate module plus a few other contributed modules that complement it when migrating a site to Drupal 8.

Let's begin with the most basic element: migration files.

Migration files

In Drupal 8, the Migrate module splits a migration into a set of YAML files, each of which is composed of a source (like the node table in a Drupal 7 database), a process (the field mapping and processing), and a destination (like a node entity in Drupal 8). Here is a subset of the migration files present in core:

$ find core -name *.yml | grep migrations
core/modules/statistics/migrations/statistics_settings.yml
core/modules/statistics/migrations/statistics_node_counter.yml
core/modules/shortcut/migrations/d7_shortcut_set.yml
core/modules/shortcut/migrations/d7_shortcut.yml
core/modules/shortcut/migrations/d7_shortcut_set_users.yml
core/modules/tracker/migrations/d7_tracker_node.yml
core/modules/tracker/migrations/d7_tracker_settings.yml
core/modules/tracker/migrations/d7_tracker_user.yml
core/modules/path/migrations/d7_url_alias.yml
...

And below is the node migration, located at core/modules/node/migrations/d7_node.yml:

id: d7_node
label: Nodes
audit: true
migration_tags:
  - Drupal 7
  - Content
deriver: Drupal\node\Plugin\migrate\D7NodeDeriver
source:
  plugin: d7_node
process:
  nid: tnid
  vid: vid
  langcode:
    plugin: default_value
    source: language
    default_value: "und"
  title: title
  uid: node_uid
  status: status
  created: created
  changed: changed
  promote: promote
  sticky: sticky
  revision_uid: revision_uid
  revision_log: log
  revision_timestamp: timestamp
destination:
  plugin: entity:node
migration_dependencies:
  required:
    - d7_user
    - d7_node_type
  optional:
    - d7_field_instance
    - d7_comment_field_instance

Migration files can be generated dynamically via a deriver, as the node migration above does: it uses D7NodeDeriver to generate a migration for each content type’s data and revision tables. On top of that, migrations can be classified via the migration_tags section (the migration above has the Drupal 7 and Content tags).

Configuration vs content migrations

Migration files may have one or more tags to classify them. These tags are used for running them in groups. Some migration files may have the Configuration tag—like the node type or field migrations—while others might have the Content tag—like the node or user migrations. Usually, you would run the Configuration migrations first in order to configure the new site, and then the Content ones so the content gets fetched, transformed, and inserted on top of such configuration.

Notice though that depending on how much you are planning to change the content model, you may decide to configure the new site manually and write the Content migrations by hand. In the next section, we will examine the differences between generating and writing migrations.

Generating vs Writing migrations

Migration files living within core, contributed, and custom modules are static files that need to be read and imported into the database as migration plugins so they can be executed. This process, depending on the project needs, can be implemented in two different ways:

Using Migrate Upgrade

Migrate Upgrade module implements a Drush command to automatically generate migrations for all the configuration and content in an existing Drupal 6 or 7 site. The best thing about this approach is that you don’t have to manually create the new content model in the new site since Migrate Upgrade will inspect the source database and do it for you by generating the migrations.

If the existing content model won’t need to go through major changes during the migration, then Migrate Upgrade is a great choice to generate migrations. There is an API that developers can interact with in order to alter migrations and the data being processed. We will see a few examples further down in this article.

Writing migrations by hand

If the content model will go through a deep reorganization such as merging content from different sources into one, reorganizing fields, and changing machine names, then configuring the new site manually and writing content migrations may be the best option. In this scenario, you would write the migration files directly to the config/sync directory so that they can be imported via drush config:import and executed via drush migrate:import.

Notice that if the content model has many entity types, bundles, and fields, this can be a tough job, so even if the team decides to go this route, generating content migrations with Migrate Upgrade can be useful since the resulting migrations can serve as templates for the ones to be written.

Setting up the new site for running migrations

Assuming that we have a new site created using the Composer Drupal Project and we have run the installer, we need to require and install the following modules:

$ composer require drupal/migrate_tools drupal/migrate_upgrade drupal/migrate_plus
$ drush pm:enable --yes migrate_tools,migrate_upgrade,migrate_plus

Next, we need to add a database connection to the old site, which we would do at web/sites/default/settings.local.php:

// The default database connection details.
$databases['default']['default'] = [
  'database' => 'drupal8',
  'username' => 'root',
  'password' => 'root',
  'prefix' => '',
  'host' => '127.0.0.1',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
];

// The Drupal 7 database connection details.
$databases['drupal7']['default'] = [
  'database' => 'drupal7',
  'username' => 'root',
  'password' => 'root',
  'prefix' => '',
  'host' => '127.0.0.1',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
];

With the above setup, we can move on to the next step, where we will generate migrations.

Generating migrations with Migrate Upgrade

The following command will read all migration files, create migrate entities out of them and insert them into the database so they are ready to be executed:

$ drush --yes migrate:upgrade --legacy-db-key drupal7 --legacy-root sites/default/files --configure-only

It is a good practice to export the resulting migrate entities as configuration so we can track their changes. Therefore, we will export configuration after running the above command, which will create a list of files like the following:

$ drush --yes config:export
 [notice] Differences of the active config to the export directory:
+------------------------------------------------------------------+-----------+
| Config                                                           | Operation |
+------------------------------------------------------------------+-----------+
| migrate_plus.migration.upgrade_d7_date_formats                   | Create    |
| migrate_plus.migration.upgrade_d7_field                          | Create    |
| migrate_plus.migration.upgrade_d7_field_formatter_settings       | Create    |
| migrate_plus.migration.upgrade_d7_field_instance                 | Create    |
| migrate_plus.migration.upgrade_d7_field_instance_widget_settings | Create    |
| migrate_plus.migration.upgrade_d7_file                           | Create    |
| migrate_plus.migration.upgrade_d7_filter_format                  | Create    |
| migrate_plus.migration.upgrade_d7_filter_settings                | Create    |
| migrate_plus.migration.upgrade_d7_image_styles                   | Create    |
| migrate_plus.migration.upgrade_d7_menu                           | Create    |
| migrate_plus.migration.upgrade_d7_menu_links                     | Create    |
| migrate_plus.migration.upgrade_d7_node_article                   | Create    |
| migrate_plus.migration.upgrade_d7_node_revision_article          | Create    |
...

In the above list, we see a mix of Configuration and Content migrations being created. Now we can check the status of each migration via the migrate:status Drush command:

$ drush migrate:status
-----------------------------------------------------------------------------------------------------
 Migration ID                                 Status   Total   Imported   Unprocessed   Last Imported
-----------------------------------------------------------------------------------------------------
 upgrade_d7_date_formats                      Idle     7       0          0
 upgrade_d7_filter_settings                   Idle     1       0          0
 upgrade_d7_image_styles                      Idle     193     0          0
 upgrade_d7_node_settings                     Idle     1       0          0
 upgrade_d7_system_date                       Idle     1       0          0
 upgrade_d7_url_alias                         Idle     25981   0          0
 upgrade_system_site                          Idle     1       0          0
 upgrade_taxonomy_settings                    Idle     0       0          0
 upgrade_d7_path_redirect                     Idle     518     0          0
 upgrade_d7_field                             Idle     253     0          0
 upgrade_d7_field_collection_type             Idle     9       0          0
 upgrade_d7_node_type                         Idle     16      0          0
 upgrade_d7_taxonomy_vocabulary               Idle     34      0          0
 upgrade_d7_field_instance                    Idle     738     0          0
 upgrade_d7_view_modes                        Idle     24      0          0
 upgrade_d7_field_formatter_settings          Idle     1280    0          0
 upgrade_d7_field_instance_widget_settings    Idle     738     0          0
 upgrade_d7_file                              Idle     5731    0          0
 upgrade_d7_filter_format                     Idle     6       0          0
 upgrade_d7_menu                              Idle     5       0          0
 upgrade_d7_user_role                         Idle     6       0          0
 upgrade_d7_user                              Idle     82      0          0
 upgrade_d7_node_article                      Idle     322400  0          0
 upgrade_d7_node_page                         Idle     342     0          0
 upgrade_d7_menu_links                        Idle     623     0          0
 upgrade_d7_node_revision_article             Idle     742577  0          0
 upgrade_d7_node_revision_page                Idle     2122    0          0
 upgrade_d7_taxonomy_term_tags                Idle     1729    0          0
-------------------------------------------- -------- ------- ---------- ------------- ---------------

You should inspect any contributed modules in use on the old site and install them in the new one, as they may contain migrations. For example, if the old site uses the Redirect module, installing it and generating migrations as we did above should give you a new migration provided by that module, ready to go.
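
A rough sketch of that workflow (the flags match the earlier migrate:upgrade command):

$ composer require drupal/redirect
$ drush pm:enable --yes redirect
$ drush --yes migrate:upgrade --legacy-db-key drupal7 --legacy-root sites/default/files --configure-only
$ drush migrate:status | grep redirect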

Running migrations

Assuming that we decided to run migrations generated by Migrate Upgrade (otherwise skip the following sub-section), we would first run configuration migrations and then content ones.

Configuration migrations

Here is the command to run all migrations with the tag Configuration along with their dependencies.

$ drush migrate:import --tag=Configuration --execute-dependencies
 [notice] Processed 10 items (10 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_d7_date_formats'
 [notice] Processed 5 items (5 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_d7_filter_settings'
 [notice] Processed 20 items (20 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_d7_image_styles'
 [notice] Processed 10 items (10 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_d7_node_settings'
 [notice] Processed 1 items (1 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_d7_system_date'
 [notice] Processed 5 items (5 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_system_site'
 [notice] Processed 100 items (100 created, 0 updated, 0 failed, 0 ignored) - done with 'upgrade_d7_field'
 [notice] Processed 200 items (200 created, 0 updated, 0 failed, 379 ignored) - done with 'upgrade_d7_field_instance'
...

The above command will migrate site configuration, content types, taxonomy vocabularies, view modes, and such. Once this is done, it is recommended to export the resulting configuration via drush config:export and commit the changes. From then on, if we make changes in the old site’s configuration or we alter the Configuration migrations (we will see how further down), we will need to roll back the affected migrations and run them again.

For example, the Media Migration module creates media fields and alters field mappings in content migrations, so after installing it we should run the following commands to roll back and re-run the affected field migrations:

$ drush --yes migrate:rollback upgrade_d7_view_modes,upgrade_d7_field_instance_widget_settings,upgrade_d7_field_formatter_settings,upgrade_d7_field_instance,upgrade_d7_field
$ drush --yes migrate:import --execute-dependencies upgrade_d7_field,upgrade_d7_field_instance,upgrade_d7_field_formatter_settings,upgrade_d7_field_instance_widget_settings,upgrade_d7_view_modes
$ drush --yes config:export

Once we have executed all Configuration migrations, we can run content ones.

Content migrations

Content migrations are straightforward. We can run them with the following command:

$ drush migrate:import --tag=Content --execute-dependencies

Logging and tracking

Migrate keeps track of all the migrated content via the migrate_* tables. If you check out the new database after running migrations, you will see something like this:

  • A set of migrate_map* tables, storing the old and new identifiers of each migrated entity. These tables are used by the migrate:rollback Drush command to roll back migrated data.
  • A set of migrate_messages* tables, which hold errors and warnings that occurred while running migrations. These can be seen via the migrate:messages Drush command.
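
For example, to inspect the messages recorded for a specific migration (using a migration ID from the status output above):

$ drush migrate:messages upgrade_d7_node_article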

Rolling back migrations

In the previous section, we rolled back the field migrations before running them again. This process is great for reverting imported Configuration or Content, which you will do often while developing a migration.

Here is an example. Let’s suppose that you have executed content migrations but articles did not get migrated as you expected. The process you would follow to fix them would be:

  1. Find the migration name via drush migrate:status | grep article.
  2. Roll back migrations with drush migrate:rollback upgrade_d7_node_revision_article,upgrade_d7_node_article.
  3. Perform the changes that you need either directly at the exported migration at config/sync or by altering them and then recreating them with migrate:upgrade like we did at Generating migrations with Migrate Upgrade. We will see how to alter migrations in the next section.
  4. Run the migrations again with drush migrate:import upgrade_d7_node_article,upgrade_d7_node_revision_article.
  5. Verify the changes and, if needed, repeat steps 2 to 4 until you are done.

Migrate events and hooks

Before diving into the APIs to alter migrations, let’s clarify that there are two processes that we can hook into:

  1. The migrate:upgrade Drush command, which reads all migration files in core, contributed, and custom modules and imports them into the database.
  2. The migrate:import Drush command, which runs migrations.

In the next sub-sections, we will see how we can interact with these two commands.

Altering migrations (migrate:upgrade)

Drupal core offers hook_migration_plugins_alter(), which receives the array of available migrations that migrate:upgrade creates. Here is a sample implementation at mymodule.module where we delegate the logic to a service:

/**
 * Implements hook_migration_plugins_alter().
 */
function mymodule_migration_plugins_alter(array &$migrations) {
  $migration_alterer = \Drupal::service('mymodule.migrate.alterer');
  $migration_alterer->process($migrations);
}
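
For the hook above to work, the service it delegates to must be defined in the module’s services file. Here is a minimal sketch, assuming the module is named mymodule and the class lives in its src directory (both names are illustrative):

# mymodule.services.yml
services:
  mymodule.migrate.alterer:
    class: Drupal\mymodule\MigrationAlterer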

And here is a subset of the contents of the service:

class MigrationAlterer {

  /**
   * Processes migration plugins.
   *
   * @param array $migrations
   *   The array of migration plugins.
   */
  public function process(array &$migrations) {
    $this->skipMigrations($migrations);
    $this->disablePathautoAliasCreation($migrations);
    $this->setContentLangcode($migrations);
    $this->setModerationState($migrations);
    $this->persistUuid($migrations);
    $this->skipFileCopy($migrations);
    $this->alterRedirect($migrations);
  }

  /**
   * Skips unneeded migrations.
   *
   * @param array $migrations
   *   The array of migration plugins.
   */
  private function skipMigrations(array &$migrations) {
    // Skip unwanted migrations.
    $migrations_to_skip = [
      'd7_block',
      'd7_comment',
      'd7_comment_entity_display',
      'd7_comment_entity_form_display_subject',
      'd7_comment_field',
      'd7_comment_entity_form_display',
      'd7_comment_type',
      'd7_comment_field_instance',
      'd7_contact_settings',
    ];
    $migrations = array_filter($migrations, function ($migration) use ($migrations_to_skip) {
      return !in_array($migration['id'], $migrations_to_skip);
    });
  }

  // The remaining methods would go here.

}

In the next section, we will see how to alter the data being migrated while running migrations.

Altering data while running migrations (migrate:import)

Drupal core offers hook_migrate_prepare_row() and hook_migrate_MIGRATION_ID_prepare_row(), which are triggered before each row of data is processed by the migrate:import Drush command. Additionally, there is a set of events that we can subscribe to, such as before and after the migration starts or before and after a row is saved.

On top of the above, Migrate Plus module exposes an event that wraps hook_migrate_prepare_row(). Here is a sample subscriber for this event:

class MyModuleMigrationSubscriber implements EventSubscriberInterface {
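
  /**
   * {@inheritdoc}
   *
   * Wires migrate_plus's prepare-row event to the handler below; assumes a
   * use statement for \Drupal\migrate_plus\Event\MigrateEvents.
   */
  public static function getSubscribedEvents() {
    return [MigrateEvents::PREPARE_ROW => 'onPrepareRow'];
  }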

  /**
   * Prepare row event handler.
   *
   * @param \Drupal\migrate_plus\Event\MigratePrepareRowEvent $event
   *   The migrate row event.
   *
   * @throws \Drupal\migrate\MigrateSkipRowException
   *   If the row needs to be skipped.
   */
  public function onPrepareRow(MigratePrepareRowEvent $event) {
    $this->alterFieldMigrations($event);
    $this->skipMenuLinks($event);
    $this->setContentModeration($event);
  }

  /**
   * Alters field migrations.
   *
   * @param \Drupal\migrate_plus\Event\MigratePrepareRowEvent $event
   *   The migrate row event.
   *
   * @throws \Drupal\migrate\MigrateSkipRowException
   *   If a row needs to be skipped.
   * @throws \Exception
   *   If the source cannot be changed.
   */
  private function alterFieldMigrations(MigratePrepareRowEvent $event) {
    $field_migrations = [
      'upgrade_d7_field',
      'upgrade_d7_field_instance',
      'upgrade_d7_view_modes',
      'upgrade_d7_field_formatter_settings',
      'upgrade_d7_field_instance_widget_settings',
    ];

    if (in_array($event->getMigration()->getPluginId(), $field_migrations)) {
      // Here are calls to private methods that alter these migrations.
    }
  }

  /**
   * Skips menu links that are either implemented or not needed.
   *
   * @param \Drupal\migrate_plus\Event\MigratePrepareRowEvent $event
   *   The migrate row event.
   *
   * @throws \Drupal\migrate\MigrateSkipRowException
   *   If a row needs to be skipped.
   */
  private function skipMenuLinks(MigratePrepareRowEvent $event) {
    if ($event->getMigration()->getPluginId() != 'upgrade_d7_menu_links') {
      return;
    }

    $paths_to_skip = [
      'some/path',
      'other/path',
    ];

    $menu_link = $event->getRow()->getSourceProperty('link_path');
    if (in_array($menu_link, $paths_to_skip)) {
      throw new MigrateSkipRowException('Skipping menu link ' . $menu_link);
    }
  }

  /**
   * Sets the content moderation field on node migrations.
   *
   * @param \Drupal\migrate_plus\Event\MigratePrepareRowEvent $event
   *   The migrate row event.
   *
   * @throws \Exception
   *   If the source cannot be changed.
   */
  private function setContentModeration(MigratePrepareRowEvent $event) {
    $row = $event->getRow();
    $source = $event->getSource();

    if (('d7_node' == $source->getPluginId()) && isset($event->getMigration()->getProcess()['moderation_state'])) {
      $state = $row->getSourceProperty('status') ? 'published' : 'draft';
      $row->setSourceProperty('moderation_state', $state);
    }
    elseif (('d7_node_revision' == $source->getPluginId()) && isset($event->getMigration()->getProcess()['moderation_state'])) {
      $row->setSourceProperty('moderation_state', 'draft');
    }
  }

}
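
Finally, for the subscriber to be discovered it must be registered as a tagged service. Here is a minimal sketch, again with illustrative module, namespace, and service names:

# mymodule.services.yml
services:
  mymodule.migration_subscriber:
    class: Drupal\mymodule\EventSubscriber\MyModuleMigrationSubscriber
    tags:
      - { name: event_subscriber }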

Conclusion

When you are migrating a Drupal site to Drupal 8, the Migrate Upgrade module does a lot of the work for free by generating Configuration and Content migrations. Even if you decide to write any of them by hand, it is convenient to run the command and use the resulting migrations as a template for your own. In the next article, we will see how to work seamlessly on a migration while the rest of the team is building the new site.

Thanks to Andrew Berry, April Sides, Karen Stevenson, Marcos Cano, and Salvador Moreno for their feedback and help. Photo by Barth Bailey on Unsplash.

Jun 10 2019

Hussain began working with PHP in 2001, and at that time, wouldn't touch any CMS or framework and preferred to write his own. He grew tired of issues with PHP and was about to switch to another language when he came across a volunteer project that needed Drupal's capabilities, so in 2010 he tried Drupal 6.

Jun 05 2019

According to Drupal’s community documentation, “The Benevolent Dictator for Life (BDFL),” Dries Buytaert, is the “chief decision-maker for the [Drupal] project.” In practice, as Dries has pointed out, he wears “a lot of different hats: manager of people and projects, evangelist, fundraiser, sponsor, public speaker, and BDFL.” And he is chairman and chief technology officer of a company that has received $173,500,000 in funding. Recently I was wondering, how often does Dries wear his Drupal core code committer hat?

In this article, I will use data from the Drupal Git commit history, as well as other sources, to demonstrate how dramatically the Drupal core “code committing” landscape has changed. I do not intend to tell the entire story about power structures in the Drupal community in a single article. I believe that issue credits, for instance, offer more clues about power structures. Rather, my analysis below argues that the process of committing code to Drupal core is a far more complex process than some might assume of a project with a BDFL.

Understanding Drupal Core Committers

Whereas Dries used to commit 100% of the core code, he now leads a team of core committers who “are the people in the project with commit access to Drupal core.” In other words, core committers are the people who can make changes to Drupal core. We can get an idea about the work of core committers from sites such as Open Hub or the GitLab contributor charts, but those charts omit key details about this team. In this analysis, I’d like to offer more context.

The Drupal core committer team has grown exponentially since the start of the Drupal codebase more than 19 years ago. At present, there are 11 core committers for Drupal 8, and from what I can tell, these are the dates that each new core committer was announced:

Unsurprisingly, one task of a core committer is to commit code. For a Drupal core committer to consider a change to Drupal, the proposed change must advance through a series of “core gates,” including the accessibility gate and the performance gate. Code must pass Drupal’s automated tests and meet Drupal’s coding standards. But even after making it through all of the “gates,” only a core committer can add, delete, or update Drupal code. At any given time, there might be 100 or more Drupal core issues that have (presumably) gone through the process of being proposed, discussed, developed, tested, and eventually, “Reviewed & tested by the community,” or RTBC.

Core committers can provide feedback on these RTBC issues, review and commit code from them, or change their status back to “Needs work” or “Needs review.” Just because core committers have the power to commit code does not necessarily mean they view their role as deciding what code gets into core and what does not. For example, Alex Pott told me, “I feel that I exert much more influence on the direction of core in what I choose to contribute code to than in what I commit.” He said that he views the RTBC queue more as a “TODO list” than a menu from which he can select what he wants to commit.

Many people might not realize that core committers do a lot more than just committing code. On the one hand, as Dries shared with me, “The hard work is not the actual committing – that only takes a few seconds. The hard work is all the reviews, feedback, and consensus building that happens prior to the actual commit.” Indeed, core committers contribute to the Drupal project in many ways that are difficult to measure. For instance, when core committers offer feedback in the issue queue, organize initiative meetings, or encourage other contributors, they do not get any easily measured “credit.” It was Jess who suggested that I work on the Configuration Management Initiative (CMI) and I will be forever grateful because her encouragement likely changed the course of my career.

The core committers play significant roles in the Drupal project, and those roles are not arbitrary. Each core committer has distinct responsibilities. According to the community documentation (a “living document”), “the BDFL assigns [core committers] certain areas of focus.” For instance, within the team of core committers, the Product Manager, Framework Manager, and Release Manager each have different responsibilities. The “core committers are a group of peers, and are expected to work closely together to achieve alignment, to consult each other freely when making difficult decisions, and to bring in the BDFL as needed.”

Part of my goal here is to show that the commit history can only tell part of the story about the team of core committers. I’d also like to point out that in this article I limit my focus to Drupal 8 core development, and not, for instance, the work of the Drupal 7 core committers, the maintainers of the 43,000+ contributed modules, the Drupal documentation initiative, conference selection committees, or any of the other groups of people who wield power in the Drupal community.

This work is one component of my larger project to evaluate publicly-available data sources to help determine if any of them might be beneficial to the Drupal community. I acknowledge that by counting countable things I risk highlighting trivial aspects of a thoughtful community or shifting attention away from what the Drupal community actually values. Nevertheless, I believe that interpreting Drupal’s commit history is a worthwhile undertaking, in part because it is publicly-available data that might be misinterpreted, but also because I think that a careful analysis reveals further evidence of a claim that Dries and I made in 2016: Drupal “is a healthy project” and “the Drupal community is far ahead in understanding how to sustain and scale the project.”

Who Commits Drupal Core Code?

The Git commit history cannot answer all of our questions, but it can answer some questions. As one GitLab employee put it, “Git commit messages are the fingerprints that you leave on the code you touch.” Commit messages tell us who has been pushing code and why. The messages form a line by line history of the Drupal core codebase, from the very first commit to the “birth” of Drupal 1.0.0, to today.

The commit history can answer questions such as, “Who has made the most commits to Drupal core?” Unsurprisingly, the answer to that question is “Dries”:

However, since 2015 Dries has dramatically reduced his core commits. In fact, he only has 4 commits since October 2015:

If someone just looked at the contributor charts or a graph like the one above, they might not realize that Dries is as committed to Drupal as ever. He spends less time obsessing about the code and architecture and more time setting strategy, helping the Drupal Association, talking to core committers, and getting funding for core initiatives and core committers. In recent years he has dedicated considerable time to communication and promotion, and he has been forthcoming with regard to his new role. He has been writing more in-depth blog posts about the various Drupal initiatives as well as other aspects of the project. In other words, he has intentionally shifted his focus away from committing towards other aspects of the project, and his “guiding principle” is to “optimize for impact.”

Another part of the reason that Dries has had fewer commits stems from the recent shift in effort from Drupal core to contrib. Overall commits to Drupal core have decreased since their highest point in 2013, and have been down considerably since the release of Drupal 8 in 2015:

But once again, we must interpret these data carefully. Even if the total number of commits to Drupal core has declined since 2015, the Drupal project continues to evolve. Since Drupal 8.0.0, BigPipe, Workflows, Migrate, Media, and Layout Builder are just a few of the new modules that have become stable, and the list of strategic initiatives remains ambitious. So while the data may seem to suggest that interest in Drupal core has waned, I suspect that, in fact, the opposite is true.

We can, on the other hand, use the git commit history to get a sense for how the other core committers have become involved in committing code to Drupal core. We can visualize all commits by day over the entire history of the Drupal codebase for each (current) individual core committer:

We get a better sense of the distribution of work by looking beyond total commits to the percentage of core commits per committer for each year. Using percentages better demonstrates how the work of the code committing has become far more distributed (in this chart, "colorful") than it was during the early years of Drupal's lifespan:

You might notice that the chart above does not include past core committers such as the Drupal 5 core committer, Neil Drumm (406 commits), or the Drupal 4.7 core committer, Gerhard Killesreiter (193 commits). I’m more interested in recent changes.

When we shift back to looking at total commits (rather than percentages) we can watch the team grow over the entire history of the Drupal project in the following animation, which stacks (ranks) committers by year based on their total number of commits:

 

One fact that caught my attention was that Alex Pott’s name topped the list for 6 of the last 7 years. But I’d like to stress again that this visualization can only tell part of the story. For instance, those numbers don’t reflect the fact that Alex quit his job in order to work on Drupal 8 (before becoming a core committer) or his dedication to working on “non-technical” issues, such as a recent change that replaced gendered language with gender-neutral language in the Drupal codebase. I admit to a particular bias because I have had the pleasure of giving talks as well as working with him on the Configuration Management Initiative (CMI), but I think the correct way to interpret these data is to conclude simply that Alex Pott, Nathaniel Catchpole, and Angie Byron are a few of the members of the core committer team who have been spending more of their time committing code.

We find a slightly different story when we look beyond just the number of commits. The commit history also contains the total number of modified files, as well as the number of added and deleted lines. Each commit includes entries like this:

2 files changed, 4 insertions(+), 15 deletions(-)
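
One straightforward way to surface these summaries from a local clone (a rough sketch, not the analysis code linked at the end of this article):

git log --shortstat --pretty=format:'%an %ad' --date=short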

Parsing the Git logs in order to measure insertions and deletions reveals a slightly different breakdown, with Nathaniel Catchpole’s name at the top of the list:

Differences in the ranking are largely the result of just a few issues that moved around more than 100,000 lines of code and significantly affected the totals, such as removing external dependencies, moving all core files under /core, converting to array syntax, not including vendor test code, and removing migrate-db.sh.

The commit history contains a wealth of additional fascinating data points that are beyond the scope of this article, but for now, I’d like to discuss just one more to suggest the changing nature in the land of core committing: commit messages. Every core commit includes a message that follows a prescribed pattern and includes the issue number, a comma-separated list of usernames, and a short summary of the change. The syntax looks like this:

Issue #52287 by sun, Dries: Fix outdated commit message standards

Combining all commit messages and removing the English language “stopwords” – such as “to,” “if,” “this,” and “were” – results in a list of words and usernames, with one core committer near the top of the list, alexpott (Alex Pott’s username):

 

Only one other user, Daniel Wehner (dawehner), is mentioned more than Alex Pott. I find it mildly interesting to see that “dawehner” and “alexpott” appear in more commit messages than words such as “tests,” “use,” “fix,” “entity,” “field,” or even “Drupal.” It also caught my attention that the word “dries” did not make my top 20 list. Thus, I would suggest that a basic ranking of the words used in commit messages does not provide much value and is not even a particularly good method to determine who is contributing code to Drupal – DrupalCores, for instance, does a much better job.

Nonetheless, I mention the commit messages because they are part of the commit history and because those messages remind us once again that core committers like Alex Pott do a lot more than commit code to the Drupal project – they also contribute a remarkable amount of code. Alex Pott, Jess, Gábor Hojtsy, Nathaniel Catchpole, and Alex Bronstein are each (as of this writing) among the top 20 contributors to Drupal 8. Moreover, this list of words brings us back to questions about the suitability of a term such as “BDFL.”

BDFL Comparisons

While Dries could still legitimately don a hat that reads “Undisputed Leader of the Drupal Project,” it seems clear that the dynamics of committing code to Drupal core have shifted and that core committers assume a variety of key roles in the success of the Drupal project. During the process of writing this article, someone even opened an issue on Drupal.org to “Officially replace the title BDFL with Project Lead.” Whatever his official title, the evolving structure of the core committer team has allowed Dries to focus on the overall direction of the Drupal project and spend less time involved in choices about the code that gets committed to Drupal core on a daily basis. And it’s a considerable amount of code – since Drupal 8 was released there have been more than 5719 commits to Drupal core, or roughly 4.42 commits per day.

While other well-known free software projects with a BDFL, such as Vim, only have one contributor, numerous other well-known projects have moved in a direction comparable to Drupal. As of this writing, Linus Torvalds sits at #37 on the list of contributors to the Linux kernel. Or perhaps more related to Drupal, Matt Mullenweg, who calls himself the BDFL of WordPress, is not listed as a core contributor to the project and is not the top contributor to the project – that honor goes to Sergey Biryukov, who has held it for a while.

Further, one could reasonably conclude that Drupal’s commit history calls into question a concern that many people, including me, have raised regarding the influence of Acquia (Dries’s company) in the Drupal community. Acquia sponsors a lot of Drupal development, including core committers. Angie Byron, Jess, Gábor Hojtsy, and Alex Bronstein are all paid by Acquia to work on Drupal core full-time. However, I still believe what Dries and I wrote in 2016 when we stated that we do not think Acquia should “contribute less. Instead, we would like to see more companies provide more leadership to Drupal and meaningfully contribute on Drupal.org.” On this topic, the commit logs indicate positive movement: since Drupal 8 was released, Alex Pott and Nathaniel Catchpole – the two most active core committers – have made 72% of the commits to Drupal core – and neither of them works for Acquia. So while everyone in the Drupal community owes a debt of gratitude to Acquia for their sponsorship of the Drupal project, we should also thank the companies that sponsor core committers like Alex Pott and Nathaniel Catchpole, including Thunder, Acro Media, Chapter Three, Third and Grove, and Tag1 Consulting.

And the other core committers? Well, I can’t possibly visualize all of the work that they do. They are helping coordinate core initiatives, such as the Admin UI & JavaScript Modernisation initiative and Drupal 9 initiative. They are working on Drupal’s out-of-the-box experience and ensuring consistency across APIs. They are helping other contributors collaborate more effectively and efficiently. They are coordinating with the security team and helping to remove release blockers. The core committers embody the spirit of the phrase that appears on every new Drupal installation: “Powered by Drupal.” I am grateful for their dedication to the Drupal project and the Drupal community. The work they do is often not highly visible, but it’s vital to the continued success of the project.

A deeper appreciation for the work of the Drupal core committers has been just one of the positive consequences of this project. My first attempts at interpreting Drupal’s commit history were somewhat misguided because I did not fully understand the inner workings of the team of core committers. But in fact, nobody can completely understand or represent what the core committers do, and I personally believe that the “Drupal community” is little more than a collection of stories we choose to believe. However, we live in a time where people desire to understand the world through dashboards that summarize data and where we gloss over complexities. Consequently, I feel more motivated than ever to continue my search for data that more accurately reflect the Drupal community for which I have so much respect. (Incidentally, if you are a statistician with an interest in free software, I would love to collaborate.) If we want a deeper understanding of who contributes to Drupal, we need more and better sources of information than Drupal’s “contributors” page. I accept that I will never concoct the magical visualization that perfectly represents “Drupal,” but I am enjoying the search.

Code for this project is available on GitLab. I would like to thank Cathy Theys, Megh Plunkett, Dries Buytaert, and Alex Pott for their thoughtful feedback on earlier drafts of this article.

Jun 03 2019

Fresh off the inaugural Flyover Camp, co-organizer Karl Kedrovsky talks organizing local user groups, what it means to give back to the community, and why some furniture is timeless.

You've done it once...you've done it one more time than most people, so you can absolutely give a talk or just share some knowledge on the subject.

May 27 2019

Cathy Theys could often be found roaming contribution days at DrupalCons organizing people, but she's recently switched gears back to development. I caught up with her in Seattle to find out why.

Somebody showed me that if you help other people in the issue queues, they'll help you back, and that snowballed.

May 23 2019

Mike and Matt invite Layout Initiative lead Tim Plunkett on the podcast to talk about everything related to Drupal's new Layout Builder: its use cases, issues, what's new in Drupal 8.7, and what's coming next!

People want visibility controls like they have in the block system... there are people working on that issue.

May 22 2019

One of the significant advantages of using free and open-source software is the ability to fix bugs without being dependent on external teams or organizations. As PHP and JavaScript developers, our team deals with in-development bugs and new features daily. So how do we get these changes onto sites before they’re released upstream?

In Drupal 7, there was a fairly standard approach to this. Since we would commit Drupal modules to a site’s git repository, we could directly apply patches to the code and commit them. A variety of approaches came up for keeping track of the applied patches, such as a /patches directory containing copies of each patch or using drush make.

With Drupal 8’s adoption of Composer for site builds, not only did the number of third-party dependencies increase, but a new best practice came along: /vendor (and by extension, /core and /modules/contrib) should not be committed to git. Instead, these are left out of the site repository and installed with composer install. This left applying patches in a tricky place.

I’m sure I wasn’t the only one who searched for “composer patches” and immediately found the composer-patches plugin. My first Drupal 8 work was building a suite of modules rather than whole sites, so it was a while before I actually used it day to day. However, when that time came, our team ran into several edge cases. Here are some of them.
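
For context, the plugin is configured in the root composer.json; a typical entry looks roughly like this (the package, description, and patch URL are illustrative):

{
    "extra": {
        "patches": {
            "drupal/core": {
                "Short description of the fix": "https://www.drupal.org/files/issues/example-fix.patch"
            }
        }
    }
}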

Some issues we ran into with patching

❗ Note that we originally did this investigation in June of 2018, so some of these issues may be different or may be fixed today.

Patching of composer.json is tricky

Sometimes, a change to a library or module also requires changes to that project’s composer.json file. Typically this is when a new API is added that the project now requires. For example, we needed to update the kevinrob/guzzle-cache-middleware library in the guzzle_cache module so we could use PHPUnit 6. Since composer-patches can only react to code after it’s installed, it can’t see the updated require line in the module’s composer.json. Even if composer were modified to detect the change and rebuild the project’s dependency tree, that is a slow process and would impact composer update even more.

“Just apply the patch” assumes consistent patch formatting

It’s a fact of life that different projects have different patch standards. While the widespread adoption of git has improved things, proper prefix detection is tricky. For example, we ran into a situation where a Drupal core Migrate patch that only added new files was being added to the wrong directory. There was a configuration option we could set to override this, but it was many hours of debugging to figure it out.

Patch tools themselves (git apply and patch) don’t have great APIs for programs to use. For example, git apply has had many subtle changes over the years that break expectations when users are running old versions. Trying to script these programs for use across the broad spectrum of systems is nearly impossible. There’s an open issue to rewrite patching in PHP for composer-patches, which is a big undertaking.

Patching on top of patches

Some sites may require multiple patches to the same module or library. For example, on a recent Drupal 8 site we launched, we floated between 5 to 10 patches to Drupal core itself (with the actual set of patches changing over time). If patches conflict with each other, resolving the conflicts is a lot of work. You have to apply one patch, and then apply each successive patch resolving conflicts at each step. Then, a new patch has to be generated and included in composer.json. When an upstream release occurs, or a patch is merged, the whole process starts over. This can be especially tricky in situations where a security advisory has been published, and you’re under time pressure to get a release out the door.

Faith in rm -rf vendor web/core web/modules/contrib

When patches fail to apply correctly, it can sometimes leave local vendor code in a broken state. For a long time we had an rm in our CI build scripts even though we were trying to improve build performance by caching vendor code. Several times a week developers on our team would report having to do the same on their locals. On the one hand, at least this was an option to get things up and running. However, it was concerning that this was our solution so much of the time when we couldn’t quickly find a root cause.

You still have to fork some projects

By default, applying patches is a root-only configuration in composer.json. That means that if a Drupal module requires a specific patch in another library, it won’t be applied. It’s possible to enable patching of dependencies, but it still requires the developer to be aware that the patches are required. If you’re maintaining a module or library used by different teams, it’s much less of a support burden to fork the patched project.
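
Enabling patches from dependencies is an opt-in in the root project’s composer.json. A minimal sketch, based on the composer-patches 1.x documentation (double-check the exact key against the plugin’s README):

"extra": {
    "enable-patching": true
}

Even with this enabled, every root project that consumes your module has to know to turn it on, which is why forking the patched project is often the lower-friction option for shared code.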

If you have an existing code base, and you haven’t hit any of these issues, then it’s fine to stick with what you have. We have some teams who haven’t had any trouble with patches. But, if the above issues are familiar, read on!

What does upstream use?

After encountering the above issues, we took a step back and looked beyond the Drupal island. Certainly, we weren’t alone. What are other composer-managed PHP projects doing to solve this problem?

After some research into what PHP projects outside of Drupal (and npm projects) do, we found that most use a forking model instead of a patching model.

I was curious about patch-focused solutions for other code dependency managers. I found patch-package for npm, which is the most popular package for applying patches during npm builds. Looking at their issue queue, it seemed like that entirely independent project experienced similar classes of bugs (#11, #49, #96) as composer-patches. For example, there are bugs related to patches applying differently based on the host environment, handling multiple patches to a single package, or applying patches in libraries and not root projects.

I’m in no way trying to say that these issues can’t be fixed, but it does provide some evidence that my prior troubles with composer-patches were more due to the complex nature of the problem than the specific implementation. As someone who routinely works on projects where a code base is handed off to a client team for maintenance, solving bugs with less code (in this case, by removing a plugin) is almost always the best solution.

The Steps for Patching and Forking

Let’s suppose that we need to patch and fork the Pathauto module.

  1. Fork Pathauto to your user or organization. In most cases, forking to a public repository is fine. If it's a Drupal module, you can import the repository to GitHub using their import tool. Any other Git code host will work too.
  2. Clone from the fork you created and check out the tag you are patching against.
  3. Create a branch called patched.
  4. Apply the patch there, and file and merge a pull request to your patched branch with the changes.
    1. Optionally, you can create tags like 1.2-patch1, incrementing patch1 each time you change the patch set against a given tag.
  5. In your site, add the newly forked repository to composer.json:
    "repositories": [
     {
         "type": "vcs",
         "url": "https://github.com/lullabot/drupal-pathauto"
     },
     {
         "type": "composer",
         "url": "https://packages.drupal.org/8"
     }
    ]
    
    
  6. Use the new branch or tag in the require line for the project you are patching, such as dev-patched or ^1.2-patch1, as shown in the sketch below.
  7. Run composer update drupal/pathauto and you will get the new version.
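
As a sketch, the resulting require entry might look like the following. The package name stays drupal/pathauto because the fork’s composer.json still declares that name; the constraint is whatever branch or tag you created above:

"require": {
    "drupal/pathauto": "dev-patched"
}

If you created a tag instead, use a constraint like "^1.2-patch1" in place of dev-patched.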

We like to create a PATCHES.md file in each fork to keep track of the current patches, but it’s not required.

Rerolling your patch

If you haven’t already, you’ll need to add a second remote to your local git checkout. For example, if you imported Pathauto to GitHub, you’ll need to add the Drupal.org git repository with git remote add drupal https://git.drupalcode.org/project/pathauto.git. Then, to fetch new updates, run git fetch --tags drupal.

In most cases, you can simply merge the new Pathauto tag into your patched branch with git merge <new pathauto tag>. The patch(es) previously applied will be left as-is. If there are conflicts, you can use your normal merge resolution tools to resolve them, and then update the issue with the new patch.
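
Putting that together, rerolling the Pathauto fork against a new upstream release might look roughly like this (the tag names are only examples):

$ git remote add drupal https://git.drupalcode.org/project/pathauto.git
$ git fetch --tags drupal
$ git checkout patched
$ git merge 8.x-1.4        # Hypothetical new upstream tag; resolve any conflicts, then commit.
$ git push origin patched  # Push the updated branch (or a new tag, e.g. 1.4-patch1) to your fork.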

Removing your patch

If your patch has been merged, and it was the only change you’d made to the project, you could simply abandon the fork and remove it from your site’s composer.json.

Otherwise, in most cases you can simply merge the new tag into your patched branch and git will detect that the changed files now have identical contents. You can double check by running git diff <tag> to see exactly what changes your fork has compared to a given version.
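
For example, once your change ships in a hypothetical 8.x-1.5 release:

$ git fetch --tags drupal
$ git checkout patched
$ git merge 8.x-1.5   # The previously patched files now match upstream.
$ git diff 8.x-1.5    # Should show only whatever local changes, if any, remain in the fork.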

Handling Drupal Core

There are two small cases with Drupal to be aware of.

First, the Drupal Composer project has a composer script that runs to download index.php, robots.txt, and other “scaffolding” files when you change Drupal core releases. For example, if you create an 8.7.0-patch1 tag in your fork, the scaffolding script will look for that tag on Drupal.org, which won’t exist.

The best way to handle this is with Composer inline aliases, specifying in require:

"drupal/core": "8.7.0-patch1 as 8.7.0"

Unfortunately, drupal-scaffold doesn’t check for inline aliases, but there’s an open pull request adding that support. Or, you can ignore the errors and manually download the files when updating Drupal.

A slightly trickier issue comes from how Drupal core handles release branches. Most other projects using Git, such as Symfony, will forward-merge changes from older versions to newer versions. For example, with Symfony you could checkout the 3.4 branch, and merge in 4.1 without any conflicts.

Drupal treats its branches as independent due to the patch-based workflow. A given bug may be fixed in both 8.6 and 8.7, but the actual changes might conflict with each other. For example, merging 8.7.0 into 8.6.15 leads to dozens of conflicts.

Luckily, git has tools for this situation.

$ git checkout 8.7.0          # Check out the new version of Drupal.
$ git checkout -b update-core # Create a branch.
$ git merge -s ours 8.6.15    # Merge branches, ignoring all changes from 8.6.15. The code on disk is identical to 8.7.0.
$ git merge patched           # Merge just the changes left over from our patched branch into 8.7.0 - only patch-related conflicts remain.

Once the transition to GitLab on Drupal.org is complete, and we start using git merges in Drupal workflows, these extra steps should go away.

Transitioning an existing project

When we moved to this workflow, we didn’t want to update every single patched dependency at once. At first, we just updated Drupal core as it was the project we had the most trouble with, and we wanted to be sure we’d see a real improvement. In fact, the site I first used this workflow on still has a few patches applied with composer-patches. For existing projects, it’s completely reasonable to transition dependencies one at a time as they need to be updated for other reasons.

Results

Over a period of several months, we incrementally replaced patches with forks as we updated dependencies. Over that time, we saw:

  • Fewer git checkouts in vendor. With patches, you need to set "preferred-install": "source", which causes Composer to use git to fetch dependencies. With forks, Composer uses the GitHub API to download a zip of the code, which is significantly faster.

  • A much faster composer install.
  • Easier peer review, because the patch is applied in the forked repository as a pull request.
  • Simplified updating of patched dependencies, as we could use all of git’s built-in tools to help with conflict resolution.
  • Forks completely solved problems our front-end developers had with composer install failing to apply patches correctly.
  • An identical workflow for Drupal modules, PHP Libraries, and node packages.
  • Reduced maintenance by sharing patch sets with multiple projects.
    • For example, we are working on a Drupal 8 project for a client that is their second Drupal 8 site. Rather than track and apply patches individually, we point Composer to the existing Drupal 8 fork from the first site. This significantly reduces update and testing time for new core releases.
  • Simplified checks for dependency updates by running composer outdated.

There were a few downsides:

  • Sometimes a slower composer update, especially if dependencies have lots of tags. In practice, a straight composer update on a site takes between one and two minutes.
  • Many more repositories to manage.
  • Simple patch updates with no conflicts are a little more work as you have to push to your forked repository.
  • If your team doesn’t have any private repositories with custom modules as a Composer dependency, team members running composer update may need to generate a GitHub API token (see the sketch below). Luckily, Composer provides a one-click link to generate one.
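
If Composer does prompt for a token, you can also store one ahead of time using Composer’s standard GitHub OAuth configuration; a one-line sketch:

$ composer config --global github-oauth.github.com <personal-access-token>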

Many thanks to James Sansbury, Juampy NR, and Eduardo Telaya for their technical reviews.

May 20 2019
May 20

For years, SimplyTest.me has provided a once-and-done tool for testing Drupal, and Adam Bergstein has recently taken over maintainership. In this episode we find out why, how you can help, and coffee!

May 17 2019
May 17

In our booth during DrupalCon Seattle this year, we had the pleasure of speaking with people in the Drupal community about our new Support & Maintenance offering. The response we heard most often was, “Doesn’t Lullabot already do support and maintenance?” The short answer is yes. We have supported and continue to support our clients and maintain their websites and applications, but never before have we intentionally started working with a new client on a strict support and maintenance basis, until now.

Over the past eight years, our primary focus has been engaging with clients as early as possible, often beginning with Strategy and Design and then implementing that vision with our Development team. We would perform or augment large capital web projects and then roll off when the project moved into a day-to-day operations mode, handing it back to internal resources or even sometimes to other Drupal shops for long-term maintenance.

Why Support & Maintenance

For the past couple of years we’ve seen:

  • Increasing requests for smaller engagements,
  • Drupal 7 and 8 coming to end-of-life, and
  • Clients wanting us to remain in a limited capacity to support their team long after the site is “done.” 

To meet these needs, we’ve assembled a team of experienced developers with a good mix of project management, front-end, and back-end expertise, as well as a strong emphasis on communication. Some of us have even taught college classes. We’ve chosen team members who are cross-functional; while they are experts in some areas they possess a strong comprehension of all areas related to enterprise development, deployment, and best practices. Surprisingly, Support & Maintenance has done a lot more mentorship than initially anticipated. (We love to teach and share what we know.) As with all of our clients, those working with our Support & Maintenance department have access to our hive mind, which includes the leads of multiple Drupal core initiatives, maintainers of popular modules, conference speakers, and authors of numerous Drupal publications.

Tools and Process

Each project and each client is different. For that reason, we endeavor to be flexible. Some clients may already be using their own instance of Jira for ticketing, some may have no ticketing system at all. We endeavor to learn our client’s existing processes before recommending changes. If there’s a process issue that’s preventing us from being effective, we’ll let you know and make suggestions for improvement. Some of our clients use Pantheon or Acquia, while others may deploy sites on AWS or their own internal infrastructure. Our Support & Maintenance team is aware of these differences and we pride ourselves on being adaptable and diverse in our areas of expertise. We’re able to adjust quickly to fit the needs of each individual client and project.

Next Steps

If your organization has worked with us in the past and wants to work with us again, or has always wanted to work with us but feared your project or budget was not enough to justify a full-time developer, please reach out to our team to see if Support & Maintenance might be a good alternative. 

David Burns

Thumbnail

That special place where people, code, and business requirements meet is the place that I want to be.

May 15 2019
May 15

Before I dive into our Mental Health Initiative, I'll tell you how it came to exist. Leading up to our annual team retreat, I send a team survey to discover what excites or worries people. The questions change year to year, but here are what appear to be the perennial questions. I'll include the majority response from the team to each item as well.

This survey has had a significant impact on where we've put our focus and attention this year. We've created a support and maintenance offering and continue to find ways to celebrate our team for all the hard work and effort they put into their projects. But when I asked the question about making one change to our company, a couple of people responded with "better support for mental health." Having recently attended DrupalCorn in Iowa and listening to J.D. Flynn's inspiring and courageous presentation "Erasing the Stigma, Mental Health in Tech," I worried that we might not be doing enough, but I also didn't know if it was a problem—because we don't talk much about mental illness.

There's a stigma connected to mental illness, even when it's a sickness...not a weakness. Thinking of mental illness as a weakness is like telling someone who is wearing glasses that they aren't looking hard enough and shouldn't need glasses in the first place. As a company, it makes sense to help your team stay healthy and happy, and to reduce the number of sick days people need to take. Depression, anxiety, and mood disorders all actively work to undermine performance and contribute to burnout. Burnout is awful.

Mental health is as essential for knowledge work in the 21st century as physical health was for physical labor in the past. As psychiatrist Dr. Michael Freeman writes:

By the end of the twentieth century, creativity became the driving force behind America's explosive economic growth. The creativity that drives economic growth is also a common feature of people with bipolar spectrum conditions, depression, substance use conditions, and ADHD.

In other words, there's likely a correlation between the rise of and demand for creative work and mental illness. Further, for as much as remote work lowers stress levels, it can foster loneliness and isolation, which, left unchecked, can lead to anxiety and depression. Or as the CEO of Doist, Amir Salihefendic, said, "When you don't see your coworkers in person every day, it's easy to assume that everything is ok when it's not."

So in August of 2018, I put out a call to action to form a Mental Health Initiative at Lullabot and many people stepped forward to better understand the issues surrounding mental health. Something useful from the beginning was to have a Human Resources representative as part of the team because many of the questions and insights led back to insurance-related questions. And, since we have insurance coverage for people in the US, Canada, and the UK, it can get confusing.

Phase 1: Forming a mission statement

After several meetings, we defined the purpose of the group.

To promote optimal mental health at Lullabot and reduce the stigma of mental illness by raising awareness, providing resources, and increasing support.

The first part of the mission statement acknowledges that we have two goals: to promote optimal mental health and to reduce the stigma of mental illness. In other words, this isn't just for people with a diagnosed illness. We also want to provide support for any mental health issues, which could be other stressors or challenges. Our strategy to make this happen is made of three parts:

  1. Raising awareness

  2. Providing resources

  3. Increasing support

With a three-pronged strategy in place, we could begin to categorize the services and support we currently provide.

Phase 2: What do we currently provide?

After we formed our mission statement, it was time to audit our existing support and services. We basically looked at everything we did for mental health and mapped each item to one of the three strategies outlined in our mission statement.

We found that we did a lot as a company to increase support and provide resources. At the top of those lists were things like creating a #being-human Slack channel, team calls, manager one-on-ones, and so on. For resources, we provide health insurance with coverage for mental health services, an employee assistance program, and a handbook. Where we noticeably fell short was on raising awareness. We didn't really have anything on that list.

Next, we did a quick brainstorm of the things it might be helpful to add. Some examples were taking mental health first aid classes, increasing our education and awareness on mental health, discussing burnout and so on. But before we went any further, we knew it was time to get insight and direction from the team.

What does our team need?

While we have lots of ideas on ways we can improve mental health, none of it matters if it doesn't help the team. Nothing beats asking people for input over making assumptions on their behalf. So we created a survey to benchmark and set goals for future improvement. The survey was anonymous and optional, with the results only available to Human Resources, and it was primarily modeled on the fantastic work of the Open Source Mental Health Initiative.

Phase 3: Next steps

Currently, we're in the process of aggregating the survey feedback from our team. Some of the broad takeaways so far:

  • Many team members found our insurance coverage lacking support for mental health needs. Specifically, some said the remaining cost after deductibles was still too high.

  • 75% of the team wants to learn more about our insurance coverage as it relates to mental health. We need to learn the kinds of questions people need answered and make it easy for them to get that information.

  • A little less than half the respondents said mental health issues sometimes interfere with work productivity. A "mental health issue" can be anything from daily stressors, challenges with working remotely, and finding coping mechanisms, to not-yet-diagnosed mental health conditions.

  • 12% of the team said they would not feel comfortable discussing a mental health issue with a supervisor.

  • 70% of the team was interested in learning more about burnout, anxiety, depression and understanding mental illness in general. This speaks to a significant educational opportunity.

It's scary to find out we’re not doing enough. But we would have never made it this far if we hadn't brought it up in the first place, and ignorance isn't an option. There is a lot to unpack in the five bullet points above. Is health coverage worse for certain countries? What does supplemental care look like? How does workers' compensation help, if at all? Is there training we can offer managers to understand mental health issues better and cultivate compassion? How do we best spread awareness of mental health? How do we keep these conversations engaging and not too heavy?

We don't have all the answers, but we've lifted the veil of ignorance and set a clear path forward with actionable steps we can take. We also hope that sharing our efforts will contribute to reducing the stigma of mental illness. If you have resources or ways you can help support our initiative, please let us know. For example, if you are someone who routinely speaks to others about mental health, we'd love to connect and possibly have you talk to our team during a lunch-and-learn. Meanwhile, we will continue to share our insights with our broader communities as well.

Huge thanks to J.D. Flynn who paved the way for us to have this conversation by presenting the topic with courage and compassion. Please consider supporting him so he can continue his mental health advocacy efforts. And thank you to the work of the OSMI group who paved the way for our internal survey. Finally, we wouldn’t have made any progress without the leadership of Kris Konrady and Marissa Epstein. Huge thanks to the rest of our Mental Health working group: Angus Mak, Greg Dunlap, Chris Albrecht, Juan Olalla, Dave Reid, and James Sansbury.

May 03 2019
May 03

For a long time now, I’ve preferred Vagrant for local development. My starting point of choice for using Vagrant on a project has been the excellent trusty32-lamp VM, maintained by Andrew Berry. However, with Ubuntu 14.04 reaching end of life, Andrew thought to merge the best of trusty32-lamp VM with Laravel’s Homestead. Thus, in a beautiful instance of open source collaboration, it was so.

Homestead is a similarly fashioned Vagrant box, maintained by the Laravel community and built on Ubuntu 18.04. The result of the marriage is a feature-packed, ready-to-go local development environment that includes multiple versions of PHP, your choice of nginx or apache, Xdebug support, profiling with xhprof and xhgui, your choice of MySQL or MariaDB, and so much more.

Let’s look at how you would get set up with Homestead for your Drupal project.

tldr;

If you’re the type to dive into code rather than wade through an article, and you’ve worked with Vagrant before, run this and take the box for a spin. It sets up a stock Drupal 8 site, ready to install. The database name is drupal_homestead, and the database user and password to install Drupal are homestead / secret.

$ composer create-project m4olivei/drupal-project:8.x-dev drupal_homestead
$ cd drupal_homestead
$ vendor/bin/homestead make
$ vagrant up

Onwards.

Preliminaries

I’m kind of assuming that you’ve worked with Vagrant before, but if you haven’t, fear not! Homestead makes the world of Vagrant very approachable. You’ll just need some software before continuing: a VM provider, e.g., VirtualBox, VMware, Parallels, or Hyper-V. I use VirtualBox, as it’s free and the most straightforward to install. Also, you’ll need Vagrant.

Composer all the things

One really nice thing about Homestead is that it can be installed and set up as a Composer package. This means that you can easily add and share a Homestead local setup with everyone on your project via version control.

We’ll start with a clone of the Composer Drupal project and add Homestead to it. If you’re adding Homestead to an existing project, skip this step.

$ composer create-project drupal-composer/drupal-project:8.x-dev drupal_homestead --no-interaction
$ cd drupal_homestead
$ git init .
$ git add .
$ git commit -m "Initial commit"

Now that we have a Drupal site to add Homestead to, change into your project directory (wherever your root composer.json file is) and continue by requiring the laravel/homestead package:

$ composer require laravel/homestead --dev

Home at last

At this point, we’re ready to set up Homestead for our project. Homestead comes with a handy console application that will scaffold the files required to provision the Vagrant box. Run the following:

$ vendor/bin/homestead make

This will copy a handful of files to your project directory:

  • Homestead.yaml
  • Vagrantfile
  • after.sh
  • aliases

At the very least we’ll want to make tweaks to Homestead.yaml. By editing Homestead.yaml we can easily customize the Vagrant box to our liking. In a typical Vagrant box setup, you would edit the Vagrantfile directly, but here, Homestead exposes the essentials to customize on a per project basis in a much more palatable form. Open up the Homestead.yaml file in your editor of choice. As of this writing, it’ll look something like this:

ip: 192.168.10.10
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    -
        map: /Users/m4olivei/projects/drupal_homestead
        to: /home/vagrant/code
sites:
    -
        map: homestead.test
        to: /home/vagrant/code/public
databases:
    - homestead
name: drupal-homestead
hostname: drupal-homestead

It’s worth highlighting a couple of things here. If you have more than a single-core machine, bump cpus to match. For my 2013 MacBook with a Core i7, I’ve set this to cpus: 4. The VM won’t suck CPU when it’s idle, so take advantage of the performance.

Next, folders lists all the folders you wish to share with your Homestead environment. As files change on your local machine, they are kept in sync between your local machine and the Homestead environment *.

folders:
    -
        map: /Users/m4olivei/projects/drupal_homestead
        to: /home/vagrant/code

Here all the files from /Users/m4olivei/projects/drupal_homestead will be shared to the /home/vagrant/code folder inside the Vagrant box. 

Next, sites, as you might guess, lists all of the websites hosted inside the Vagrant box. Homestead can be used for multiple projects, but I prefer to keep it to a single project per VM, especially since Drupal codebases tend to be so huge, so we’ll just keep the one site. Homestead ships with the option to use either nginx or apache as the web server. The default is nginx, but if you prefer Apache, like I do, you can configure that using the type property.

sites:
    -
        map: drupal-homestead.local
        to: /home/vagrant/code/web
        type: "apache"
        xhgui: "true"

Notice I’ve also changed the map property to drupal-homestead.local. That’s for a couple of reasons. First, I want my domain to be unique. Homestead always starts you with a homestead.test domain, assuming you may use the same Homestead instance for all your projects, but that’s not the case for per-project setups. Second, I’ve used .local as the TLD to take advantage of mDNS. Homestead is configured to work with mDNS**, which means you shouldn’t have to mess around with your /etc/hosts file (see caveats), which is nice. I also changed the to property to reflect the web root for our project. The Composer Drupal project sets that up as <project root>/web. Finally, I’ve added xhgui: "true" to my site configuration. We’ll talk more about that later.

Next, we’ll customize the database name:

databases:
    - drupal_homestead

Homestead will create an empty database for you when you first bring the box up. Homestead can also automatically back up your database when your Vagrant box is destroyed. If you want that feature, simply add backup: true to the bottom of your Homestead.yaml.

Finally, we’ll want to add some services. We’ll need mongodb for profiling with xhprof and xhgui. I also like to use MariaDB, rather than MySQL, and Homestead nicely supports that. Simply add this to the bottom of your Homestead.yaml:

mongodb: true
mariadb: true

In the end, we have a Homestead.yaml that looks like this:

ip: 192.168.10.10
memory: 2048
cpus: 4
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    -
        map: /Users/m4olivei/projects/drupal_homestead
        to: /home/vagrant/code
sites:
    -
        map: drupal-homestead.local
        to: /home/vagrant/code/web
        type: "apache"
        xhgui: "true"
databases:
    - drupal_homestead
name: drupal-homestead
hostname: drupal-homestead.local
mariadb: true
mongodb: true

There are plenty more configuration options available. If you want to learn more, see the Homestead documentation.

Fire it up

With all our configuration done, we’re ready to fire it up. In your project directory simply run:

$ vagrant up

On your first time running this, it will take quite a while. It needs to first get the base box, which is a pretty hefty download. It then does all of the provisioning to install and configure all the services necessary. Once it’s finished, visit http://drupal-homestead.local in your browser and you’ll be greeted by the familiar Drupal 8 install screen. Yay!

Features

You get a lot all nicely configured for you with Homestead. I’ll highlight some of my favourites, having come from trusty32-lamp VM. I’m still exploring all the Homestead goodness.

Xdebug

Xdebug comes bundled with Homestead. To enable it, SSH into the Vagrant box using vagrant ssh and then run:

$ sudo phpenmod xdebug
$ sudo systemctl restart php7.3-fpm

Once enabled, follow your IDE’s instructions to enable debugging.

Profiling with Tideways (xhprof) and xhgui

Every now and again, I find it useful to have a profiler for weeding out poorly performing parts of code. Homestead comes bundled with Tideways and xhgui, which make this exercise straightforward. Simply append an xhgui=on query string parameter to any web request, and that request and any that follow are profiled. To read the reports, navigate to /xhgui, e.g., for our configuration above, http://drupal-homestead.local/xhgui.

Database snapshots

Here's another of my favourite features. From the documentation:

Homestead supports freezing the state of MySQL and MariaDB databases and branching between them using Logical MySQL Manager. For example, imagine working on a site with a multi-gigabyte database. You can import the database and take a snapshot. After doing some work and creating some test content locally, you may quickly restore back to the original state.

Homestead documentation

I’ve found this to be a huge time saver for instances where I need to work on issues that only manifest with certain application state stored in the database. Load the database with the errant application state, create a branch using sudo lmm branch errant-state, and try your fix that processes and changes that application state. If it doesn’t work, run sudo lmm merge errant-state to go back and try again.
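
Putting the commands from that workflow together (the branch name is just an example):

$ vagrant ssh
$ sudo lmm branch errant-state   # Snapshot the current database state under a branch name.
# ... try your fix, which may change the application state ...
$ sudo lmm merge errant-state    # Go back to the snapshot and try again.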

Portability and consistency of Vagrant

This is more of a benefit of Vagrant than Homestead, but your local dev environment becomes consistent across platforms and sharable. It solves the classic works-on-my-machine issue without being overly complicated like Docker can be. Homestead does add some simplicity to the configuration over just using Vagrant.

Moar

There are many more features packed in. I mentioned the ease of choosing between apache and nginx. Flipping between PHP versions is also easy to do. Front-end tooling, including node, yarn, bower, grunt, and gulp, is included. Your choice of DBMS between MySQL, MariaDB, PostgreSQL, and SQLite is made incredibly easy. Read more about all the features of Homestead in the Homestead documentation.

Caveats

* File sharing

By default, the type of share used will be automatically chosen for your environment. I’ve personally found that for really large Drupal projects, it’s better for performance to set the share type to rsync. Here is an example of that setup:

folders:
    -
        map: /Users/m4olivei/projects/drupal_homestead
        to: /home/vagrant/code
        type: "rsync"
        options:
          rsync__exclude: [".git/", ".idea"]
          rsync__args: ["--verbose", "--archive", "--delete", "-z", "--chmod=g+rwX"]

An rsync share carries some added maintenance overhead. Namely, you need to ensure that you’re running vagrant rsync-auto to automatically detect and share changes on the host up to the Vagrant box. If you need to change files in the Vagrant box, you would kill any vagrant rsync-auto process you have running, vagrant ssh into the box, make your changes, and then on your host machine run vagrant rsync-back before running vagrant rsync-auto again. Not ideal, but worth it for the added performance gain and all the joys of Vagrant local development. There are other options for type, including nfs. See the Homestead documentation for more details, under “Configuring shared folders”.

** DNS issues

A handful of times I’ve run into issues with mDNS where the *.local domains don’t resolve. I’ve seen this after running vagrant up for the first time on a new vagrant box. In that case, I’ve found the fix to be simply reloading the vagrant box by running vagrant up. In another instance, I’ve found that *.local domains fail to resolve after using Cisco AnyConnect VPN. For this case, it sometimes works to reload the vagrant box, and in others, I’ve only been able to fix it by restarting my machine.

Acknowledgments

Big thanks to the following individuals for help with this article:

  • Andrew Berry for porting features from trusty32-lamp VM to Homestead and also for technical and editorial feedback.
  • Matt Witherow for technical and editorial feedback.
  • Photo by Polina Rytova on Unsplash

May 02 2019
May 02

Matt and Mike talk with Angie "Webchick" Byron, Gábor Hojtsy, and Nathaniel Catchpole about the next year's release of Drupal 9. We discuss what's new, what (if anything) will break, and what will remain compatible.

Gábor and Angie selfie at DrupalCon Nashville
Apr 23 2019
Apr 23

Recent Aaron Winborn Award winner, Leslie Glynn, talks about what keeps her coming back to DrupalCon, her love for illuminating people, and when the heck will Tom Brady retire?

"...to continue learning new things, and that's where you learn, is from the community."

Apr 19 2019
Apr 19

Mike and Matt gather a random group of Drupalers in Seattle, drag them back to a hotel room, and record a podcast. 

Apr 17 2019
Apr 17

CircleCI is great at enabling developers to define a set of images to spin up an environment for testing. When dealing with a website with a database, the usual build process involves downloading a database dump, installing it, and then performing tests. Here is a sample job that follows this approach. Notice where the majority of the time is allocated:

CircleCI spent two minutes downloading and installing a fairly small database (92MB compressed, 366MB uncompressed). This may be acceptable, but what if the database size was 1GB? CircleCI jobs would take several minutes to complete, which could slow down the development team’s performance. In this article, we will discover how moving the database installation into a custom Docker image helps to dramatically reduce build times on CircleCI. Let’s go straight to the results and see the log of a job where this has been already implemented:

Notice the timings: we are down to eight seconds while the previous screenshot showed two minutes. Isn’t that great? Let’s compare how the traditional method works to the suggested way of doing it.

Installing the database at CircleCI

Here is a CircleCI config file, which uses the official MariaDB image and then downloads and installs a database:
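
The embedded configuration isn’t reproduced here, but a rough sketch of this kind of job in CircleCI 2.0 syntax looks like the following. The image tags, dump URL, and test command are placeholders, and the sketch assumes the mysql client is available in the primary container:

version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.3-apache   # Primary container where the steps run.
      - image: mariadb:10.3              # Database service container.
        environment:
          MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
          MYSQL_DATABASE: drupal
    steps:
      - checkout
      - run:
          name: Download and install the database
          command: |
            curl -o /tmp/db.sql.gz https://example.com/nightly/db.sql.gz
            gunzip /tmp/db.sql.gz
            mysql -h 127.0.0.1 -u root drupal < /tmp/db.sql
      - run:
          name: Run tests
          command: ./vendor/bin/phpunit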

With the above setup, CircleCI spends two minutes downloading the database and installing it. Now, let’s see what this job would look like if the database image had the database in it:
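
Again as a sketch, the only real change is swapping the stock MariaDB image for the custom, pre-populated one (the image name is a placeholder for whatever you publish to Docker Hub):

version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.3-apache    # Primary container where the steps run.
      - image: yourorg/drupal8-db:latest  # Custom image with the database already imported.
    steps:
      - checkout
      - run:
          name: Run tests
          command: ./vendor/bin/phpunit   # No download or import step needed.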

With the above setup, CircleCI takes only eight seconds because the database is there once the database image has been pulled and the container has started. Notice that we are using a custom Docker image hosted at Docker Hub:

The above repository is in charge of building and hosting the image that contains the database. In the next section, we will discover how it works.

Installing the database via a Dockerfile

I created a repository on GitHub that contains both the CircleCI job above and the Dockerfile, which Docker Hub uses to build the custom image. Here is the Dockerfile, inspired by lindycoder’s prepopulated MySQL database. Notice the highlighted section, which downloads and places the database in a directory where MariaDB will find it and import it:
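
The embedded Dockerfile isn’t shown here, but the core idea, sketched against the official MariaDB image, is below. The dump URL is a placeholder, and the actual repository (following lindycoder’s approach) additionally runs the import during the image build so the data is baked into the published image rather than imported when the container starts:

FROM mariadb:10.3

ENV MYSQL_ALLOW_EMPTY_PASSWORD=yes \
    MYSQL_DATABASE=drupal

# Files placed in /docker-entrypoint-initdb.d/ are picked up automatically when
# the database initializes; the official image supports plain .sql and .sql.gz dumps.
ADD https://example.com/nightly/db.sql.gz /docker-entrypoint-initdb.d/db.sql.gz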

The GitHub repository is connected with Docker Hub, which has a build trigger to build the image that CircleCI uses in the continuous integration jobs:

Benefits

In addition to achieving faster jobs at CircleCI, once we had the process in place we realized that it was making a positive impact in other areas such as:

  • Faster build times on Tugboat, our tool for deploying a website preview for every pull request on GitHub.
  • Developers started using the database image locally as it gave them an easy way to get a database for their local environment.

Hosting private images

This article’s example uses a public repository on Docker Hub to store the image that contains the database. For a real project, you would host this in a private repository. Here are a couple of options for doing so:

Quay.io

Quay.io costs $15 per month for five repositories. What I like the most about Quay.io is that it allows the use of robot accounts, which can be used on CI services like CircleCI in order to pull images without having to share your Quay.io credentials as environment variables. Instead, you would create a robot account which has read-only permissions on a single repository on Quay.io and then use the resulting credentials on CircleCI.

Docker Hub

Docker Hub's free account supports a single private repository. However, it does not support robot accounts like Quay.io does, so unless you create a Docker account just for the project you're working on, you would have to store your personal Docker Hub password with a third-party service like CircleCI as an environment variable.

Try it out

Find out how this implementation approach works for your project. If you run into any issues or have feedback, please post a comment below or in the repository.

Acknowledgments

This article was possible thanks to the help of the following folks:

Juampy NR

Juampy Headshot

Loves optimizing development workflows. Publishes articles, books, and code.

Apr 05 2019
Apr 05

When I attended my first DrupalCon in San Francisco I brought three suits. At that point, I had been speaking at academic conferences for a decade, and in my experience, conferences were places where attendees dressed formally and speakers literally read their papers (here's a real example from a 2005 Women's and Gender Studies Conference where I spoke). I arrived in San Francisco thinking I would spend some time exploring the city while I was there, but I ended up spending nearly all of my extra time in the ChX Coder Lounge learning everything I could about Drupal from kind people in the Drupal community. I never really left the conference other than to eat or attend a party. In fact, I was so excited about Drupal that some of my friends had to stop following me on social media.

DrupalCon San Francisco changed the course of my life. That said, the Drupal community feels very different today, and that's okay with me. As I wrote in 2015, "Drupal is always changing. The community constantly reinvents Drupal with new code and reimagines Drupal with new words." This week, some people feel "excited" while others are "frustrated." Next week at DrupalCon, we will hear a lot more choice adjectives to describe Drupal and its community. Of course, none of them will be correct.

Every once in a while, I find it helpful to remember that there is no such thing as Drupal. Certainly, we need to use words like "Drupal" so we can talk to each other, but I have yet to find even a single, unchanging characteristic that constitutes Drupal. Even if I am demonstrably incorrect, I have discovered over the past couple of years that I experience the most happiness when I don't expect anything from DrupalCon or Drupal. Instead, I try to focus on a single goal: find ways to use Drupal to help other people. Naturally, I fall short of this goal regularly, but just having the goal has produced beneficial results.

I have invested considerable effort trying to "find Drupal" by talking to people in the Drupal community, writing about Drupal, studying commit credit data, and every time I look, I find something different. After processing so much data, I started to wonder if any possible metric could tell us something about, for example, the diversity of the community. I even opened an issue in drupaldiversity (which was moved to drupal.org) asking about the prospect of finding specific, measurable diversity goals. If there is no such thing as Drupal, then it might seem as if the answer to my question about metrics doesn't matter. But I believe that moving forward together with goals in mind offers incredible benefits and that agreeing on goals is a serious challenge. If "finding Drupal" is difficult, trying to grasp all of Drupal's communities -- and the distinct challenges that each of them faces -- is nearly impossible.

To counter thoughts of that kind, I find tremendous benefit in focusing on the here and now. What can I do today? What good does understanding all of Drupal do if I don't take the time to help this person who turned to me for advice? Who can I turn to when I'm stuck? The frustration caused by the Drupal code and the Drupal community is real, but in the face of such feelings, it seems like my best option is often to let go and move forward. It can feel like letting go of anger and frustration is not always possible, but I am not ready to give up on the idea that letting go is always an option.

While I believe that Drupal doesn't actually exist, I also believe that people's actions have real consequences. Harassing people in the issue queues causes pain. Insecure and non-functional code causes headaches. Unwanted advances are actually unwanted. As important as it is for us to all read the DrupalCon Code of Conduct, it's equally important to scroll down on that page for the reminder that "We're all in this together."

During my time in the Drupal community, I have witnessed the suffering produced when we use labels that separate us. "Drupal Rock Star" is one of those labels, and as much as I'm flattered that someone thinks I could potentially "end the myth of the Drupal rockstar with [my] DrupalCon presentation, The Imaginary Band of Drupal Rock Stars," I can't. But I can demonstrate how the rock star myth causes pain and I can offer alternative constructions.

In the course of my research over the past few months, I've been exploring what actions the Drupal community values and how some other musical metaphors such as "arranger" or "improviser" might help reframe our understanding of useful Drupal contributions. If that sounds interesting (or not), yes please come to my talk or introduce yourself at DrupalCon next week.

In my experience, DrupalCon can create both excitement and frustration, so my goal this year is to let go of expectations. I don't find them helpful. Drupal is a process, and for some, an ongoing struggle. But just because our community traffics in 1s and 0s doesn't mean I should transfer binary thinking to the people with whom I interact. I'm not suggesting that everyone prance around the conference as if everything that everyone does is acceptable. I have witnessed some seriously unwelcome behavior at past DrupalCons that needed to be addressed. The Drupal community is awash with thoughtful people, and I frequently turn to them to help me continue to recognize my considerable privilege and bias. I can only hope that having a goal of helping other people while I'm at DrupalCon -- rather than focusing exclusively on maximizing the value of my expensive ticket -- will, in turn, improve my week.

While I recognize not everyone shares my worldview, I hope to improve my time at DrupalCon by recognizing my essentialist ideas and instead focus on the people and situations in front of me -- a bit more like I imagine I did at DrupalCon San Francisco.

Apr 04 2019
Apr 04

Mike and Matt talk to a conglomerate of Lullabots about their DrupalCon sessions on Thursday, April 11th.

This Episode's Guests

Helena McCabe

Thumbnail

Helena is a Drupal-loving front-end developer based out of Orlando, Florida who specializes in web accessibility.

Karen Stevenson

Thumbnail

Karen is one of Drupal's great pioneers, co-creating the Content Construction Kit (CCK) which has become Field UI, part of Drupal core.

Wes Ruvalcaba

Thumbnail

Wes is a designer turned front-end dev with a strong eye for UX.

Putra Bonaccorsi

Thumbnail

An expert in content management systems like Drupal, Putra Bonaccorsi creates dynamic, user friendly sites and applications.

Mike Herchel

Thumbnail

Front-end Developer, community organizer, Drupal lover, and astronomy enthusiast

Matt Westgate

Thumbnail

Matt Westgate is Lullabot's CEO.

April Sides

Thumbnail

April Sides is a seasoned Drupal Developer who is passionate about community building.

Sally Young

Sally Young

Senior Technical Architect working across the full-stack and specialising in decoupled architectures. Core JavaScript maintainer for Drupal, as well as leading the JavaScript Modernization Initiative.

Mar 28 2019
Mar 28

Matt and Mike talk with a bevy of bots who are presenting on Wednesday at DrupalCon Seattle.

We will be releasing a podcast detailing Lullabot's Thursday DrupalCon sessions next Thursday, April 4th, 2019.

Can you edit that out? I really regret saying that. — Greg Dunlap

This Episode's Guests

Karen Stevenson

Thumbnail

Karen is one of Drupal's great pioneers, co-creating the Content Construction Kit (CCK) which has become Field UI, part of Drupal core.

Greg Dunlap

Thumbnail

Greg has been involved with Drupal for 8+ years, specializing in configuration management and deployment issues.

Ezequiel Vázquez

Thumbnail

Zequi is a developer specializing in back-end work, with a strong background in DevOps, a love for automation, and a huge passion for information security.

Joe Shindelar

Thumbnail

Joe Shindelar is now the Lead Developer and Lead Trainer at Drupalize.Me (launched by Lullabot and now an Osio Labs company).

Marc Drummond

Thumbnail

Senior front-end developer passionate about responsive web design, web standards, accessibility, Drupal and more. Let's make the web great for everyone!

Sally Young

Sally Young

Senior Technical Architect working across the full-stack and specialising in decoupled architectures. Core JavaScript maintainer for Drupal, as well as leading the JavaScript Modernization Initiative.

Mar 26 2019
Mar 26

As a user experience designer, I have focused most of my career on designing for adults. When the opportunity arose to help redesign a product for kids, I jumped at the chance to learn something new. Switching focus from serving adult audiences to children proved to be a challenge, though; I'm not a parent, and I don't usually interact with kids on a daily basis.

Nevertheless, I was excited about the challenge and was confident that I could learn through research and observation. With the help of my young nieces and a stack of bookmarked articles, I began my journey to guide the redesign of a new product. Here’s what I learned along the way.

How kids think

I kicked off my research by observing how my two nieces, ages nine and seven, interacted with their favorite apps and websites. To my surprise, they worked quickly to solve problems and accomplish certain goals. Both nieces would often click on items just to see what would happen. They also weren’t shy about taking as much time as they needed to get familiar with a new app or website.

“Do you know what that icon means?” I asked one niece while pointing to what looked like a pencil.

“I think I can draw with it but let me check…” She clicked on the pencil and it indeed was a drawing tool.

I noted that they were both explorers and took the time to study and understand the function of a component. It wasn’t a big deal if they didn’t initially understand what something was or what it would do. They would simply click and find out.

After making some initial notes of my observations, I then began to dig into research articles about child development and designing apps and websites for kids. I noted some significant findings that would later apply to our product’s new design.

Differences between age groups

The age group for children can range from 1-18 years. That’s a big gap! Even when we focus specifically on grade school children (typically ages 6-11 years), there is still a significant gap in reading levels, comprehension, and tastes. Based on Jean Piaget’s Theory of Cognitive Development, I segmented out the grade school age group and documented these differences to be considered when redesigning the new product:

6 years (kindergarten) - Children in this age group are in the pre-operational stage. They can think in terms of symbols and images, but reading comprehension and reasoning are developing. This age group will have a minimal vocabulary and might still be learning to read and write the alphabet. We will need to use simple words and rely heavily on symbols, icons, and pictures to communicate meaning. Actions, feedback, and navigation should not have multiple tiers since children in this age group generally focus on one thing at a time.

7-9 years (1st, 2nd, and 3rd grade) - Children in this age group are in the concrete operational stage. They begin having logical thoughts and can work out things in their minds. Reading comprehension is still developing, but they have an expanded vocabulary and understand more complex sentences and words. Symbols, icons, and pictures should be used as a primary way to communicate meaning, as children tend to ignore copy.

9-11 years (3rd, 4th, and 5th grade) - Children in this age group start to become experts at touchscreens and understand basic user interface patterns. They use logical thought, and they have a more mature reading comprehension and vocabulary. Symbols, icons, and pictures should still be used to keep this age group engaged in the content; however, they rely less on these resources to communicate meaning. Basic hierarchical tiers can be used for feedback, navigation, and actions as children in this age group are able to focus on more than one thing at a time.

To help make our redesign more focused and appealing to a specific age group, we decided to narrow our target audience to 9-11 years old. We would still need to consider the 6-9-year-old age group to ensure they would still be able to use the product.

Design/UX recommendations

Armed with the above research, we began to explore design patterns. Our team would often run quick ideas past a group of children that were mostly in the target age group. Librarians and teachers were also included in our feedback loop as the design progressed.

We learned a lot through feedback and testing (see the Getting feedback section below to learn more):

Color

Considering our targeted age groups, color can be used to help engage and communicate. Younger children tend to be attracted to bright, primary colors because their eyesight is still developing. Bright, contrasting colors stand out in their field of vision. Depending on the age group, basic primary colors may feel too young. You might want to refine the color palette and experiment with deeper, muted colors.

We tested several color palettes with our target age group. To our surprise, all but one child preferred the more muted color palette. The tastes of our 9-11-year-olds were more mature than we initially guessed. Some preferred the muted color palette because it felt more mature. Others pointed out that the color palette reminded them of an app or website that they already use.

Text

Ever notice that picture books and easy readers tend to have much larger text than an adult book? That’s because the eyesight of grade school children is still developing. It can be difficult for them to see and process smaller type. Increasing the font size and line height of copy ensures that it’s easy to read for kids, especially on a screen. When we conducted usability tests, we found that we needed to make the copy much bigger than we had initially guessed. To accommodate the different development stages for a wide age range of children, we also included a set of buttons that they could use to increase or decrease the font size.

Simplifying vocabulary and replacing text descriptions with imagery is another way to help kids understand the content. Our rule of thumb was the simpler, the better. We removed text descriptions and replaced them with icons when we could. If we thought the icon wouldn’t be recognizable by our age group, we’d include a text description below it. When in doubt, run a quick usability test to see if children understand what the icon or description means. When we ran a couple of usability tests, it surprised us what kids didn’t understand. We had to rethink labeling such as “more search options” and “link” because our audience wasn’t sure what these labels meant. Instructions were also simplified while providing a step-by-step process to help guide them through actions such as printing or downloading.

Layout and Functionality

Since mental models and cognitive reasoning are still developing in the grade school age, we simplified the layout and site functionality when possible. This would help our audience focus on the task at hand, and eliminate any confusion.

We took a minimal approach to navigation and information architecture. Initially, we used four navigational links, though eventually they were reduced to one.  We also removed unnecessary fluff, like marketing messages and taglines. When we conducted user tests, we found that kids rarely read the mission statement or marketing messages. We took the marketing and mission statements and placed them on a page specifically for parents so they could get more information about the product if they needed.

Important links became buttons because kids noticed and interacted with them more often than links. To kids, buttons definitely meant that something could be clicked on, whereas a hyperlink was questionable. We also created more noticeable feedback for action items by adding animation.

Getting feedback

We tend to be distracted by the voices in our own heads telling us what the design should look like.

Michael Bierut, Graphic Designer & Design Critic

During the process of our redesign, our team tried to get as much feedback as we could from our audience at significant points in the project. Would kids understand the actions they could take on a particular page? Let’s ask our user group! We recruited a handful of children in order to conduct informal usability testing. Most were related to the people involved in the project, but we felt that some testing for feedback was better than nothing.

We kept the tasks and questions simple. To help keep kids engaged, we conducted a series of smaller tests instead of one large test. Parents were encouraged to help moderate (we found kids were more comfortable with them than a stranger), and we offered them brief moderation tips to help them feel confident when conducting the tests. We also decided to use a more refined, completed prototype for testing since kids wanted to click on everything, even if it wasn’t part of the test.

Summary

When given the opportunity to design for an unfamiliar user group, I highly recommend that you accept the challenge to learn something new. Designing for kids was definitely a challenge, and it took a bit to get used to designing for an audience outside of my comfort zone. Research and usability testing played a significant role in the success of our project, and even though it hasn’t been officially released yet, we’ve received fantastic feedback on the completed prototype.

There’s a lot more to consider when it comes to usability testing with kids, like ethics, forming the right questions, and creating the proper prototypes. Unfortunately, I can't cover it all in this article, but if this is a topic you’re interested in learning more about, I highly recommend digging deeper into the resources available on designing and testing with children.

Mar 11 2019
Mar 11

Jim is a Drupal developer at Oak Ridge National Laboratory and has presented several sessions at Drupal camps on a variety of subjects, such as Drupal 8 migration, theming, front end performance, using images in responsive web design, Barracuda-Octopus-Aegir, Panels, and Twitter.

Mar 08 2019
Mar 08

Mike and Matt gather the Lullabot team around the campfire to discuss real world data migrations into Drupal, and everything that goes into it.

Mar 06 2019
Mar 06

What do you do when you accomplish your dream? If you’re like Addison Berry, you make another dream.

In 2010, we launched Drupalize.Me to teach people how to build Drupal websites. It was a natural offshoot of the workshops and conferences Lullabot ran back in the day (Remember Do It With Drupal, anyone?). I won’t try to summarize the last nine years, but with the help of an incredibly talented and dedicated team, combined with passionate customers and a singular mission, Drupalize.Me has become a premier destination for learning Drupal.

Lullabot Education, the company behind Drupalize.Me, became its own business entity in 2016 with bigger plans. They realized it was more than Drupal that held them together: it was open source and the communities that it creates. With imposter syndrome at full tilt, the team began attending React, Gatsby and Node conferences to connect and learn more. They started contributing back by giving presentations, committing code, and hiring new team members like Jon Church to focus on Node. Speaking as someone who serves on the board, I couldn’t be more proud of the effort the team has taken to be intentional with their endeavors.

Osio Labs: Open Source Inside and Out

Lullabot Education is now Osio Labs. The name change reflects a commitment to fostering and growing other communities through open-source contribution, including Drupal. Drupal is not going away. If anything, they hope to amplify the experience for everyone—to have these communities eventually grow and support one another.

I’ve known Addi for 13 years now. She attended the first ever Lullabot workshop in 2006 in Washington D.C. I remember her then as the brightest star in the room. Eventually, Lullabot had a job posting, she applied, and we instantly hired her. I've had the honor of watching her grow into the leader she’s become, and I couldn’t be more proud.

By branching out and trying to be more to the world, Osio Labs is sharing more of Lullabot’s story too. Both companies believe that sharing makes us stronger. It’s what we’re passionate about and how we empower others. It’s why we show up to work every morning. Or, in our case, as employees of a distributed company, shuffle to our desks in our bunny slippers.

Be sure to follow Osio Labs on Twitter to stay up to date on their latest projects.

Matt Westgate

Thumbnail

Matt Westgate is Lullabot's CEO.

Mar 04 2019
Mar 04

We're excited to announce that 15 Lullabots will be speaking at DrupalCon Seattle! From presentations to panel discussions, we're looking forward to sharing insights and good conversation with our fellow Drupalers. Get ready for mass Starbucks consumption and the following Lullabot sessions. And yes, we will be hosting a party in case you're wondering. Stay tuned for more details!

Karen Stevenson, Director of Technology

Karen will talk about the challenges of the original Drupal AMP architecture, changes in the new branch, and some big goals for the future of the project.

Zequi Vázquez, Developer

Zequi will explore Drupal Core vulnerabilities, SA-CORE-2014-005 and SA-CORE-2018-7600, by discussing the logic behind them, why they present a big risk to a Drupal site, and how the patches work to prevent a successful exploitation.

Sally Young, Senior Technical Architect (with Matthew Grill, Senior JavaScript Engineer at Acquia & Daniel Wehner, Senior Drupal Developer at Chapter Three)

The conversation around decoupled Drupal has moved past whether or not to decouple and on to common problems and best practices. Sally, Matthew, and Daniel will talk about why the Drupal Admin UI team went with a fully decoupled approach as well as common approaches to routing, fetching data, managing state with autosave and some level of extensibility.

Sally Young, Senior Technical Architect (with Lauri Eskola, Software Engineer in OCTO at Acquia; Matthew Grill, Senior JavaScript Engineer at Acquia; & Daniel Wehner, Senior Drupal Developer at Chapter Three)

The Admin UI & JavaScript Modernisation initiative is planning a re-imagined content authoring and site administration experience in Drupal built on modern JavaScript foundations. This session will provide the latest updates and a discussion on what is currently in the works in hopes of getting your valuable feedback.

Greg Dunlap, Senior Digital Strategist

Greg will take you on a tour of the set of tools we use at Lullabot to create predictable and repeatable content inventories and audits for large-scale enterprise websites. You will leave with a powerful toolkit and a deeper understanding of how you use them and why.

Mike Herchel, Senior Front-end Developer

If you're annoyed by slow websites, Mike will take you on a deep dive into modern web performance. During this 90 minute session, you will get hands-on experience on how to identify and fix performance bottlenecks in your website and web app.

Matt Westgate, CEO & Co-founder

Your DevOps practice is not sustainable if you haven't implemented its culture first. Matt will take you through research conducted on highly effective teams to better understand the importance of culture and give you three steps you can take to create a cultural shift in your DevOps practice. 

April Sides, Developer

Life is too short to work for an employer with whom you do not share common values or who does not fit your needs. April will give you tips and insights on how to evaluate your employer and know when it's time to fire them. She'll also talk about how to evaluate a potential employer and prepare for an interview in a way that helps you find the right match.

Karen Stevenson, Director of Technology; Putra Bonaccorsi, Senior Front-end Developer; Wes Ruvalcaba, Senior Front-end Developer; & Ellie Fanning, Head of Marketing

Karen, Mike, Wes, and team built a soon-to-be-launched Drupal 8 version of Lullabot.com as Layout Builder was rolling out in core. With the goal of giving our non-technical Head of Marketing total control of the site, lessons were learned and successes achieved. Find out what those were and also learn about the new contrib module Views Layout they created.

Matthew Tift, Senior Developer

The words "rockstar" and "rock star" show up around 500 times on Drupal.org. Matthew explores how the language we use in the Drupal community affects behavior and how to negotiate these concepts in a skillful and friendly manner.

Helena McCabe, Senior Front-end Developer (with Carie Fisher, Sr. Accessibility Instructor and Dev at Deque)

Helena and Carie will examine how web accessibility affects different personas within the disability community and how you can make your digital efforts more inclusive with these valuable insights.

Marc Drummond, Senior Front-end Developer; & Greg Dunlap, Senior Digital Strategist (with Fatima Sarah Khalid, Mentor at Drupal Diversity & Inclusion Contribution Team; Tara King, Project lead at Drupal Diversity & Inclusion Contribution Team; & Alanna Burke, Drupal Engineer at Kanopi Studios)

Open source has the potential to transform society, but Drupal does not currently represent the diversity of the world around us. These members of the Drupal Diversity & Inclusion (DDI) group will discuss the state of Drupal diversity, why it's important, and updates on their efforts.

Mateu Aguiló Bosch, Senior Developer (with Wim Leers, Principal Software Engineer in OCTO at Acquia & Gabe Sullice, Sr. Software Engineer, Acquia Drupal Acceleration Team at Acquia)

Mateu and his fellow API-first Initiative maintainers will share updates and goals, lessons and challenges, and discuss why they're pushing for inclusion into Drupal core. They give candy to those who participate in the conversation as an added bonus!

Jeff Eaton, Senior Digital Strategist

Personalization has become quite the buzzword, but the reality in the trenches rarely lives up to the promise of well-polished vendor demos. Eaton will help preserve your sanity by guiding you through the steps you should take before launching a personalization initiative or purchasing a shiny new product. 

Also, from our sister company, Drupalize.Me, don't miss this session presented by Joe Shindelar:

Joe will discuss how Gatsby and Drupal work together to build decoupled applications, why Gatsby is great for static sites, and how to handle private content and other personalization within a decoupled application. Find out what possibilities exist and how you can get started.


 

Photo by Timothy Eberly on Unsplash

Mar 01 2019
Mar 01

Our annual company retreat happens at the beginning of each year. It’s a perfect time to think of the intentions we want to set in our personal and professional lives. This year’s retreat was once again at the quiet and unassuming Smoke Tree Ranch in Palm Springs, California. It’s a destination that now feels like home for Lullabot, with this year marking our fifth trip to the same place.

It used to be that we’d “maximize business value” by having all-day company meetings while at a retreat. It was exhausting, and the whole event felt like a work marathon. We didn’t leave feeling rejuvenated and inspired. While we’d accomplish a lot, we also tended to burn out and spend the following week catching our breath. So over the years, we’ve shifted the role the retreat plays for us. The goal in recent years (in this order) has been to:

  1. Relax. Making space for creative thought and problem-solving is essential. We need to recharge because our work is intellectually demanding. We also have clients, families, friends and even animal companions that depend on us to be our best when we return. It makes no sense to burn out from a retreat.
  2. Have fun. Forming new bonds and friendships with each other. Building up our emotional reserves, celebrating the year and lifting each other up. We’re only together for 5 out of 365 days in a year, so let’s make them count.
  3. Work on Lullabot. Using the time to collaborate openly on how we work and looking for ways we can improve what we do for our clients and the world.

I’ll admit. It’s still easy to forget to take recharge and self-care time when we don’t get to see each other very often. But we’re doing a better job of building it into the schedule and having the program be a reflection of these goals. As one of our team members put it:

I continue to be impressed by how much closer I feel to my co-workers than I ever did working for a co-located company. Our remoteness has the effect of making us feel comfortable (we're in our own homes and offices) enough to be ourselves and thus are inclined to be vulnerable, speak our mind, tell a joke, share what we love with others. The team retreat, then, is our one week a year to be the way this company has inspired us to be in person. It stands as a testament to the efficacy of our core values, our communication tools, our transparency, that we can effortlessly have fun and feel relaxed with people we rarely see. The importance of this, of feeling human in the workplace, cannot be understated. 

Hunter MacDermut (after his first Lullabot retreat) 

This year we had 53 people with us physically at the retreat, and one person with us remotely (she’s about to have a baby!). We were lucky enough to be joined by Osio Labs (formerly Lullabot Education) who co-retreated with us this year. Both companies had their individual retreats in the morning, and we were together for meals and evening events. I couldn’t be more proud of the Osio Labs team and the work they’ve done on Drupalize.Me as they begin to contribute to the Gatsby and React communities.

Each retreat starts the same way, with a State of the Company talk. This year’s presentation was almost entirely forward-looking, with the recap of 2018 becoming a separate presentation led by Seth Brown. Since we practice Open Books Management at Lullabot, the team tends to know our financials and KPIs well before we get to the retreat. So the 2018 recap serves instead as a “highlights and leave-behinds” conversation with the team.

My State of the Company presentation was emotional and at times visceral. I spoke my heart. I shared with the team my vision for Lullabot in 2022 and 2026. You see, several years ago I set an intention for 2022. I picked 2022 because alliteration is fun and it’s less than five years in the future. I'd like our company to strive for employee ownership by 2022. We've been slowly working towards this for a while, and I want to keep going.

And 2026 will be our 20th company anniversary, so I love the idea of setting intention around an incredible moment worth celebrating. I won't go into details, but take a look at my State of the Company presentation if you would like to learn what’s driving us to do the work we do. As most founders do, I spend a great deal of time thinking about the company I’m creating and why I’m building it. And the first time an employee tells you "I want to retire here," it changes your life, and your role forever.

In terms of how we did in 2018, the first half of the year was difficult, and we did not hit our revenue projections. And yet, 2018 ended up being our highest revenue year so far. We’ve been fortunate enough to welcome seven new team members to Lullabot. We’re so glad to have them here. We also joined our clients in the launch of several new websites this year, including sites for IBM, Pantheon, Carnegie Mellon University, Edutopia and Newsbank. Forty-six percent of our clients were new, 37% of our clients were continuing clients from the previous year, and 17% were clients we had worked with in the past who came back to work with us again.

What does the overall schedule of the retreat end up looking like? Here’s the general format:

  • 9 am-11:30 am / Company presentations and workshops.
  • 11:30 am-12 pm / Client time. We take the week off from client work, but emergencies happen, and new prospects reach out, so we set aside a little time each day to keep our inboxes from overwhelming us in the background.
  • 1 pm-3 pm / Free-time. Take a nap, go swimming, play volleyball, chill.
  • 3 pm-5 pm / Open Spaces & BOFs.
  • 7 pm-9 pm / Evening activities

In the mornings, we talk about company things. And by company things, I mean client success stories, new lines of business, our engineering (and departmental) values, and things we’d like to leave behind in the year we just had. In recent years, workshopping has become an integral part of our onsite time together. That work is led by our strategy and design team. Sometimes at the retreat, we’ll try out new workshop ideas on ourselves for feedback and improvement before we roll it out to our clients. Workshopping is one of the best uses of onsite time together because strategy and brainstorming are harder to do as a distributed company.

The evening events are planned and organized by the team. And pretty much every team member led or participated in an event at the retreat. This year we had Ignite talks, storytelling, talent shows, awards show for the best moments of 2018 organized by Helena (complete with our own band, The Ternary Operators), trivia night, and we celebrated the launch of the new lullabot.com website with a party on the last night. Our security team put on a capture-the-flag challenge that lasted all week. One group received a perfect score, while another group discovered a root exploit and acquired a beyond-perfect score. Ahem. Chris Albrecht organized a 5K, 3K, and 1K run-or-walk. And Wes Ruvalcaba led our board game night, as a casual way to relax on arrival day.

One of my favorite experiences of the retreat happens on the last full day. We use that time to find a way to give back to the community. In the past, we’ve built bicycles for the Boys and Girls club, cleaned up the Pacific Crest Trail, and volunteered at an animal shelter. This year there were two activities organized by Esther Lee. Half of us went back to the local animal shelter to build cat beds and prepare treats for the animals. The other half of us went to the desert and pulled invasive plant species with Friends of the Desert Mountains. I love to find ways to connect with the team outside of work while also helping others. Memories like that help keep our hearts full.

The feedback so far from the retreat is that we delivered better on our intention to have it become an experience of renewal and rejuvenation. That's phenomenal, and much of that credit goes to Haley who organizes the event for us. I hope we also take away from this experience my call-to-action for 2019, a call for leadership. And yes, "leadership" can be a loaded word with varied meanings. There are three aspects of leadership I want us to embody: courage, inclusion, and giving.

Courage is having the authenticity to speak up. Not to put someone else down, but to improve the whole. Inclusion is to bring others with you. To amplify what you have received. And giving is sharing your time and abilities with others to make something greater. If there’s one thing I want this retreat to be remembered for, it’s to work towards a culture which fosters and practices this kind of leadership. It's the world I want to participate in. It’s a community in which I thrive. 

And you? What do you like most about your company retreat? What event or experience do you most look forward to and why?

Feb 27 2019
Feb 27

Introduction

Anyone who’s built a React app of any appreciable complexity knows how challenging it can be to select and configure the multitude of libraries you’ll need to make it fast and performant. Gatsby, a static-site generator built with React and GraphQL, alleviates these pain points while also providing a straightforward way to consume data from an API. On the back-end, we can leverage Drupal’s content modeling, creation, and editing tools along with the JSON:API module to serve that content to our Gatsby front-end. Together, Gatsby and Drupal form a powerful combination that makes an excellent case for decoupling your next project.

What You’ll Build and Learn

In this article, we’ll take a hands-on look at using Drupal, the JSON:API module, and Gatsby to build a simple blog. We’ll create an index page listing all of our blog posts with a brief snippet of the article, and a page for each post with the full text. By the end, you should feel comfortable pulling JSON into a Gatsby site and displaying it on the page.

  • Drupal 8 - Drupal 8 will be used locally to model and manage our content. We’ll take advantage of the article content type that ships with the standard profile and the convenient admin interface for content creation.
  • JSON:API - This Drupal 8 module will take care of exporting our content in well-formed JSON.
  • Gatsby - We’ll use Gatsby to build our front-end and consume content from Drupal.

Prerequisites

  • Familiarity with creating and navigating a fresh install of Drupal 8, including creating content and adding/enabling modules
  • NPM and Node 10+ installed
  • Basic understanding of React

Step 1: Drupal 8 + JSON:API

Let’s begin with a fresh install of Drupal 8.

We’ll create three articles so we’ll have some stuff to work with.

Time to install the JSON:API module. Head over to https://www.drupal.org/project/jsonapi and copy a link to the module download. Once installed, enable both the JSON:API and Serialization modules and you’re done. We now automatically have endpoints created and available for our two content types. Let’s verify by hitting one of them in Postman at this URL:

http://localhost:8888/gatsbyblogdrupal/jsonapi/node/article

As the URL indicates, we’re accessing the JSON output for nodes of type article.
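
If you're new to JSON:API, the output is a standard JSON:API document: a data array of resource objects, each with a type, an id, and an attributes object holding the node's fields. Trimmed down to the fields we care about for this tutorial (your IDs, values, and extra keys such as links will differ), a response looks roughly like this:

{
  "jsonapi": {
    "version": "1.0"
  },
  "data": [
    {
      "type": "node--article",
      "id": "e2e62d88-7c5f-443d-9008-0dc5d79e1391",
      "attributes": {
        "title": "...",
        "created": "...",
        "body": {
          "value": "..."
        }
      }
    }
  ]
}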

We can see a list of all the articles returned. If we wanted to retrieve a specific article, we just need to append the node id to the end of the URL:

http://localhost:8888/gatsbyblogdrupal/jsonapi/node/article/e2e62d88-7c5f-443d-9008-0dc5d79e1391

And with that, we’re ready to move on to Gatsby.

Step 2: Gatsby

From your command line, navigate to a directory where you’d like to create your application. From here, install the Gatsby CLI by typing npm install --global gatsby-cli. You can find more detailed installation instructions at https://www.gatsbyjs.org/tutorial/part-zero/ if you need them.

Run gatsby new gatsby_blog to create the app, change into the new directory that’s generated, then run gatsby develop to start the dev server. Hitting http://localhost:8000 shows you the starter page and confirms you’re up and running properly.

Let’s take a quick tour of Gatsby.

  1. Open your project directory in your editor of choice and navigate to src/pages. You’ll see a file each for the index page (which you just saw above), an example of a second page, and a 404 page.
  2. Taking a look at index.js, you’ll see a <Layout> component wrapping some markup that you see on the index page.
  3. Make a change to the <h1> text here to see your site live reload.
  4. Near the bottom, there’s a <Link> component that points us to the second page.

You may notice that the big purple header you see in the browser isn’t shown in the index file. That bit is part of the <Layout> component. Navigate to src/components and open layout.js.

  1. About midway down the page, you’ll see a <Header> component with a siteTitle prop. It points to data.site.siteMetadata.title.
  2. Just above that, you’ll see a <StaticQuery> component, which appears to be doing something related to that site title. This is just a taste of how Gatsby makes use of GraphQL to manage app data. We’ll explore this more fully in the next section of this article. For now, take note that we’re accessing siteMetadata to fetch the site’s title.

Head over to gatsby-config.js to see where siteMetadata is set.

  1. Right at the top of gatsby-config.js we can see a big JS object is being exported. The first property in this object is siteMetadata, and within that, we can see the title property. Change the title property to “My Gatsby Blog” and save. Your site will hot reload and that purple header’s text will change to match the config (see the sketch below).
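
For reference, here's a minimal sketch of the top of gatsby-config.js after that change. The description and author values are whatever your starter generated; only the title matters here:

module.exports = {
  siteMetadata: {
    title: `My Gatsby Blog`,
    // description and author are left as generated by the starter
    description: `...`,
    author: `...`,
  },
  // the plugins array and the rest of the config continue below
}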

The plan is to have our index show the latest blog posts in reverse chronological order like a standard Drupal site would. We’ll show the post headline as a link, its image if it has one, and a short piece of the article. Clicking on the headline will take us to a full page view of the post. It will be a pretty barebones blog but should illustrate nicely how Gatsby and Drupal can work together.

Step 3: Gatsby with Real Data from Drupal

Gatsby uses GraphQL to pull in data from external sources and query for bits of it to use in your markup. Let’s take a look at how it works. When you ran gatsby develop earlier, you may have noticed a few lines that mention GraphQL.

Those lines say we can visit http://localhost:8000/___graphql to explore our site’s data and schema using the GraphiQL IDE. Let’s hit that URL and see what’s up.

On the left-hand side, we can write queries that auto-complete and see the result of the query on the right-hand side. I like to come to this interface, experiment with queries, then copy and paste that query right into my app. To see how this works, we’ll write a query to fetch the site’s title.

  1. Start by typing an opening curly brace. This will automatically create the closing brace.
  2. From here, type ctrl + space to bring up the auto-complete. This gives you a list of all the possible properties available to query. Typing anything will also bring up the auto-complete.
  3. Type the word “site”, then another set of curly braces. Ctrl + space again will show the auto-complete with a much smaller list of options. Here you’ll see siteMetadata.
  4. Type siteMetadata (or use auto-complete), followed by another set of curly braces.
  5. Type ctrl + space one more time to bring up the auto-complete, where we’ll see title as an option.
  6. Type “title” and then ctrl + enter (or the play button at the top) to run the query. On the right-hand side, we’ll see the result.

And there we have our site title. Take a look back to src/components/layout.js. You’ll see this exact query (with a little formatting) as the query prop of the <StaticQuery> component at the top. This is the approach we’ll use to build our queries when we start pulling in data from Drupal.
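
For reference, the finished query (the same one you'll find in layout.js) is just a few lines of GraphQL:

{
  site {
    siteMetadata {
      title
    }
  }
}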

If you have any experience with React apps and pulling in data, any method to which you’re already accustomed will still work inside a Gatsby app (it’s still React, after all). You can use AJAX, the fetch API, async/await, or any third-party library you like. However, when working with Drupal (or a number of other common data sources), there’s a simpler approach: source plugins. Head over to https://www.gatsbyjs.org/packages/gatsby-source-drupal/ for details about this plugin, which is designed to automatically pull data from Drupal via JSON:API.

From within your project directory, run npm install --save gatsby-source-drupal. This will make the package available to use in our config. Open gatsby-config.js and notice the property called plugins. There’s already one plugin defined: gatsby-source-filesystem. This is what’s allowing the app to access local files via GraphQL, like that astronaut image we saw on the index page earlier. We’ll follow a similar pattern to add the Drupal source plugin just below this one:

{
  resolve: `gatsby-source-drupal`,
  options: {
    baseUrl: `http://localhost:8888/gatsbyblogdrupal/`,
  },
},

We’ve created a new object inside the plugins array with two properties: resolve and options. We set resolve to gatsby-source-drupal (note the back-ticks instead of quotes) and set the baseUrl option to the path of our Drupal site. Now, restart the Gatsby server. During startup, you should see some new lines scroll by in the command line as Gatsby connects to Drupal and pulls in its data.

So, what’s happened? With the Drupal source plugin installed and configured, on startup Gatsby will fetch all the data available from Drupal, observe any GraphQL queries you’ve made inside your components, and apply the data. Pertinent to the site we’re building, it’ll make allNodeArticle available to query, which represents an array of all the article nodes on our site. Head back to your GraphiQL IDE (http://localhost:8000/___graphql) and check it out. Here’s what a query for the titles of all articles looks like:
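
{
  allNodeArticle {
    edges {
      node {
        title
      }
    }
  }
}

Run it and the result pane shows each article's title nested under data.allNodeArticle.edges. This is the same query we'll drop into our index page in the next step.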

Step 4: Querying Drupal from Gatsby

Now data from our Drupal site is available to our Gatsby app. We just need to insert some queries into our code. As of the release of Gatsby 2.0, there are two different ways of querying data with GraphQL. Let’s take a look at both options before deciding what’s best suited for our site.

Page Level Query

First is the page-level query. This method can only be used when the component represents a full page like our index.js, page-2.js, and 404.js files. For the page-level query, at the bottom of index.js, create a new variable called query that contains the full GraphQL query we looked at just a moment ago.

export const query = graphql`
  query {
    allNodeArticle {
      edges {
        node {
          title
        }
      }
    }
  }
`

Remember to import { graphql } from "gatsby" at the top of the file. Next, we need to make the data returned from this query available to our page component. Inside the parentheses for our <IndexPage> component, add the data variable like so:

const IndexPage = ({ data }) => ( ...

The data variable is now available for use in our component. Note that we’re wrapping it in curly braces, which is ES6 destructuring: we’re pulling the data property out of the props object passed to the component. Let’s output the title of our first blog post just to confirm everything’s working. Add the following to the markup of the index page component:

<p>{ data.allNodeArticle.edges[0].node.title }</p>

You should see the title of the blog post added to the page of your running Gatsby site. Unpacking this a bit:

  1. reach into the data variable for the array containing all articles
  2. each item in this array is considered an “edge” — a term originating from Relay
  3. grab the first post at index 0
  4. access its node property
  5. and finally its title

You just utilized real data from Drupal in your Gatsby app!

If there’s a case where you have a smaller, non-page-level component into which you’d like to pull data, there’s another method for doing so, called Static Query.

Static Queries

A Static Query is a component that accepts two props: a query and a render function. To accomplish the same outcome as our previous example, you’d wipe out everything currently inside your index page component and replace it with this:

const IndexPage = () => (
  <StaticQuery
    query={graphql`
      query {
        allNodeArticle {
          edges {
            node {
              title
            }
          }
        }
      }
    `}
    render={data => (
      <Layout>
        <SEO title="Home" keywords={[`gatsby`, `application`, `react`]} />
        <p>{ data.allNodeArticle.edges[0].node.title }</p>
        <h1>Hi people</h1>
        <p>Welcome to your new Gatsby site.</p>
        <p>Now go build something great.</p>
        <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
          <Image />
        </div>
        <Link to="/page-2/">Go to page 2</Link>
      </Layout>
    )}
  />
)

Remember to import { graphql, StaticQuery } from "gatsby" at the top of the file. Your page should reload automatically and show the same output as before. Using Static Query allows for more flexibility in how you pull in data. For the rest of this article, I’ll be using the first method since we only need to query data on the page level. The Gatsby documentation covers the differences between the two options in more detail.

Step 5: Building Out Our Pages

Time to start putting our Drupal data to use for real. We’ll start with the index page, where we want our blog listing.

First, we need to update our page query to also pull in a few more things we’ll need, including the body of each blog post, its created date, and its image:

allNodeArticle {
  edges {
    node {
      title
      body {
        value
      }
      created
      relationships {
        field_image {
          localFile {
            childImageSharp {
              fluid(maxWidth: 400, quality: 100) {
                ...GatsbyImageSharpFluid
              }
            }
          }
        }
      }
    }
  }
},

Working with images in Gatsby is an article all its own. The high-level explanation of all that stuff up there is:

  • field_image is filed under relationships, along with things like tags
  • Gatsby processes all images referenced in your site files, creating multiple versions of them to ensure the best quality to file size ratio at various screen sizes, which means we’re querying for a local file
  • childImageSharp is the result of some image transformation plugins that come configured by default; it packages up all the various quality/size options mentioned above
  • fluid represents a srcset of images to accommodate various screen sizes (more details here)
  • ...GatsbyImageSharpFluid is a query fragment, which comes from the GraphQL world (more details here)

Keep an eye on https://www.gatsbyjs.org/packages/gatsby-image/ for the latest documentation on using images in Gatsby. It’s a somewhat complicated subject, but if you play around in the GraphiQL IDE, you’ll find what you need. The end result is your images are transformed into various sizes to fit various breakpoints and are lazy-loaded with an attractive blur-up effect. As you’ll see below, we can use the <Img> component to insert these images into our page.

We’re now ready to map over all of our blog posts and spit out some markup:

{data.allNodeArticle.edges.map(edge => (
  <>
    <h3><Link to={ edge.node.id }>{ edge.node.title }</Link></h3>
    <small><em>{ new Date(edge.node.created).toLocaleDateString() }</em></small>
    <div style={{ maxWidth: `300px`, marginBottom: `1.45rem`, width: `100%` }}>
      <Img fluid={ edge.node.relationships.field_image.localFile.childImageSharp.fluid } />
    </div>
    <div dangerouslySetInnerHTML={{ __html: edge.node.body.value.split(' ').splice(0, 50).join(' ') + '...' }}></div>
  </>
))}

There’s a lot going on here, so let’s unpack it line by line:

  • line 1: in typical React fashion, we map over the individual article nodes
  • line 2: this is a React fragment, using shorthand syntax, it allows us to wrap several lines of markup without unnecessarily polluting it with wrapping divs
  • lines 3 & 4: here we access some values on each node
  • lines 5-7: here is our article image, wrapped in a div with CSS-in-JS styling to restrict overall width, then our <Img> component as mentioned above, more details on styling in Gatsby here
  • line 8: JSON:API provides the body content of our article as a string of HTML, so we can use React’s dangerouslySetInnerHTML prop to convert the string to markup, preserving whatever formatting we included when creating our content

Additionally, I’ve converted the created date string to a JavaScript Date object (formatting it with toLocaleDateString()) and truncated the article body content.

Nothing fancy, but it’s a good start. You may have noticed in the previous code snippet that our headline is linking to the article’s node id. Right now, that link is broken until we create pages for each node. Thankfully we don’t have to do this manually. Let’s take a look at how we can generate some pages automatically for each of our posts.

Head on over to gatsby-node.js and add the following:

exports.createPages = ({ graphql, actions }) => {
  return graphql(`
    {
     allNodeArticle {
       edges {
         node {
           id
         }
       }
     }
    }
  `
  ).then(result => {
   console.log(JSON.stringify(result, null, 4))
  })
}

Here we’re writing our own implementation of the createPages API. Gatsby will use this to generate pages for each node id. Stop and restart the development server to see the following output in your console:

"data": {
    "allNodeArticle": {
        "edges": [
            {
                "node": {
                    "id": "e2e62d88-7c5f-443d-9008-0dc5d79e1391"
                }
            },
            {
                "node": {
                    "id": "1058ebcb-c910-4127-a496-b808740e49a5"
                }
            },
            {
                "node": {
                    "id": "29326c97-c2bc-4161-bcd7-f1c4564343f2"
                }
            }
        ]
    }
}

Next, we’ll need a template for our posts. Create a file called src/templates/blog-post.js and paste in the following:

import React from "react"
import { graphql } from "gatsby"
import Layout from "../components/layout"
import Img from 'gatsby-image'

export default ({ data }) => {
  const post = data.nodeArticle
  return (
    <Layout>
      <div>
        <h1>{ post.title }</h1>
        <small><em>{ new Date(post.created).toLocaleDateString() }</em></small>
        <div style={{ maxWidth: `900px`, marginBottom: `1.45rem`, width: `100%` }}>
          <Img fluid={ post.relationships.field_image.localFile.childImageSharp.fluid } />
        </div>
        <div dangerouslySetInnerHTML={{ __html: post.body.value }}></div>
      </div>
    </Layout>
  )
}

export const query = graphql`
  query($id: String!) {
    nodeArticle(id: { eq: $id }) {
      title
      body {
        value
      }
      created
      relationships {
        field_image {
          localFile {
            childImageSharp {
              fluid(maxWidth: 400, quality: 100) {
                ...GatsbyImageSharpFluid
              }
            }
          }
        }
      }
    }
  }
`

Notice that we’re querying for a single article node now, passing in an id as an argument. Otherwise, it looks a lot like our index page. You’ll see how this template gets access to that id variable in the next snippet. With a template in place, we can update gatsby-node.js to use it when generating pages:

const path = require(`path`)

exports.createPages = ({ graphql, actions }) => {
  const { createPage } = actions
  return graphql(`
    {
     allNodeArticle {
       edges {
         node {
           id
         }
       }
     }
    }
  `
  ).then(result => {
    result.data.allNodeArticle.edges.forEach(({ node }) => {
      createPage({
        path: node.id,
        component: path.resolve(`./src/templates/blog-post.js`),
        context: {
          id: node.id,
        },
      })
    })
  })
}

We’re now passing each node’s id as a context variable to our template, which enables us to query that specific node and access its fields. Now the headline links in our index will link to a full page view of the post. Give it a try.

There’s plenty left we could do to enhance the site, like adding some of our own styling, finding a friendlier date format, adding alt text to images, using slugs in place of ids in URLs, etc. But as of now, we have a functioning blog powered by data directly from Drupal. The best part is, we don’t have to worry at all about setting up a server, installing Drupal, and managing that whole side of things.

Step 6: Creating A Build and Deploying to GitHub Pages

In this final step, we’re ready to generate a build of the site we’ve developed and deploy it for free to GitHub Pages. Additionally, we’ll look at how we can streamline the process so updating our deployed site when new content is added is fast and easy.

We’ll start with a couple more Gatsby commands:

  • gatsby build - this creates a build of your site, complete with static files inside the public directory
  • gatsby serve - this allows you to preview your built site at http://localhost:9000/

If you haven’t yet, run git init in your project directory to initialize a new repo. To make deployment to GitHub Pages a snap, we’ll take advantage of a handy package called gh-pages. Run npm install --save-dev gh-pages to install, then we’ll add a script to our package.json file:

"scripts": {
  ...,
  "deploy": "gatsby build --prefix-paths && gh-pages -d public"
},

GitHub Pages serves project sites from a sub-path named after your repo, so the deployed site will live inside a directory named for it. The --prefix-paths flag tells Gatsby to build its internal links with that prefix; we just need to set the prefix in our gatsby-config.js:

module.exports = {
  ...
  pathPrefix: "/gatsby_blog",
}

Head on over to GitHub and create yourself a new repository called gatsby_blog. Back on your command line, run git remote add origin http://github.com/username/gatsby_blog.git, substituting your own GitHub username.

Once that’s done, run npm run deploy and wait for the build and deploy to complete. If everything went correctly, you should be able to visit your live blog at http://username.github.io/gatsby_blog/. Congrats!

When you need to update your site’s content, simply run your deployment script again. Because Gatsby pulls content from Drupal at build time, new or edited Drupal content only shows up on the deployed site after a rebuild.

Conclusion

React is a powerful and flexible library for building dynamic websites, but it’s often only one of many libraries you’ll end up using to reach your goal. Gatsby gathers up everything you’ll likely need and handles all the tricky wiring for you, vastly improving the speed and quality of your site as well as the overall developer experience. Pairing it with Drupal’s excellent content management tools and the zero-config JSON:API module makes many aspects of decoupling your site much easier. This hybrid approach may be just what you need for your next project.

Further Reading

Feb 25 2019
Feb 25

Matthew Tift talks with James Sansbury and Matt Westgate about the history of Lullabot, building a team of Drupal experts, and moving away from the phrase "rock star." Ideas about "rock stars" can prevent people from applying to job postings, cause existing team members to feel inadequate, or encourage an attitude that doesn't work well in a client services setting. Rather than criticize past uses of this phrase, we talk about the effects of this phrase on behavior.

Feb 25 2019
Feb 25

EvolvingWeb Co-Founder Suzanne Dergacheva spills on why she recently joined the Drupal Association, what's happening with Drupal in Montreal, and the Oboe.

Feb 21 2019
Feb 21

Drupal has a great reputation as a CMS with excellent security standards and a 30+ member security team to back it up. For some Drupal sites, we must do more than just keep up-to-date with each and every security release. A Drupal site with private and confidential data brings with it some unique risks. Not only do you want to keep your site accessible to you and the site’s users, but you also cannot afford to have private data stolen. This article provides a checklist to ensure the sensitive data on your site is secure.

Keeping a Drupal 8 site secure requires balancing various needs, such as performance, convenience, accessibility, and resources. No matter how many precautions we take, we can never guarantee a 100% secure site. For instance, this article does not cover vulnerabilities beyond Drupal, such as physical vulnerabilities or team members using unsafe hardware, software, or networks. These suggestions also do not ensure General Data Protection Regulation (GDPR) compliance, a complex subject that is both important to consider and beyond the scope of this checklist.

Rather than systems administrators well-versed in the minutiae of securing a web server, the list below targets Drupal developers who want to advise their clients or employers with a reasonable list of recommendations and implement common-sense precautions. The checklist below should be understood as a thorough, though not exhaustive, guide to securing your Drupal 8 site that contains private data, from development to launch and after.

The information below comes from various sources, including the security recommendations on Drupal.org, articles from Drupal Planet that discuss security, a Drupal security class I took with Gregg Knaddison and his 2009 book Cracking Drupal, a review of numerous internal Lullabot documents regarding security, and other sources. This list was reviewed by Lullabot's security team. That said, for more comprehensive content and considerations, I recommend learning more about securing your site on Drupal.org and other security sources.

Infrastructure

  • Configure all servers and environments to use HTTPS.
  • Install the site following the Drupal guidelines to secure file permissions and ownership (using the fix-permissions.sh included with the guidelines is a quick way to do this).
  • Hide important core files that may allow attackers to identify the installed version (the point release) of Drupal, install new sites, update existing sites, or perform maintenance tasks. For example, when using Apache, add something like this to your .htaccess configuration file:
<FilesMatch "(MAINTAINERS|INSTALL|INSTALL.mysql|CHANGELOG).txt">
  Order deny,allow
  Deny from all
  # Replace 127.0.0.1 with your own IP address or domain
  Allow from 127.0.0.1
</FilesMatch>
<FilesMatch "(authorize|cron|install|update).php">
  Order deny,allow
  Deny from all
  # Replace 127.0.0.1 with your own IP address or domain
  Allow from 127.0.0.1
</FilesMatch>

  • In settings.php (see the consolidated sketch after this list):
    • Configure a hard-to-guess table name prefix (e.g. 5d3R_) in the $databases array to deter SQL injections (this suggestion comes from KeyCDN).
    • Check that $update_free_access = FALSE; to further prevent using the update.php script.
    • Configure trusted host patterns (e.g. $settings['trusted_host_patterns'] = ['^www\.example\.com$'];).
    • In each environment (development, staging, and production) configure a private files directory located outside of the web root.
  • If you are running MariaDB 10.1.4 or greater with an XtraDB or InnoDB storage engine on your own hardware (as opposed to a hosting provider such as Linode or AWS), enable data-at-rest encryption.
  • Establish a regular process for code, files, and database off-site backup using an encryption tool such as GPG or a service such as NodeSquirrel.
  • If a VPN is available, restrict access to the administrative parts of the site to only local network/VPN users by blocking access to any pages with “admin” in the URL.
  • Note that these recommendations also apply to Docker containers, which often run everything as root out of the box. Please be aware that the Docker Community -- not the Drupal Community or the Drupal Security Team -- maintains the official Docker images for Drupal.
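
To make the settings.php items above concrete, here is a minimal sketch rather than a complete file; the table prefix, host pattern, and private files path are example values from the list above and should be adjusted for your own site and environments:

$databases['default']['default'] = [
  // ...your existing connection details stay here...
  'prefix' => '5d3R_',
];

// Prevent update.php from being run by users without proper permissions.
$update_free_access = FALSE;

// Only respond to requests whose Host header matches a trusted pattern.
$settings['trusted_host_patterns'] = ['^www\.example\.com$'];

// Keep private files in a directory outside of the web root
// (the exact path will differ in each environment).
$settings['file_private_path'] = '../private';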

Development: Pre-Launch

  • Before launching the site, calibrate security expectations and effort. Consider questions such as, who are the past attackers? Who are the most vulnerable users of the site? Where are the weaknesses in the system design that are "by design"? What are the costs of a breach? This discussion might be a good project kickoff topic.
  • Establish a policy to stay up-to-date with Drupal security advisories and announcements. Subscribe via RSS or email, follow @drupalsecurity on Twitter, or create an IRC bot that automatically posts security updates. Use whatever method works best for your team.
  • Create a process for keeping modules up to date that defines how and when security updates are applied and considers the security risk level. Drupal security advisories are based on the NIST Common Misuse Scoring System (NISTIR 7864), so advisories marked "Confidentiality impact" (CI) are especially crucial for sites with private and confidential data.
  • Establish a process for handling zero-day vulnerabilities.
  • Review content types to ensure that all sensitive data, such as user photos and other confidential files, is uploaded as private files.
  • Implement one or more of the Drupal modules for password management. The United States National Institute of Standards and Technology (NIST) offers detailed guidelines about passwords. Here are some policies that are easy to implement using submodules of the Password Policy module:
    • At least 8 characters (Password Character Length Policy module)
    • Maximum of 2 consecutive identical characters (Password Consecutive Characters Policy module)
    • Password cannot match username (Password Username Policy)
    • Password cannot be reused (Password Policy History)
  • Consider having your organization standardize on a password manager and implement training for all employees.
  • Consider using the Two-Factor Authentication (TFA) module.
  • Change the admin username to something other than admin.
  • Enable the Login Security module, which detects password-guessing and brute-force attacks performed against the Drupal login form.
  • Enable the Security Review module, run the Security Review checklist, and make sure all tests pass. Disable this module before site launch, but use it to perform security reviews each time core or contributed modules are updated.
  • Carefully review Drupal user permissions and lock down restricted areas of the site.

Development: Ongoing

Matthew Tift

Thumbnail

Senior developer, musicologist, and free software activist

Feb 08 2019
Feb 08

Welcome to the latest version of Lullabot.com! Over the years (since 2006!), the site has gone through at least seven iterations, with the most recent launching last week at the 2019 Lullabot team retreat in Palm Springs, California.

Back to a more traditional Drupal architecture

Our previous version of the site was one of the first (and probably the first) decoupled Drupal ReactJS websites. It launched in 2015.

Decoupling the front end of Drupal gives many benefits including easier multi-channel publishing, independent upgrades, and less reliance on Drupal specialists. However, in our case, we don’t need multi-channel publishing, and we don’t lack Drupal expertise.

One of the downsides of a decoupled architecture is increased complexity. Building blocks of our decoupled architecture included a Drupal 7 back end, a CouchDB middle layer, a ReactJS front end, and a Node.js server-side application. Contrast this with a standard Drupal architecture where we only need to support a single Drupal 8 site.

The complexity engendered by decoupling a Drupal site means developers take longer to contribute certain types of features and fixes to the site. In the end, that was the catalyst for the re-platforming. Our developers only work on the site between client projects so they need to be able to easily understand the overall architecture and quickly spin up copies of the site.

Highlights of the new site

In addition to easily swapping developers in and out, the primary goals of the website were ease of use for our non-technical marketing team (hi Ellie!), a slight redesign, and maintaining or improving overall site speed.

Quickly rolling developers on and off

To aid developers quickly rolling on and off the project, we chose a traditional Drupal architecture and utilized as little custom back-end code as possible. When we found holes in functionality, we wrote modules and contributed them back to the Drupal ecosystem. 

We also standardized to Lando and created in-depth documentation on how to create a local environment. 

Ease of use

To enable our marketing team to easily build landing pages, we implemented Drupal’s new experimental Layout Builder module. This enables a slick drag-and-drop interface to quickly compose and control layouts and content.

We also simplified Drupal’s content-entry forms by removing and reorganizing fields (making heavy use of the Field Group module), providing useful descriptions for fields and content types, and sub-theming the Seven theme to make minor styling adjustments where necessary.

Making the front end lean and fast 

Normally, 80% of the delay between navigating to a webpage and being able to use the webpage is attributed to the front end. Browsers are optimized to quickly identify and pull in critical resources to render the page as soon as possible, but there are many enhancements that can be made to help it do so. To that end, we made a significant number of front-end performance optimizations to enable the rendering of the page in a half-second or less.

  • Using vanilla JavaScript instead of a framework such as jQuery enables the JS bundle size to be less than 27kb uncompressed (to compare, the previous version’s bundle size was over 1MB). Byte for byte, JavaScript impacts the performance of a webpage more than any other type of asset. 
  • We heavily componentize our stylesheets and load them only when necessary. Combined with the use of lean, semantic HTML, the browser can quickly generate the render-tree—a critical precursor to laying out the content.
  • We use HTTP2 to enable multiplexed downloads of assets while still keeping the number of HTTP requests low. Used with a CDN, this dramatically lowers the time-to-first-byte metric and time to download additional page assets.
  • We heavily utilize resource-hints to tell the browser to download render-blocking resources first, as well as instructing the browser to connect third-party services immediately.
  • We use the Quicklink module to pre-fetch linked pages when the browser is idle. This makes subsequent page loads nearly instantaneous.

There are still some performance @todos for us, including integrating WEBP images (now supported by Chrome and Firefox), and lazy-loading images. 

Contributing modules back to the Drupal ecosystem

During the development, we aimed to make use of contributed modules whenever it made sense. This allowed us to implement almost all of the features we needed. Only a tiny fraction of our needs was not covered by existing modules. One of Lullabot’s core values is to Collaborate Openly which is why we decided to spend a bit more time on our solutions so we could share them with the rest of the community as contributed modules.

Using Layouts with Views

Layout Builder builds upon the concept of layout regions. These layout regions are defined in custom modules and enable editors to use layout builder to insert these regions, and then insert content into them.

Early on, we realized that the Views module lacked the ability to output content into these layouts. Lullabot’s Director of Technology, Karen Stevenson, created the Views Layout module to solve this issue. This module creates a new Views row plugin that enables the Drupal site builder to easily select the layout they want to use, and select which regions to populate within that layout.

Generating podcast feeds with Drupal 8

Drupal can generate RSS feeds out of the box, but podcast feeds are not supported. To get around this limitation, Senior Developer Mateu Aguiló Bosch created the Podcast module, which complies with podcast standards and iTunes requirements.

This module utilizes the Views interface to map your site’s custom Drupal fields to the necessary podcast and iTunes fields. For more information on this, check out Mateu’s tutorial video here.

Speeding up Layout Builder’s user interface

As stated earlier, Layout Builder still has “experimental” status. One of the issues that we identified is that the settings tray can take a long time to appear when adding a block into layout builder.

Lullabot Hawkeye Tenderwolf identified the bottleneck as the time it takes Drupal to iterate through the complete list of blocks in the system. To work around this, Karen Stevenson created the Block Blacklist module, in which you can specify which blocks to remove from loading. The result is a dramatically improved load time for the list of blocks.

Making subsequent page loads instantaneous 

A newer pattern on the web (called the PRPL pattern) includes pre-fetching linked pages and storing them in a browser cache. As a result, subsequent page requests return almost instantly, making for an amazing user experience. 

Bringing this pattern into Drupal, Senior Front-end Dev Mike Herchel created the Quicklink module using Google’s Quicklink JavaScript library. You can view the result of this by viewing this site’s network requests in your developer tool of choice. 

Keeping users in sync using the Simple LDAP module

Lullabot stores employee credentials in an internal LDAP server. We want all the new hires to gain immediate access to as many services as possible, including Lullabot.com. To facilitate this, we use the Simple LDAP module (which several bots maintain) to keep our website in sync with our LDAP directory.

This iteration of Lullabot.com required the development of some new features and some performance improvements for the D8 version of the module.

Want to learn more?

Built by Bots

While the site was definitely a team effort, special thanks go to Karen Stevenson, Mike Herchel, Wes Ruvalcaba, Putra Bonaccorsi, David Burns, Mateu Aguiló Bosch, James Sansbury, and, last but not least, Jared Ponchot for the beautiful designs.

Feb 08 2019
Feb 08

This Episode's Guests

Karen Stevenson

Thumbnail

Karen is one of Drupal's great pioneers, co-creating the Content Construction Kit (CCK) which has become Field UI, part of Drupal core.

Mateu Aguiló Bosch

Thumbnail

Loves to be under the sun. Very passionate about programming.

Wes Ruvalcaba

Thumbnail

Wes is a designer turned front-end dev with a strong eye for UX.

Putra Bonaccorsi

Thumbnail

An expert in content management systems like Drupal, Putra Bonaccorsi creates dynamic, user friendly sites and applications.

Feb 06 2019
Feb 06

If you are a programmer looking to improve your professional craft, there are many resources toward which you will be tempted to turn. Books and classes on programming languages, design patterns, performance, testing, and algorithms are some obvious places to look. Many are worth your time and investment.

Despite the job of a programmer often being couched in technical terms, you will certainly be working for and with other people, so you might also seek to improve in other ways. Communication skills, both spoken and written, are obvious candidates for improvement. Creative thinking and learning how to ask proper questions are critical when honing requirements and rooting out bugs, and time can always be managed better. These are not easily placed in the silo of “software engineering,” but are inevitable requirements of the job. For these less-technical skills, you will also find a plethora of resources claiming to help you in your quest for improvement. And again, many are worthwhile.

For all of your attempts at improvement, however, you will be tempted to remain in the non-fiction section of your favorite bookstore. This would be a mistake. You should be spending some of your time immersed in good fiction. Why fiction? Why made-up stories about imaginary characters? How will that help you be better at your job? There are at least four ways.

Exercise your imagination

Programming is as much a creative endeavor as it is an exercise in technical mastery, and creativity requires a functioning imagination. To quote Einstein:

Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.

You can own a hammer, be really proficient with it, and even have years of experience using it, but it takes imagination to design a house and know when to use that hammer for that house. It takes imagination to get beyond your own limited viewpoint, making it easier to draw connections and analogies between things that might not have seemed related, which is a compelling definition of creativity itself.

Your imagination works like any muscle. Use it or lose it. And just like any other kind of training, it helps to have an experienced guide. Good authors of fiction are ready to be your personal trainers.

Understanding and empathy

The best writers can craft characters so real that they feel like flesh and blood, and many of those characters will remind you of people you actually know. Great writers are, first and foremost, astute observers of life, and their insight into minds and motivations can become your insight. Good fiction can help you navigate real life.

One meta-study suggests that reading fiction, even just a single story, can help improve someone’s social awareness and reactions to other people. For any difficult situation or person you come across in your profession, there has probably been a writer who has explored that exact dynamic. The external trappings and particulars will certainly be different, but the motivations and foibles will ring true.

In one example from Jane Austen’s Mansfield Park, Sir Thomas is a father who puts too much faith in proper appearances, and after sternly talking to his son about an ill-advised scheme, the narrator of the book says, “He did not enter into any remonstrance with his other children: he was more willing to believe they felt their error than to run the risk of investigation.”

You have probably met a person like this. You might have dealt with a project manager like this, one who will limit communication rather than “run the risk of investigation.” No news is good news. Austen has a lot to teach you about how one might maneuver around this type of personality. Or, you might be better equipped to recognize such tendencies in yourself and snuff them out before they cause trouble for yourself and others.

Remember, all software problems are really people problems at their core. Software is written by people, and the requirements are determined by other people. None of the people involved in this process are automatons. Sometimes, how one system interfaces with another has more to do with the relationship between two managers than any technical considerations.

Navigating people is just as much a part of a programmer’s job as navigating an IDE. Good fiction provides good landmarks.

Truth and relevance

This is related to the previous point but deserves its own section. Good fiction can tell the truth with imaginary facts. This is opposed to much of the news today, which can lie with the right facts, either by omitting some or through misinterpretation.

Plato, in his ideal republic, wanted to kick out all of the poets because, in his mind, they did nothing but tell lies. On the other hand, Philip Sidney, in his Defence of Poesy, said that poets lie the least. The latter is closer to the truth, even though it might betray a pessimistic view of humanity.

Jane Austen’s novels are some of the most insightful reflections on human nature. Shakespeare’s plays continue to last because they tap into something higher than “facts”. N.N. Taleb writes in his conversation on literature:

...Fiction is a certain packaging of the truth, or higher truths. Indeed I find that there is more truth in Proust, albeit it is officially fictional, than in the babbling analyses of the New York Times that give us the illusions of understanding what’s going on.

Homer, in The Iliad, gives us a powerful portrait of the pride of men reflected in the capriciousness of his gods. And, look at how he describes anger (from the Robert Fagles translation):

...bitter gall, sweeter than dripping streams of honey, that swarms in people's chests and blinds like smoke.

That is a description of anger that rings true and sticks. And maybe, just maybe, after you have witnessed example after vivid example of the phenomenon in The Iliad, you will be better equipped to stop your own anger from blinding you like smoke.

How many times will modern pundits get things wrong, or focus on things that won’t matter in another month? How many technical books will be outdated after two years? Homer will always be right and relevant.

You also get the benefit of aspirational truths. Who doesn’t want to be a faithful friend, like Samwise Gamgee, to help shoulder the heaviest burdens of those you love? Sam is a made-up character. He literally does not exist in this mortal realm. Yet he is real. He is true.

Your acts of friendship might not save the world from unspeakable evil, but each one reaches for those lofty heights. Your acts of friendship are made a little bit nobler because you know that they do, in some way, push back the darkness.

Fictional truths give the world new depth for the reader. C.S. Lewis, in defending the idea of fairy tales, wrote:

He does not despise real woods because he has read of enchanted woods: the reading makes all real woods a little enchanted.

Likewise, to paraphrase G.K. Chesterton, fairy tales are more than true — not because they tell us dragons exist, but because they tell us dragons can be beaten.

The right words

One of the hardest problems in programming is naming things. For variables, functions, and classes, the right name can bring clarity to code like a brisk summer breeze, while the wrong name brings pain accompanied by the wailing and gnashing of teeth.

Sometimes, the difference between the right name and the wrong name is thin and small, but represents a vast distance, like the difference between “lightning” and “lightning bug,” or the difference between “right” and “write”.  

Do you know who else struggles with finding the right words? Great authors. And particularly, great poets. Samuel Taylor Coleridge once said:

Prose = words in their best order; — poetry = the best words in the best order. 

"The best words in the best order" could also be a definition of good, clean code. If you are a programmer, you are a freaking poet.

Well, maybe not, but this does mean that a subset of the fiction you read should be poetry, though any good fiction will help you increase your vocabulary. Poetry will just intensify the phenomenon. And when you increase your vocabulary, you increase your ability to think clearly and precisely.

While this still won’t necessarily make it easy to name things properly - even the best poets struggle and bleed over the page before they find what they are looking for - it might make it easier.

What to read

Notice the qualifier “good”. That’s important. There were over 200,000 new works of fiction published in 2015 alone. Life is too short to spend time reading bad books, especially when there are too many good ones to read for a single lifetime. I don’t mean to be a snob, just realistic.

Bad fiction will, at best, be a waste of your time. At worst, it can lie to you in ways that twist your expectations about reality by twisting what is good and beautiful. It can warp the lens through which you view life. The stories we tell ourselves and repeat about ourselves shape our consciousness, and so we want to tell ourselves good ones.

So how do you find good fiction? One heuristic is to let time be your filter. Read older stuff. Most of the stuff published today will not last and will not be the least bit relevant twenty years from now. But some of it will. Some will rise to the top and become part of the lasting legacy of our culture, shining brighter and brighter as the years pass by and scrub away the dross. But it's hard to know the jewels in advance, so let time do the work for you.

The other way is to listen to people you trust and get recommendations. In that spirit, here are some recommendations from myself and fellow Lullabots:

Jan 21 2019
Jan 21

Tom Sliker started Broadstreet Consulting more than a decade ago, and has made Drupal a family affair. We dragged Tom out of the South Carolina swamps and into DrupalCamp Atlanta to get the scoop.  How does Tom service more than 30 clients on a monthly basis with just a staff of five people?  His turn-key Aegir platform, that's how!

Jan 07 2019
Jan 07

Note: This article is a re-post from Mateu's personal blog.

I have been very vocal about the JSON:API module. I wrote articles, recorded videos, spoke at conferences, wrote software that extends it, and at some point, I proposed adding JSON:API to Drupal core. Then Wim and Gabe joined the JSON:API team as part of their daily job. That meant that while they took care of most of the issues in the JSON:API queue, I could attend to the other API-First projects more successfully. I have not left the JSON:API project by any means; on the contrary, I'm more involved than before. However, I have transitioned my involvement to feature design and feature sign-off, sprinkled with the occasional development. Wim and Gabe have not only been very empathetic and supportive of my situation, but they have also taken a lot of ownership of the project. JSON:API is not my baby anymore; instead, we now have joint custody of our JSON:API baby.

As a result of this collaboration, Gabe, Wim, and I have tagged a stable release of the second version of the JSON:API module. This took a humongous amount of work, but we are very pleased with the result. This has been a long journey, and we are finally there. The JSON:API maintainers are very excited about it.

I know that switching to a new major version is always a little bit scary. You update the module and hope for the best. With major version upgrades, there is no guarantee that your use of the module is still going to work. This is unfortunate for site owners, but including breaking changes is often the best solution for the module's maintenance and for adding new features. The JSON:API maintainers are aware of this. I have gone through the process myself and have been frustrated by it. This is why we have tried to make the upgrade process as smooth as possible.

What Changed?

If you are a long-time Drupal developer, you have probably wondered, "How do I do this D7 thing in D8?" When that happens, the best solution is to search the Drupal core change records to see if it changed since Drupal 7. Change records are a fantastic tool for tracking what changed in each release. They allow you to consider only the issues that have user-facing changes, avoiding the noise of internal changes and bug fixes. In summary, they let users understand how to migrate from one version to another.

Very few contributed modules use change records. This may be because module maintainers are unaware that the feature is available for contrib. It could also be because maintaining a module is a big burden, and manually writing change records is yet another time-consuming task. The JSON:API module has comprehensive change records on all the things you need to pay attention to when upgrading to JSON:API 2.0.

Change Records

As I mentioned above, if you want to understand what has changed since JSON:API 8.x-1.24 you only need to visit the change records page for JSON:API. However, I want to highlight some important changes.

Config Entity Mutation is now in JSON:API Extras

Mutating configuration entities is no longer possible using JSON:API alone. This feature was removed because the Entity API does a great job of ensuring that access rules are respected, but the Configuration Entity API does not yet support validation of configuration entities. That means the responsibility for validation falls on the client, which has security and data integrity implications. We felt we ought to move this feature to JSON:API Extras, given that JSON:API 2.x will be added to Drupal core.

No More Custom Field Type Normalizers

This is by far the most controversial change. Even though custom normalizers for JSON:API have been strongly discouraged for a while, JSON:API 2.x now enforces that. Sites that have been in violation of the recommendation will need to refactor to supported patterns. This change was driven by the limitations of the serialization component in Symfony. In particular, we aim to make it possible to derive a consistent schema per resource type. I explained why this is important in this article.

Supported patterns are:

  • Create a computed field. Note that a true computed field will be calculated on every entity load, which may be a good or a bad thing depending on the use case. You can also create stored fields that are calculated on entity presave. The linked documentation has examples for both methods.
  • Write a normalizer at the Data Type level, instead of the field or entity level. As a benefit, this normalizer will also work in core REST! (A rough sketch follows this list.)
  • Create a Field Enhancer plugin like these, using JSON:API Extras. This is the pattern closest to the old normalizers, and it requires you to define the schema of the enhancer.
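
To illustrate the second option, here is a minimal sketch of a data-type-level normalizer, assuming a hypothetical module named mymodule that wants every timestamp rendered as an RFC 3339 date string. This is an illustration of the pattern, not code from the JSON:API project itself.

<?php

namespace Drupal\mymodule\Normalizer;

use Drupal\Core\TypedData\Plugin\DataType\Timestamp;
use Drupal\serialization\Normalizer\NormalizerBase;

/**
 * Normalizes timestamp data types as RFC 3339 date strings.
 *
 * Because it targets the data type rather than a field or entity, it
 * applies consistently in JSON:API and in core REST.
 */
class TimestampNormalizer extends NormalizerBase {

  /**
   * {@inheritdoc}
   */
  protected $supportedInterfaceOrClass = Timestamp::class;

  /**
   * {@inheritdoc}
   */
  public function normalize($object, $format = NULL, array $context = []) {
    return \Drupal::service('date.formatter')
      ->format($object->getValue(), 'custom', \DateTime::RFC3339);
  }

}

The class also needs to be registered in mymodule.services.yml as a service tagged with serializer.normalizer so the serializer picks it up.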

File URLs

JSON:API pioneered the idea of having a computed url field on file entities that an external application can use without modification. This feature has since made it into core, with some minor modifications. Now the url is no longer a computed field, but a computed property on the uri field.
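
On the PHP side, the same computed value can be read straight off the field item. A rough sketch, assuming a Drupal version where the computed property has landed in core and a file entity with ID 1 exists (both assumptions for illustration):

<?php

use Drupal\file\Entity\File;

// Load a file entity and read both the stored URI and the computed URL
// property that JSON:API exposes on the uri field.
$file = File::load(1);
$stored_uri = $file->getFileUri();  // e.g. "public://logo.png"
$file_url = $file->uri->url;        // computed property on the uri field item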

Special Properties

The official JSON:API specification reserves the type and id keys. These keys cannot exist inside the attributes or relationships sections of a resource object. That's why we now prepend {entity_type}_ to the key name when those keys are found. In addition, internal fields like the entity ID (nid, tid, etc.) will have drupal_internal__ prepended to them. Finally, we have decided to omit the uuid field, given that it already is the resource ID.
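
Concretely, the attributes section of a node resource object ends up looking roughly like the array below. This is an illustration of the renaming rules, not a real response; the field names and values are made up.

<?php

// Illustrative attribute keys for a node under the JSON:API 2.x rules.
$attributes = [
  // The internal entity ID gets the drupal_internal__ prefix.
  'drupal_internal__nid' => 17,
  'title' => 'Hello world',
  // A field literally named "id" or "type" would be exposed as
  // "node_id" or "node_type", per the {entity_type}_ rule above.
  // The "uuid" field is omitted entirely; it already serves as the
  // resource object's ID.
];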

Final Goodbye to _format

JSON:API 1.x dropped the need for the unpopular _format query parameter in the URL. Instead, it allowed the more standard Accept: application/vnd.api+json header to be used for format negotiation. JSON:API 2.x continues this pattern. The header is now required, which makes cacheable 4XX error responses possible, an important performance improvement.
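
For example, a consumer built with Drupal's own HTTP client (Guzzle) only has to send that header. A minimal sketch, with a placeholder URL:

<?php

// Request a collection from a JSON:API 2.x endpoint. The Accept header
// replaces the old ?_format query parameter. The URL is illustrative.
$response = \Drupal::httpClient()->get('https://example.com/jsonapi/node/article', [
  'headers' => [
    'Accept' => 'application/vnd.api+json',
  ],
]);
$document = json_decode((string) $response->getBody(), TRUE);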

Benefits of Upgrading

You have seen that these changes are not very disruptive, and even when they are, it is simple to adopt the new patterns. This should allow you to upgrade to the new version with relative ease. Once you've done that, you will notice some immediate benefits:

  • Performance improvements. Performance improved overall, but especially when using filtering, includes, and sparse fieldsets. Some of those improvements came with the help of early adopters during the RC period!
  • Better compatibility with JSON:API clients. That's because JSON:API 2.x also fixes several spec compliance edge case issues.
  • We pledge that you'll be able to transition cleanly to JSON:API in core. This is especially important for future-proofing your sites today.

Benefits of Starting a New Project with the Old JSON:API 1.x

There are truly none. Version 2.x builds on top of 1.x, so it carries all the goodness of 1.x plus all the improvements.

If you are starting a new project, you should use JSON:API 2.x.

JSON:API 2.x is what new installs of Contenta CMS will get, and remember that Contenta CMS ships with the most up-to-date recommendations in decoupled Drupal. Star the project on GitHub and keep an eye on it here, if you want.

What Comes Next?

Our highest priority at this point is the inclusion of JSON:API in Drupal core. That means most of our efforts will be focused on responding to feedback on the core patch and making sure that it does not get stalled.

In addition to that, we will likely tag JSON:API 2.1 very shortly after JSON:API 2.0. That will include:

  1. Binary file uploads using JSON:API.
  2. Support for version negotiation. This allows the latest or default revision to be retrieved and supports the Content Moderation module in core. It will be instrumental in decoupled preview systems.

Our roadmap includes:

  1. Full support for revisions, including accessing a history of revisions. Mutating revisions is blocked on Drupal core providing a revision access API.
  2. Full support for translations. That means that you will be able to create and update translations using JSON:API. That adds on top of the current ability to GET translated entities.
  3. Improvements in hypermedia support. In particular, we aim to include extension points so Drupal sites can include useful related links like add-to-cart, view-on-web, track-purchase, etc.
  4. Self-sufficient schema generation. Right now we rely on the Schemata module in order to generate schemas for the JSON:API resources. That schema is used by OpenAPI to generate documentation and the Admin UI initiative to auto-generate forms. We aim to have more reliable schemas without external dependencies.
  5. More performance improvements. Because JSON:API only provides an HTTP API, implementation details are free to change. This already enabled major performance improvements, but we believe it can still be significantly improved. An example is caching partial serializations.

How Can You Help?

The JSON:API project page has a list of ways you can help, but here are several specific things you can do if you would like to contribute right away:

  1. Write an experience report. This is a Drupal.org issue in the JSON:API queue that summarizes the things that you've done with JSON:API, what you liked, and what we can improve. You can see examples of those here. We have improved the module greatly thanks to these in the past. Help us help you!
  2. Help us spread the word. Tweet about this article, blog about the module, promote the JSON:API tooling in JavaScript, etc.
  3. Review the core patch.
  4. Jump into the issue queue to write documentation, propose features, author patches, review code, etc.

Photo by Sagar Patil on Unsplash.
