Apr 22 2019
Seattle skyline

Photo by MILKOVÍ on Unsplash

Another DrupalCon is in the books and our team had a fantastic time gathering with so many members of the worldwide Drupal community in Seattle. Getting together in person with a large portion of our team is always a treat, but it makes it all the sweeter when our team has the chance to share their expertise by presenting at DrupalCon. I was fortunate to be able to share my thoughts immediately following the event on an episode of the Lullabot Podcast and I have included videos and links below to all of Chromatic’s sessions. We are already counting down to next year.

If you are interested in attending DrupalCon with us in the future, check out the open positions on our Careers page!

Chromatic Sessions

Preprocessing Paragraphs: A Beginner's Guide

Larry Walangitan presented Preprocessing Paragraphs: A Beginner's Guide

Introduction to Drupal 8 Migrations

Clare Ming shared an Introduction to Drupal 8 Migrations with an exceptionally packed (and large) room.

Configuration Management: A True Life Story

Nate Dentzau teamed up with a former colleague to present Configuration Management: A True Life Story.

Saving The World From Bad Websites

Last but not least, Dave Look presented Saving The World From Bad Websites. Sadly, no recording is available so you’ll just have to keep an eye out for the next event Dave presents at.

Mar 15 2019

Migrations are fraught with unexpected discoveries and issues. Fighting memory problems during particularly long or processing-heavy migrations should not be yet another obstacle to overcome; unfortunately, it often is.

For instance, I recently ran into the following errors, no matter how high I raised the PHP memory_limit or how low I set the --limit flag value.

$ drush migrate:import example_migration --limit=1000

Fatal error: Allowed memory size of 4244635648 bytes exhausted (tried to allocate 45056 bytes) in web/core/lib/Drupal/Core/Database/Statement.php on line 59

Fatal error: Allowed memory size of 4244635648 bytes exhausted (tried to allocate 65536 bytes) in vendor/symfony/http-kernel/Controller/ArgumentResolver/DefaultValueResolver.php on line 23

It should be noted that the --limit flag, while extremely useful, does not reduce the number of rows loaded into memory. It simply limits the number of destination records created. The source data query has no LIMIT statement, and the processRow(Row $row) method in the source plugin class is still called for every row.

Batch Size

This is where query batch size functionality comes in. This functionality is located in \Drupal\migrate\Plugin\migrate\source\SqlBase and allows for the source data query to be performed in batches, effectively using SQL LIMIT statements.

This can be controlled in the source plugin class via the batchSize property.

/**
 * {@inheritdoc}
 */
protected $batchSize = 1000;

Alternatively, it can be set in the migration yml file with the batch_size property under the source definition.

source:
  plugin: example_migration_source
  key: example_source_db
  batch_size: 10

There are very few references to this setting in existing online documentation; I eventually discovered it via a passing reference in a Drupal.org issue queue discussion.

Once I knew what I was looking for, I went searching for how this worked and discovered several other valuable options in the migration SqlBase class.

/**
 * Sources whose data may be fetched via a database connection.
 *
 * Available configuration keys:
 * - database_state_key: (optional) Name of the state key which contains an
 *   array with database connection information.
 * - key: (optional) The database key name. Defaults to 'migrate'.
 * - target: (optional) The database target name. Defaults to 'default'.
 * - batch_size: (optional) Number of records to fetch from the database during
 *   each batch. If omitted, all records are fetched in a single query.
 * - ignore_map: (optional) Source data is joined to the map table by default to
 *   improve migration performance. If set to TRUE, the map table will not be
 *   joined. Using expressions in the query may result in column aliases in the
 *   JOIN clause which would be invalid SQL. If you run into this, set
 *   ignore_map to TRUE.
 * For other optional configuration keys inherited from the parent class, refer
 * to \Drupal\migrate\Plugin\migrate\source\SourcePluginBase.
 * …
 */
abstract class SqlBase extends SourcePluginBase implements ContainerFactoryPluginInterface, RequirementsInterface {

Migration Limit

Despite the “flaws” of the --limit flag, it still offers us a valuable tool in our effort to mitigate migration memory issues and increase migration speed. My anecdotal evidence from timing responses from the --feedback flag shows a much higher migration throughput for the initial items, with speed gradually tapering as a migration progresses.

I also encountered an issue where the migration memory reclamation process eventually failed and the migration ground to a halt. I was not alone: MediaCurrent found and documented this issue in their post Memory Management with Migrations in Drupal 8.

Memory usage is 2.57 GB (85% of limit 3.02 GB), reclaiming memory. [warning]
Memory usage is now 2.57 GB (85% of limit 3.02 GB), not enough reclaimed, starting new batch [warning]
Processed 1007 items (1007 created, 0 updated, 0 failed, 0 ignored) - done with 'nodes_articles'

The migration would then cease importing items as if it had finished, while several hundred thousand nodes were still left to import. Running the import again would produce the same result.

I adapted the approach MediaCurrent showed in their post to work with Drush 9. It solved the memory issue, improved migration throughput, and provided a standardized way to trigger migrations upon deployment or during testing.

The crux of the solution is to repeatedly call drush migrate:import in a loop with a low --limit value to keep the average item processing time lower.
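As a minimal sketch of that loop (the function name, migration ID, and counts here are placeholders, and the real script handles status checks and more):

```shell
# Hypothetical helper: re-run the import in small batches so each run
# starts as a fresh Drush/PHP process with fresh memory.
run_batched_import() {
  local migration="$1" batch="$2" runs="$3"
  for _ in $(seq "$runs"); do
    # DRUSH may be overridden, e.g. to point at a project-local drush.
    "${DRUSH:-drush}" migrate:import "$migration" --limit="$batch"
  done
}
```

Invoking run_batched_import example_migration 1000 50 would run fifty imports of up to 1,000 items each; because the migrate map tracks imported items, each run picks up where the previous one left off.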

Our updated version of the script is available in a gist.

So the next time you are tasked with an overwhelming migration challenge, you no longer need to worry about memory issues. Now you can stick to focusing on tracking down the source data, processing and mapping it, and all of the other challenges migrations tend to surface.

Mar 14 2019

When loading or interacting with entities in Drupal 8, we often use EntityTypeManagerInterface, the brains behind the entity_type.manager service provided in many of the Drupal core base classes.

This often appears in one of the following ways:


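As a sketch (the 'node' entity type ID is just an illustration), the two usual patterns are an injected entity_type.manager service or the static \Drupal wrapper:

```php
// Via an injected entity_type.manager service:
$storage = $this->entityTypeManager->getStorage('node');

// Or via the static wrapper:
$storage = \Drupal::entityTypeManager()->getStorage('node');
```
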

Either approach returns an instance of EntityStorageInterface. Each entity type can define a class that extends EntityStorageBase and adds additional custom methods that are applicable to a given entity type.

The node entity type uses this pattern in \Drupal\node\NodeStorage to provide many of its commonly used methods such as revisionIds() and userRevisionIds().

The benefits of adding custom storage methods become more apparent when you begin to work with custom entities. For example, if you have a recipe entity type, you could have a loadAllChocolateRecipes() method that abstracts the query and conditions needed to load a subset of Recipe entities.

The resulting call would look like this:

/* @var $recipes \Drupal\recipe_module\Entity\Recipe[] */
$recipes = $this->entityTypeManager
  ->getStorage('recipe')
  ->loadAllChocolateRecipes();

A custom storage handler class is integrated with an entity via the annotated comments in the entity class.

/**
 * Define the Recipe entity.
 *
 * @ContentEntityType(
 *   id = "recipe",
 *   label = @Translation("Recipe"),
 *   handlers = {
 *     "storage" = "Drupal\recipe_module\RecipeStorage",
 *     …
 *   },
 * )
 */
Then in the storage handler class, custom methods can be added and existing methods can be overridden as needed.

/**
 * Defines the storage handler class for Recipe entities.
 */
class RecipeStorage extends SqlContentEntityStorage {

  /**
   * Load all recipes that include chocolate.
   *
   * @return \Drupal\example\Entity\Recipe[]
   *   An array of recipe entities.
   */
  public function loadAllChocolateRecipes() {
    return $this->loadByProperties([
      'field_main_ingredient' => 'chocolate',
    ]);
  }

}
Manual SQL queries can also be performed using the already provided database connection in $this->database. Explore the Drupal\Core\Entity\Sql\SqlContentEntityStorage class to see the many properties and methods that you can override or leverage in your own methods.

Again, the NodeStorage and TermStorage offer many great examples and will demystify how many of the “magic” methods on these entities work behind the scenes.

For example, if you ever wondered how the Term::nodeCount() method works, this is where the magic happens.

/**
 * {@inheritdoc}
 */
public function nodeCount($vid) {
  $query = $this->database->select('taxonomy_index', 'ti');
  $query->addExpression('COUNT(DISTINCT ti.nid)');
  $query->leftJoin($this->getBaseTable(), 'td', 'ti.tid = td.tid');
  $query->condition('td.vid', $vid);
  return $query->execute()->fetchField();
}
The next time you need to write a method that returns data specific to an entity type, explore the use of a storage handler. It beats stuffing query logic into a custom Symfony service where you are likely violating single responsibility principles with an overly broad class.

This potentially removes your dependency on a custom service, eliminating extra dependency injection and the risk of circular service dependencies. It also adheres to a Drupal core design pattern, so it is a win, win, win, or something like that.

Mar 11 2019

DrupalCon Seattle is about a month away, and we're putting the finishing touches on this year's plans. Drupal's biggest annual conference affords us the opportunity to support the project, share our expertise, and connect with our colleagues from far and wide. We love DrupalCon. Here's what we've got in store this year.

Our Booth & Swag

Come by the Chromatic booth, #516 in the exhibit hall, to pick up some free Chromatic swag. No kidding, our t-shirts are the softest/coolest and we've got an awesome new vintage design this year:

And we made some kick-ass stickers for this year's conference too:

Our Sessions

Introduction to Drupal 8 Migrations - Clare Ming

Migrations can be intimidating but with Migrate modules now in core, it’s easier than ever to upgrade or migrate legacy applications to Drupal 8. Let's demystify the process by taking a closer look at how to get started from the ground level.

In this session we’ll cover:

  • A brief overview of the Migrate APIs for importing content to Drupal 8, including Migrate Drupal's capabilities to move content from a Drupal source.
  • Understanding your migration pathway.
  • Getting your site ready for migrated content.
  • Sample migration scripts and configuration files for migrating nodes and field mapping.
  • Consideration of media entities, file attachments, and other dependencies.
  • Using Migrate Drupal UI and Migrate Tools for managing migrations.

For those new to Drupal, this session will introduce the basic concepts and framework for setting up your Drupal 8 migration path successfully.

Time: 04/10/2019 / 11:45 - 12:15
Room: 609 | Level 6

Configuration Management: A True Life Story - Nathan Dentzau

Long gone are the days of copying databases, creating a custom module, or creating features to push new functionality to your Drupal website. Those days and arcane methods are a thing of the past with Drupal 8. Why or how, you ask? Read on my friend, read on!

Managing configuration in Drupal 8 has become much easier with the introduction of configuration management. In this talk, we will review “good practices” Oomph and Chromatic have established for configuration management, what tools we use on all of our Drupal 8 projects and how to use them. We will also discuss how configuration management ties into a Continuous Integration pipeline using Github and Travis-CI.

We will discuss the following list of modules:

  • Configuration Manager
  • Configuration Split
  • Configuration Read-only mode
  • Configuration Installer

What you can expect to learn from this talk:

  • Automating configuration management in a Continuous Integration pipeline
  • How to export and import configuration in Drupal 8
  • How to manage configuration in version control
  • How to manage configuration for multiple environments
  • How to install a new instance of Drupal with a set of existing configuration

This talk is for all skill levels and aims to make everyone’s life easier through the magic of Drupal Configuration Management. We look forward to sharing our experience with you and answering any questions you may have.

Time: 04/10/2019 / 12:30 - 13:00
Room: 612 | Level 6

Saving the world from bad websites. - Dave Look

"Saving the world from bad websites." This is our compelling saga at Chromatic. We've spent years growing and evolving as a team and it has taken time for us to land on this phrase as our compelling saga. In this talk, we'll explore the idea of a compelling saga for an agency, what it is, and why it's important. This concept comes from a book titled "High Altitude Leadership" where the author explores leadership principles learned from mountaineering expeditions. We'll discover how these same leadership principles can be applied to our industry.

We will cover:

  • What a compelling saga is and why you should have one for your Agency.
  • How these change and evolve as your agency grows.
  • The cultural impact of all team members being able to articulate the compelling saga.
  • How buy-in, or a lack thereof, can mean life or death for your agency.

Time: 04/10/2019 / 13:00 - 13:30
Room: 6C | Level 6

Preprocessing Paragraphs: A Beginner's Guide - Larry Walangitan

Paragraphs is a powerful and popular contributed module for creating dynamic pages in Drupal 8. Preprocessing allows us to easily create and alter render arrays to generate tailored markup for any paragraph type.

In this session you will learn about:

  • Getting started with preprocessing paragraphs and structuring your preprocessing methods.
  • Creating custom render arrays and overriding twig templates.
  • Referencing nested entities and pulling data into paragraph twig templates.
  • How to debug your preprocessing and twig files using contributed modules or composer packages without running out of memory.

You'll leave this session ready to preprocess paragraphs and have a plan of action to reference when debugging any issues. This session is perfect for site-builders or back/front-end devs that are new to preprocessing in Drupal 8 and Twig.

Time: 04/10/2019 / 16:00 - 16:30
Room: 608 | Level 6

If you're coming to the conference, we'd love to meet you. Come say "hello", grab some swag, and tell us how you use Drupal....oh, and if you're looking for a job, our booth is a great place to make an impression on us!

Jan 31 2019

It is easy to see Twig code samples, accept that label is now some magic property of an entity, and use it in your template.

<h3>{{ node.label }}</h3>

Then you come across an example that calls referencedEntities, and it quickly becomes apparent some magic is going on.

{% for entity in node.example_reference_field.referencedEntities %}
  <li>{{ entity.label }}</li>
{% endfor %}

The secret is that in Drupal 8, methods are now exposed in Twig templates.

However, not all methods are exposed (including referencedEntities in the example above). The TwigSandboxPolicy class defines the whitelisted method names and prefixes that are allowed. It also allows for these settings to be overridden via settings.php if needed.

$settings['twig_sandbox_whitelisted_methods'] = [
  // The defaults from TwigSandboxPolicy must be repeated here.
  'id',
  'label',
  'bundle',
  'get',
  '__toString',
  'toString',
  // Custom additions.
  'loadExampleData',
];
$settings['twig_sandbox_whitelisted_prefixes'] = [
  // The defaults from TwigSandboxPolicy.
  'get',
  'has',
  'is',
];

Note that additional items cannot be appended to the array due to the logic in TwigSandboxPolicy. The existing defaults must be included in the custom override.

See Discovering and Inspecting Variables in Twig Templates for additional information and background.

Example Use Case

Let’s use an example to further explore how this could be used. Suppose we have a user entity available in our template and want to retrieve some additional data.

The first step is overriding the core user class so we can add custom methods.

/**
 * Implements hook_entity_type_build().
 */
function example_entity_type_build(array &$entity_types) {
  // Override the core user entity class.
  if (isset($entity_types['user'])) {
    $entity_types['user']->setClass('Drupal\example\Entity\ExampleUser');
  }
}

Next we can add custom methods that allow retrieval of data following some business rules.

namespace Drupal\example\Entity;

use Drupal\user\Entity\User;

/**
 * Extends the core user class.
 */
class ExampleUser extends User {

  /**
   * Load custom data associated with the user.
   */
  public function getExampleData() {
    // Return the entity returned by some business logic.
    // …
    return $entity;
  }

}

With the user class override in place, we can pass a user to our template.

return [
  '#theme' => 'example_template',
  '#user' => \Drupal::currentUser(),
];

Then in our template, that has a user entity passed in with the variable name user, we can access our custom method.

{% set entity = user.exampleData %}
<h4>{{ entity.label }}</h4>
<p>{{ entity.example_field.value }}</p>

Note that we defined our method as getExampleData, but we called exampleData. This is because the get prefix is whitelisted by TwigSandboxPolicy. Had we named our method loadExampleData, it would not have worked; however, adding loadExampleData to the whitelisted method names would fix it.

Sticking to methods that strictly retrieve data is best, and it honors the intentions set forth in TwigSandboxPolicy.

$whitelisted_methods = Settings::get('twig_sandbox_whitelisted_methods', [
  // Only allow idempotent methods.
  'id',
  'label',
  'bundle',
  'get',
  '__toString',
  'toString',
]);

The ability to call and expose methods on entities, fields, etc allows for simple theme hook definitions, while still allowing the template easy access to nearly any data it needs without bloated preprocess hooks. The real power begins when you chain these methods and entity references fields together to drill into your data model all from your template.
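For example, assuming a node template, chaining through the author entity reference requires no preprocess hook at all (treat this as a sketch; label is among the whitelisted method names):

```twig
{# Drill from the node into its author entity, then call a method on it. #}
<p>Written by {{ node.uid.entity.label }}</p>
```
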

It should be noted that this can begin to blur the lines in the MVC pattern. The view layer can quickly begin taking over logic previously owned by the controller. It’s up to you how far to take it, but either way, it is another powerful tool that Drupal 8 and Twig offer to make development easier.

Jan 23 2019

This coming weekend is the Drupal Global Contribution Weekend where small local contribution groups volunteer their Drupal development at the same time throughout the world. This year there are local groups gathering in Canada, England, Germany, India, Russia, Spain, and the United States. You can check out the Drupical map to see if there is a local group near you, but if there isn’t, the contribute to accessibility group is meeting remotely.

If you have not contributed to Drupal core before, you’ll need a couple of things to get started:

Helpful Resources for Contributing to Drupal

Setting up a Local Environment with Lando

To have a successful contribution weekend, you need a solid local development environment. You can use any local environment you are comfortable with, but I recommend a local development environment named Lando, which uses Docker containers under the hood.

Installing Lando

Lando works on latest versions of Linux, macOS, and Windows. As of writing this article, I recommend installing the latest release Beta 47 due to some regressions in the pre-release RC1.

Setting up Drupal

First you’ll need to clone the latest development branch of Drupal core locally using git. In your terminal, run:

$ git clone --branch 8.7.x https://git.drupal.org/project/drupal.git
Cloning into 'drupal'...
remote: Counting objects: 663588, done.
remote: Compressing objects: 100% (140811/140811), done.
remote: Total 663588 (delta 480786), reused 652577 (delta 471060)
Receiving objects: 100% (663588/663588), 144.13 MiB | 7.85 MiB/s, done.
Resolving deltas: 100% (480786/480786), done.
Checking out files: 100% (13558/13558), done.

Next, move into the newly created drupal directory, create a .lando.yml file, and add the following to the file:

name: drupal
recipe: drupal8
config:
  webroot: .
  php: "7.2"
  xdebug: true
services:
  appserver:
    build:
      - cd $LANDO_MOUNT && composer install
  nodejs:
    type: node:9
  phpmyadmin:
    type: phpmyadmin
tooling:
  node:
    service: nodejs
  npm:
    service: nodejs

To break down what’s happening here: we are running a local environment with Apache 2.4, PHP 7.2 (with Xdebug), MySQL 5.6, NodeJS 9.x, and PHPMyAdmin. You can use the nodejs service to work with Drupal’s frontend components and the phpmyadmin service for a GUI to view the database schema.

Lando provides tooling which wraps command line utilities with the lando command to run them in a Docker container. You can see what tooling is available by typing lando in the terminal:

$ lando
Usage: lando <command> [args] [options] [-- global options]

  composer                 Run composer commands
  config                   Display the lando configuration
  db-export [file]         Export database from a service
  db-import [file]         Import <file> into database service
  destroy [appname]        Destroy app in current directory or [appname]
  drupal                   Run drupal console commands
  drush                    Run drush commands
  info [appname]           Prints info about app in current directory or [appname]
  init [method]            Initialize a lando app, optional methods: github, pantheon
  list                     List all lando apps
  logs [appname]           Get logs for app in current directory or [appname]
  mysql                    Drop into a MySQL shell on a database service
  node                     Run node commands
  npm                      Run npm commands
  php                      Run php commands
  poweroff                 Spin down all lando related containers
  rebuild [appname]        Rebuilds app in current directory or [appname]
  restart [appname]        Restarts app in current directory or [appname]
  share [appname]          Get a publicly available url
  ssh [appname] [service]  SSH into [service] in current app directory or [appname]
  start [appname]          Start app in current directory or [appname]
  stop [appname]           Stops app in current directory or [appname]
  version                  Display the lando version

Global Options:
  --help, -h  Show help
  --verbose, -v, -vv, -vvv, -vvvv  Change verbosity of output

You need at least one command before moving on

Next, let’s start Lando:

$ lando start

After Lando successfully starts, it will output all the URLs for your local environment.

Great, we have Lando set up and ready to run Drupal. Now we just need to define some settings for Drupal to run with Lando. Create a settings file located at sites/default/settings.php with the following code:

<?php

if (getenv('LANDO_INFO')) {
  $lando_info = json_decode(getenv('LANDO_INFO'), TRUE);

  $databases['default']['default'] = [
    'database' => $lando_info['database']['creds']['database'],
    'username' => $lando_info['database']['creds']['user'],
    'password' => $lando_info['database']['creds']['password'],
    'prefix' => '',
    'host' => $lando_info['database']['internal_connection']['host'],
    'port' => $lando_info['database']['internal_connection']['port'],
    'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
    'driver' => 'mysql',
  ];

  // Config sync directory for Lando.
  $config_directories[CONFIG_SYNC_DIRECTORY] = '/app/config/common';

  // Default hash salt for Lando.
  $settings['hash_salt'] = 'BfHE?EG)vJPa3uikBCZWW#ATbDLijMFRZgfkyayYcZYoy>eC7QhdG7qaB4hcm4x$';

  // Allow any domains to access the site with Lando.
  $settings['trusted_host_patterns'] = [
    '.*',
  ];
}

Finally, let’s install Drupal. We’re going to add Drush to the project and use it to install Drupal with the standard installation profile:

$ lando composer require drush/drush:^9.4 && lando drush site:install standard --account-pass=admin --yes

You can now access your local installation of Drupal at https://drupal.lndo.site. The username is admin and the password is admin.

Oct 12 2018

When I began using Drupal, I was a site builder. I could find a combination of contrib modules to do just about anything without writing a line of code. I made Views that did the seemingly impossible with crazy combinations of filters and field formatters. I built Contexts that placed my blocks exactly where I wanted them, no matter how complex the requirements.

This all changed when I began writing modules and became a developer.

You have a problem? I’ll have a custom hook, a class override, and an event listener created shortly. Custom code to the rescue.

However, having custom code in your repo means owning it, and owning custom code is much like owning anything else that is custom. It offers the incredible freedom of building exactly what you need, but it comes at a cost.

Take a car, for example. A custom car can be built exactly to your specifications with no compromises. However, when it comes time for maintenance, there will be no readily available parts. No one else is trained in repairs of your car, so the local auto shop can’t help when it breaks down. Diagnosing and fixing the car is all on you. All of a sudden, that generic car that doesn’t meet all your needs but will suffice with a few modifications isn’t looking so bad.

Enough with the car analogy. Let’s examine the costs specifically related to owning custom code.

  • Upfront cost to write, revise, and review the code.
  • Regression tests must be written to cover the custom functionality.
  • New developers are not familiar with it and take longer to ramp up.
  • Documentation needs to be written, updated, and maintained.
  • Security issues must be monitored and fixed quickly.
  • Code must be updated when the site is migrated to a new platform.
  • Compatibility must be maintained with other libraries as they are updated.

Despite these costs, custom code may still be the right answer depending on your needs. When evaluating the best approach, asking the following questions will often help.

  • Are the downsides of an existing solution that doesn’t exactly meet your needs less substantial than the costs of a custom solution?
  • Can functionality from an existing library or framework be extended or overridden to provide a small targeted change that limits the scope of custom code needed?
  • Would a contribution to an existing module/library solve the issue and allow future development to be done with the assistance of an open source community?

If after a review of all of the options, a custom solution is truly needed, adhering to the following will help minimize the custom costs.

  • Strictly adhere to community code standards to make collaboration easier.
  • When extending functionality, only override the methods needed to make your change. Leverage the parent class as much as possible.
  • Architect code to allow for loose couplings, and leverage loose couplings when integrating with others’ code.

None of this is to say that custom code is wrong. It is often needed based upon a project’s requirements. This is simply an appeal to stop and analyze the needs at hand, instead of defaulting to a custom solution that may provide a quick fix but comes with many hidden long-term costs.

So perhaps I was on to something during my site building days by avoiding code. A developer’s goal is not to write more code; it is to solve problems, build out functionality, and, if necessary, write code. Ultimately it’s about delivering value, whether that is using existing tools, writing custom code, or a happy medium somewhere in between.

Aug 22 2018

A pull request (PR) is like a product looking for a buyer. The buyer, or pull request reviewer in this analogy, is trying to make an informed decision before they purchase (approve) the product. Thus, as a PR creator, you are a salesperson, and understanding the steps of the sales process will help you sell your product and allow buyers to make an educated decision about a product with features they may not be familiar with.

The Sales Process

Prospecting

Identifying customers and their needs is often a prerequisite for a code change. The ticket or user story usually already defines the need, so the customer comes to you. So far, this sales process is easy!

Preparation

Before writing code, it is important to research the codebase and evaluate existing solutions on the market, i.e., code that already does what you need. Understand the context of your code changes and the need behind them, similar to understanding a product or market before you go on a sales call.

Approach

As the solution is implemented, be constantly thinking about the needs of the buyer, or the pull request reviewer in this case. Code comments should be left with intentionality, always asking: how can I best help the reviewer of this PR understand that my solution is correct and convince them that my changes are needed? Remember that you may also be selling this same code to yourself months later when you are trying to figure out why it is there and how it works.

Presentation

Get your slide deck ready! The code changes are complete and it’s time to sell everyone on the changes you made. Others are now relying upon you to give them the information and context they need to make an informed purchase. Use the preparation, work, and knowledge from the previous steps to document the benefits of your code and why someone should buy it.

Some concrete examples of this are:

  • Determine whether this should really be multiple smaller PRs.
  • Perform an initial self-evaluation of your PR and try to find edge cases and flaws.
  • Find the obvious mistakes/major oversights or development code left behind.
  • Explain how this PR changes this because of that.
  • Leave comments on your own PR explaining why one approach was chosen over another.
  • Evaluate if those PR comments really should be code comments. More often than not, the answer is yes.
  • Remember that changes without context are very difficult to review.
  • Include screenshots of your code in action.
  • Provide links where reviewers can demo your code and see it in action.

This PR comment should be moved to the code.

Handling objections

Often there will still be questions about the features your product provides or how it responds to a given use case. Sometimes you might be trying to sell the wrong product altogether. Either way, treat questions and feedback as opportunities to improve your product or the presentation process to make future sales go more smoothly.

The buyer has an important role to play in the process too. The buyer’s feedback will help future sales processes go more quickly and improve the quality of future products. For those who often find themselves reviewing pull requests, we have a lot of thoughts on improving this practice as well.

Closing

With feedback in hand, it’s now time to close the sale. Don’t rush through this step, though; be sure to ask questions and understand why changes were requested, always with an eye on making future sales easier. Then address any concerns, refactor or fix your product, explain the changes, and make a final push to get the PR approved and merged, in other words: sold!

Follow-up

Hit the like button on some of the best feedback and send some thank you emojis on Slack. Note the areas of expertise of each reviewer so you can ask yourself what they would like to see in a future product and prepare to do it all over again, as the next ticket is surely not far behind!


This “sales” approach is not about process for the sake of process, nor is it intended to be a rigid workflow. The value of selling your code is in the benefits it brings to the pull request workflow within your development process. Some of these benefits are:

  • Better initial code through self reflection and review.
  • Decreased time spent by others reviewing pull requests.
  • Fewer bugs and regressions that slip through the approval process as reviewers are better informed.
  • Increased code quality as reviewers are able to focus on higher level code architecture due to increased understanding.

What other benefits do you see to this process? Do you have a favorite PR selling trick we missed? Most importantly, did we sell you on the process?

Jun 29 2018
Jun 29

I've written before about my search for a way to create a satisfactory local development environment. Among the pre-configured solutions I've tried, I still really like Drupal VM, but there's an issue that no out-of-the-box solution I've encountered can solve. That issue is that we often treat production deployments as something to be developed and implemented separately from development. I'd like to see that change.

One reason I'd like to see this change is because while the problem of building a local environment for code to run in has largely been solved with virtual machines and containers (there are many local development options), the configuration of the application (Drupal, for example) on the local machine still often incorporates manual steps such as importing databases, or pasting database credentials into configuration files.

Another reason I'd like to see it change is because it's a solved problem. We already routinely use automation to perform all of the steps that are needed to get a Drupal site up and running on a server as part of CI/CD processes or to provision new servers. From beginning to running site, the basic requirements to get a Drupal site running in a new server are roughly:

  1. Provision a server
    • Install & configure packages and services,
    • Start services
  2. Deploy Drupal
    • Deploy codebase to server
    • Run build tasks (e.g. Composer/npm/Gulp/Grunt)
    • Create or customize settings file
    • Import database dump
    • Run Drupal tasks (e.g. Drush)

One way or another, we already use automation to do all of these things (and more) on our production servers.

So knowing that all the component parts for a solution to the problem already existed, I created a proof-of-concept to illustrate one way to solve the problem (but please note: this is not a complete solution, or a tool you can simply begin using). To do this, I used tools that I'm familiar with, Vagrant and Ansible (note that as I mentioned above, there are multiple solutions to each part of this problem—I'm sure the same thing could be achieved with e.g. Chef and Docker).

Requirements gathering

The basic approach I wanted to try is quite simple: treat the local environment exactly like production. In other words, I wanted to build a development environment that not only resembled production, but was similar enough to it that I could use exactly the same process to set up, modify, or rebuild my development server as I would for one of our in-production servers.

So given that we use Ansible for many of our automation needs at Chromatic, I decided to use Ansible and Vagrant so that I could build as close a copy of a production environment as possible, and use our usual tools to provision that environment and deploy Drupal.

Putting all the parts together, our requirements are:

  • An application to build
  • A Vagrantfile that provides:
    • A development server
    • A simulated production server (to simplify the demo)
  • An Ansible playbook that can:
    • Run server configuration and/or application deployment
    • Run tasks on development and/or production
    • Provide host-specific values (including secrets!) to tasks

Creating a Vagrantfile to provide two servers (one for development, and one to simulate a remote production server) is straightforward, and this post is long enough, so I won't spend any time on that here (see Vagrant’s extensive documentation). It's the three specific requirements of the Ansible playbook that make or break the solution. Fortunately—spoiler alert!—Ansible includes features that make all of these things not only possible, but reasonably easy to achieve. Let's consider each in turn.

Meeting the requirements

Run server configuration and/or application deployment

It's not necessarily desirable to revisit the entire server provisioning process every time we publish a small code change to the site. So this means we need a way to target only the server configuration tasks or only the deployment tasks. And of course we should also be able to target both at once. Ansible's tags enable this by making it possible to limit the run to tasks with one or more specified tags.

The deployment.yml playbook contains two "plays," one to provision the hosts, and another to deploy the site to them. Each play contains a block like this:

  tags:
    - deploy

Specifically of note is that the "Provision hosts" play uses the "provision" tag and the "Deploy application" play uses the "deploy" tag. These two tags alone provide the following options for us:

  • Deploy and provision both hosts:
    ansible-playbook deployment.yml
  • Only provision both hosts:
    ansible-playbook deployment.yml --tags=provision
  • Only deploy to both hosts:
    ansible-playbook deployment.yml --tags=deploy

So it's possible for us to target either or both plays in the playbook at will.

Run tasks on development and/or production

Each play in the deployment.yml playbook also contains the following identical block:

  hosts:
    - deploy_dev
    - deploy_prod

deploy_dev and deploy_prod are groups of hosts defined in our hosts.yml file (in this instance, each group only contains one host). Similarly to tags, Ansible's groups allow us to limit what the ansible-playbook command does, except in this case, we can limit what hosts the playbook runs on:

  • Deploy and provision both hosts:
    ansible-playbook deployment.yml
  • Provision and deploy development only:
    ansible-playbook deployment.yml --limit=deploy_dev
  • Provision and deploy production only:
    ansible-playbook deployment.yml --limit=deploy_prod

Again, using the --limit flag, we can run the playbook against any combination of hosts.

Provide host specific values to tasks

Finally we come to what is arguably the most crucial of our three main requirements. There are many sorts of environment-specific variables that we may want to set on different hosts. For example in Drupal, we may want to provide different values for $settings['trusted_host_patterns'], or a different set of database credentials.

Using a feature rather like PHP's class autoloading, Ansible is able to automatically load variables specific to a single host. If a file or folder with the name of one of the groups defined in hosts.yml exists in the group_vars directory, its contents are automatically made available to the playbook when it runs on hosts in that group. Besides custom groups defined in inventory, Ansible also includes two default groups, all and ungrouped.

This means we can target variables very precisely to individual groups (and also to nested groups, though that's beyond the scope of this post). In the repo, the group_vars folder has the following structure:

├── all
│   ├── common.yml
│   └── vault.yml
├── deploy_dev.yml
└── deploy_prod.yml

So this means, for example, that when running tasks on the deploy_dev group, variables from all/common.yml, all/vault.yml, and deploy_dev.yml are available for use in tasks, but the variables in deploy_prod.yml are not. To see what differs between dev and prod in the repo, just compare the deploy_dev vars file to the deploy_prod vars file.
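The precedence logic is easy to demonstrate outside of Ansible. The following Python sketch is purely illustrative (the variable names are made up, and real Ansible precedence has many more layers), but it shows the essential rule: group-specific values override values from the all group.

```python
def resolve_group_vars(all_vars, group_vars):
    """Merge variables the way group_vars resolution works at its
    simplest: start from the 'all' group, then let the more specific
    group override any matching keys."""
    merged = dict(all_vars)
    merged.update(group_vars)
    return merged

# Hypothetical contents of all/common.yml and deploy_dev.yml.
all_common = {"app_name": "deploy-everywhere", "http_port": 80}
deploy_dev = {"server_name": "dev-01.deploy.local", "http_port": 8080}

resolved = resolve_group_vars(all_common, deploy_dev)
# Dev-specific values win; shared values fall through from 'all'.
```

The same merge run with `deploy_prod` vars instead would yield the production values, which is exactly why per-group settings files keep playbook tasks environment-agnostic.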

What about the secrets?

In the original list, I specified that our solution needed to "provide host-specific values (including secrets!) to tasks." If you inspect the contents of vault.yml, you'll find something a bit odd—apparently an encrypted blob!


We're using this file to store site-related secrets in the repository using Ansible Vault. This isn't necessary, but to achieve the one-step build described here and demonstrated in the repo, Ansible must have a way to read and write secrets such as database credentials or (as in this case) basic authentication passwords.

Deployment tasks and what would Drupal do?

Having theoretically satisfied the requirements, let’s run a deployment. The fastest way to do this is to use the quickstart.sh script in the repo (note that you must have all of Ansible, Vagrant, VirtualBox, vagrant-hostsupdater, and vagrant-vbguest installed). Follow these steps:

  1. git clone [email protected]:ctorgalson/deploy-everywhere.git
  2. cd deploy-everywhere && ./quickstart.sh
  3. ansible-playbook deployment.yml --limit=deploy_prod
  4. Visit http://dev-01.deploy.local in the browser (use deploy-dev and depl0y to enter)
  5. Visit http://prod-01.deploy.local in the browser

Once those tasks are complete (on an older Macbook Air, this takes around six minutes for me), you should see two similar but different pages:


Screenshot of loaded dev and prod index.html pages

To finish up our look at deployments, we'll take a short look at the contents of the app-deploy.yml tasks file. This file runs eight tasks:

  1. Check if node_modules exists (on local machine)
  2. Run npm install if it doesn't (on local machine)
  3. Run gulp build (on local machine)
  4. Use rsync to copy files to prod only (Vagrant handles this on dev)
  5. Create customized index.html file on each host
  6. Download the new index.html file from each host
  7. Verify that the http status code from step 6 was 200 for each host
  8. Verify that the timestamp in index.html matches Ansible's for each host
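Tasks 7 and 8 are simple but valuable smoke tests. The check they perform amounts to the following (a Python sketch of the logic only, not the actual Ansible tasks):

```python
def verify_deploy(status_code, page_html, build_timestamp):
    """Task 7: the fetch must have returned HTTP 200.
    Task 8: the page must contain the timestamp the deploy rendered
    into index.html, proving we're looking at the new build."""
    return status_code == 200 and build_timestamp in page_html

# A stand-in for the downloaded index.html.
page = "<html><body>Built at 2018-06-22T10:00:00Z</body></html>"

assert verify_deploy(200, page, "2018-06-22T10:00:00Z")
assert not verify_deploy(404, page, "2018-06-22T10:00:00Z")
assert not verify_deploy(200, page, "2018-06-23T09:00:00Z")
```

Failing either check fails the playbook run, which is what makes this deploy process safe to wire into a git hook later on.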

Even though this is a simple, static site, the build-and-deploy process is very similar to database-driven systems like Drupal. At the beginning of this post, I summarized a Drupal build in five steps:

  1. Deploy codebase to server
  2. Run build tasks (e.g. Composer/npm/Gulp/Grunt)
  3. Create or customize settings file
  4. Import database dump
  5. Run Drupal tasks (e.g. Drush)

Steps 1-4 in app-deploy.yml map neatly onto steps 1-2 of the list above, though the order is different (in app-deploy.yml, we're simulating the process of running a complete build on a build server and then deploying the resulting code; sometimes it's just as easy to build on the destination server). Ansible also provides the composer module and the command module, which are both useful in Drupal builds.

Step 5 in app-deploy.yml is identical to step 3 above in every respect but the specific file. In combination with group_vars and Ansible Vault, it's easy to see how we could use the same strategy to vary the content (or output separate versions of) settings.local.php in a Drupal context.

In app-deploy.yml, we don't do anything comparable to importing a database, but Ansible has tools that make this fairly easy. Use the get_url module or the copy module to move a sql dump onto the server, then use either the shell module or the mysql_db module:

# Use mysql_db module:
- name: Import db dump
  mysql_db:
    name: drupal_db
    state: import
    target: /tmp/drupal_db-20180622.sql

# Use shell module (requires drush!), since the redirection needs a shell:
- name: Import db dump
  shell: "drush sql-cli < /tmp/drupal_db-20180622.sql"
  args:
    chdir: "/var/www/drupal/web"

Finally, though app-deploy.yml doesn't need to run any shell commands (since it's just an html page...) Drupal does. We need to update the database, clear caches etc. Fortunately, as the above database import command shows, adding drush commands to a playbook is very simple.

Bonus—git hooks!

There's one final feature that I've added to the proof-of-concept repo to show one of the real benefits of setting up projects to use production deployment processes on local development machines; I've added a pre-push Git hook. If you haven't used them before, Git hooks provide "…a way to fire off custom scripts when certain important actions occur." They can be extremely useful for performing tasks such as linting before commits, or running tests.

Git hooks, however, are not copied when repositories are cloned (among other reasons, one should consider the security implications). But it's possible to add them to a repository and then manually enable them with a symlink. In this repo, there is a .githooks/ directory containing one hook, an ordinary bash script, that runs at the beginning of the git push command.

The hook essentially consists of this command which tells Vagrant to re-provision the dev server, passing the tag deploy to Ansible:

VAT="deploy" vagrant up --provision dev

In other words this script, which runs every time a git push command is issued, runs the same deployment tasks as are run on production against the dev server, and prevents the push if the script fails. So in our search for a way to 'treat local exactly like prod', we've also found a way that the production deployment can be tested with every important code change (note: a hook can be skipped by passing -n or --no-verify to the relevant command).

To test it (you'll need to fork the repository first), symlink it into place with ln -s .githooks/pre-push .git/hooks/pre-push, make a change to the repository, commit it, and run git push.

Concluding remarks

I'm pretty enthusiastic about this method not just of structuring development environments, but of setting up entire projects. The server code can be committed to the project repository and shared with the entire development team, the modification to the workflow provides an early warning to developers when code changes threaten the production build, and the workflow completely eliminates certain classes of problems (e.g. with encrypted secrets, changing a production server's database password requires only a one-line code change and a one-line deployment). On top of all that, it doesn’t interfere if one or more stakeholders wants to use some other system.

But we need to consider the possible objections to adopting this as a workflow. Is it:

  • possible?
  • effective?
  • practical?
  • universal?
  • secure?

Regarding possibility, I think the proof-of-concept repo, and the day-to-day use of similar Ansible tasks in production show that it's at least possible.

In terms of effectiveness, I also think that the proof-of-concept tidily solves my initial problem of having to set up and run deployments on local and remote servers with different tools and processes.

In my opinion, this solution also scores well for practicality, though here is where I see some of the more convincing objections. First, it's arguably somewhat more complex than other local development solutions (there are a lot of files in the repo's devops/ directory!). To this objection I answer that we have to create the deployment automation anyway, frequently for two or more remote environments once QA and Prod are considered. Adding local into the mix is not a lot of extra work.

Still on the subject of practicality, I admit to not being completely satisfied with the local software stack. Though I use this same stack daily, the interaction of Vagrant, VirtualBox, NFS, and other components is sometimes prone to unexpected problems (certain Vagrant boxes, for instance, seem to provoke kernel panics). These kinds of problems haven't actually kept me from working on this in spite of the myriad other local development solutions available, so this might not be that strong an objection. And besides, there are VirtualBox alternatives that I could explore.

With respect to universality, as far as I can tell, this is not a universally applicable pattern. It's best suited to the more or less monolithic LAMP server pattern. The greater the degree to which a project departs from that pattern, the less applicable I think this will be.

However it's worth noting that for projects where this does work, it has a good chance of literally working everywhere: development, production, CI, tests (though Vagrant can't run in Travis CI), and in git hooks.

Lastly, I don't see any particular reason to doubt the security of this as a workflow. The most unrealistic thing about this repository is the idea that individual developers would or should deploy directly to production from a workstation. But we already handle this class of problems with deploy keys at GitHub or in other services. Otherwise, this workflow is simply not very different from what we do already.

If you've made it all the way through this post, congratulations! I hope you've found it useful or at least informative. If you have questions or comments, reach out to us on Twitter at @ChromaticHQ or to me at @bedlamhotel.

Jun 21 2018
Jun 21

Drupalgeddon. Most Drupal developers roll their eyes at the term, but the truth is, catastrophic things can and will happen if you neglect this, or other critical security updates for days or even hours after they are released. Malicious hackers are ready and waiting to exploit unpatched Drupal sites.

We recently had an organization come to us who was dealing with just this very problem, and we were able to repair their exploited site. It was a time-intensive, painstaking process, which could have been avoided by applying regular security updates.

The State of Things

This site had been infiltrated so that every piece of content (node) had some JavaScript inserted into it, redirecting visitors to a malicious site. The site had not been updated since Drupal 7.44 – nearly two years ago – putting it at enormous risk. The first thing we did was patch the codebase and get that deployed to the production site. They had already swapped in a week-old database to get rid of the malicious links, at the expense of losing a week’s worth of content.

Next, we spun up a totally clean server. Once the hackers were in, we really don’t know what they could have done, so creating a new server was much faster than trying to scan every last file. We manage our server creation with Ansible, so this was relatively quick and painless.

We set up the new server with the clean code and the tainted database to see what we could do. First, we enabled “Advanced Content Filtering” on CKEditor so that the inserted scripts wouldn’t be rendered on the page. This solved the redirect issue, which was a quick win. However, we still needed to get the malicious code out of the database. To accomplish this, we created an update hook to load the body of every node, look for the malicious string, remove it, and re-save the node.
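The scrubbing step of that update hook boiled down to a find-and-remove pass over each body value. Sketched below in Python for illustration (the actual hook was Drupal PHP, and the real malicious string was specific to this site; the pattern here is a stand-in):

```python
import re

# Stand-in for the injected redirect script; the real payload differed.
MALICIOUS = re.compile(
    r'<script[^>]*>[^<]*window\.location[^<]*</script>',
    re.IGNORECASE,
)

def scrub_body(body):
    """Strip the injected redirect script, leaving legitimate markup alone."""
    return MALICIOUS.sub('', body)

infected = '<p>Welcome!</p><script>window.location="http://evil.example";</script>'
assert scrub_body(infected) == '<p>Welcome!</p>'
assert scrub_body('<p>Clean content.</p>') == '<p>Clean content.</p>'
```

In the real hook, this transformation ran inside a loop that loaded each node, rewrote the body, and re-saved it, which is exactly why node counts matter for choosing between an update hook and a batch process.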

Luckily, this site’s nodes were in the hundreds; if it had been tens or hundreds of thousands, a different approach likely would have been needed.

We then took a look through the Drupal files directory to make sure there wasn’t anything malicious added to the file system. Thankfully, everything looked normal. With a clean codebase, database, and file system, we pointed the DNS to the new server, and decommissioned the old one. Problem solved. Did I mention the organization had two sites with the same problem? So we did all of this twice.

The Cost

We don’t know what this cost the organization in terms of their customers’ trust, but we imagine there was an impact there; as a user, I’d be hesitant to revisit a site that sent me to a suspicious link. We can tell you that it took exponentially more time to clean up than if we had been tasked to regularly keep the site up to date.

How to Avoid It

Updating your site isn’t a particularly complex undertaking if you do it regularly; sign up for security alerts, know when the release windows are, and be prepared to apply the update. In fact, you may have someone on your team already who is capable of this. If not, or you’re unsure, keep a professional at the ready. A team like Chromatic can make sure your site is safe and always up-to-date, along with any other security concerns you may have. If you’re not sure if your site is vulnerable, contact us today!

Jun 07 2018
Jun 07

If you search “why choose Drupal”, many compelling reasons for adopting one of the world’s most powerful CMS solutions pop to the surface: robust and innovative functionality, high extensibility, API-driven flexibility, built-in web services, speed and performance, multilingual support, rich content authoring experiences, 3rd-party integrations, mobile-first responsiveness, and all the other perks that come with using an open source, highly customizable, scalable, secure, and community supported platform like Drupal.

One of the most compelling reasons we think everyone should consider Drupal as the robust enterprise content management framework of choice remains the ease and swiftness with which content can be migrated whether it’s from a legacy database, some other platform, or from an older version of Drupal itself. Along with the host of other great functionality you get out-of-the-box, making content migrations relatively painless lowers the bar immeasurably in making the switch to a platform like Drupal. Depending on the complexity of translating a legacy data model into one that works for a modern web application, the Migrate modules that now ship with Drupal core can make that process a virtual breeze.

On a recent project, we migrated a client’s Drupal 7 site to Drupal 8. With Migrate in core, it was surprisingly straightforward and dare I say easy to move content, massage the data, and populate an empty shell of a site with years of accumulated articles, files, and media. What blew my mind (and our clients’ minds as well) in working with the migration scripts we wrote, is the administrative user interface that works out-of-the-box to allow monitoring of the status of their custom migrations as well as even enable non-technical folks to execute the running of scripts.

A screenshot of the migrations user interface.

This feature came in extremely handy when we had to solve a workflow/editorial problem wherein our client wanted the ability to manually import content from an RSS feed. Migrate to the rescue!

Some context: there are a number of contributed modules still catching up to the version leap to Drupal 8. Feeds is one of them. In Drupal 7, Feeds is a powerful and popular solution for aggregating content from external sources. While the open source community is hard at work porting that module’s features to Drupal 8, we happily discovered that Migrate in Drupal 8 core can handle the task with surprising ease.

To give proper attribution, this post by Campbell Vertesi was the first inkling that we could use Migrate to do the heavy lifting. With just a few field mappings in a simple YAML file, our client was thrilled to see the admin UI that empowered them to import content from an external RSS feed. Once the configuration was exported, they could navigate to the migration interface, click a button, and all their content from the feed showed up as content on their site. It was a win for everyone.

A screenshot of the Drupal "execute migration" page

Here's the yaml file for the above migration:

id: rss_community
label: 'RSS feed for Community'
status: true
migration_tags:
  - RSS
migration_group: rss

source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://rssfeedurl'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    -
      name: guid
      label: GUID
      selector: guid
    -
      name: title
      label: Title
      selector: title
    -
      name: pub_date
      label: 'Publication date'
      selector: pubDate
    -
      name: link
      label: 'Link'
      selector: link
    -
      name: description
      label: 'Description'
      selector: description
    -
      name: tag
      label: 'Tag'
      selector: category
  ids:
    guid:
      type: string

destination:
  plugin: entity:node
  bundle: story

process:
  title: title
  'body/format':
    plugin: default_value
    default_value: full_html
  'body/value': description
  'body/summary': link
  field_tags: category
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  sticky:
    plugin: default_value
    default_value: 0
  type:
    plugin: default_value
    default_value: story
  status:
    plugin: default_value
    default_value: 1
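To make the selector mechanics concrete, here's roughly what the simple_xml parser is doing with item_selector and the per-field selectors. This Python sketch is purely illustrative (it uses a tiny inline feed and is not the Migrate plugin's actual code):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in feed; a real migration fetches this over HTTP.
FEED = """<rss><channel>
  <item><guid>1</guid><title>First post</title><category>news</category></item>
  <item><guid>2</guid><title>Second post</title><category>events</category></item>
</channel></rss>"""

# item_selector (/rss/channel/item) picks the source rows; each
# field's selector is then evaluated relative to the item.
rows = [
    {
        "guid": item.findtext("guid"),
        "title": item.findtext("title"),
        "tag": item.findtext("category"),
    }
    for item in ET.fromstring(FEED).findall("./channel/item")
]
```

Each resulting row is what the process section maps onto node fields, with guid serving as the unique ID so re-running the migration updates rather than duplicates content.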

Ultimately choosing the right CMS for your application depends on a lot of factors. Here at Chromatic, Drupal remains one of our favorite enterprise solutions and it’s easier than ever to make the switch (or upgrade your existing Drupal site) with the help of Migrate in core.

Let us know why you love Drupal @ChromaticHQ !

May 23 2018
May 23

It’s hard to believe DrupalCon Nashville was over a month ago! We have been busy here at Chromatic ever since, but we wanted to give a recap of the conference from our point of view.


Monday was a day of summits! Alanna attended the non-profit summit, which is always an informative and enjoyable day. She came away with some insights on accessibility and working with RFPs. Adam and Alfonso attended the decoupled Drupal summit, during which they learned how others in the industry are tackling many of the same challenges they’ve faced working on a multi-site decoupled platform for one of our publishing clients. Clare went to the Higher Education summit where she listened to the pain points of universities trying to upgrade their hundreds (thousands?) of sites to Drupal 8.

We went out for a great team dinner at Skull’s Rainbow Room, enjoyed some cocktails, live music, and amazing food, and caught up on the day.


Tuesday the sessions began, and it was a huge day for Chromatic! We had a third of our company presenting this year!

We kicked it off with Alanna giving “Women in the Tech Workplace” and a rousing Q&A.

[embedded content]

Next up was Chris giving his first DrupalCon session, “So You Want To Start Your Own Digital Agency?” which was a huge hit.

[embedded content]

The afternoon continued with Märt delivering “Drupal 8 Cache API and Tacos - a Delicious Real-World Example!” (Sadly, this video isn't available, but the slides are!)

Märt presenting at DrupalCon Nashville

The last Chromatic session of the day was Gus with “Everybody Loves Performance: Easy Audits and Low-Hanging Fruit.”

[embedded content]

Tuesday also included a BoF led by Adam and Alfonso, “Decoupling Drupal.” Developers of varying levels of experience with decoupled architectures shared some of their struggles while others shared insights and lessons learned.


On Wednesday, Kim ran a really fun BoF, “Stranger than Fiction: Life + Work Lessons from Pop Culture.”

Kim leading a BoF at DrupalCon Nashville

Alanna was part of two panels, both of which were really well-received and productive. First up was the “Community Convos: Governance Retrospective” panel:

[embedded content]

Next up was the “Community Convos: Drupal Diversity & Inclusion” panel. Both of these had excellent discussions.

[embedded content]


Thursday we had our final Chromatic session, with Ryan giving a blow-your-hair-back talk on “Ethereum and React: An introduction to building your first web dApp.”

[embedded content]

After the closing session on Thursday, we met back up for dinner, drinks, team bonding, and some much-needed relaxing! It was a busy, fun, and productive week.

The team relaxing after a week of fun and work in Nashville.

See you next year in Seattle!

May 15 2018
May 15

The Taming Twig series highlights common problems encountered when using Twig and how to fix them.

Twig is a wonderful tool when building templates for your Drupal 8 themes. It is easy to dive in and use basic patterns and conventions to create templates quickly. But it can be a pain figuring out what data is available and how to access it. In order to take out the guesswork, you need a solid debugging strategy. Let’s lay the groundwork for this series by surveying the debugging environment for Twig.


Setting up debugging requires updating the services.yml file. Under twig.config we will want to make sure debug and auto_reload are set to true and cache is set to false.

# sites/default/services.yml
parameters:
  twig.config:
    debug: true
    auto_reload: true
    cache: false

This will inject commented markup, visible when we inspect our pages, that shows which template is being used, designated by the x.

<!-- THEME DEBUG -->
<!-- THEME HOOK: 'node' -->
<!-- FILE NAME SUGGESTIONS:
   * node--1--full.html.twig
   * node--1.html.twig
   * node--article--full.html.twig
   x node--article.html.twig
   * node--full.html.twig
   * node.html.twig
-->

More detail about why a specific template is being loaded can be found here.

Remember: Only enable debugging on local or dev environments.

Debug Functions


dump() is a default Twig function that accepts a variable as a parameter and outputs its value on the page. Under the hood, dump() is just a Twig wrapper for the var_dump() function. For example, {{ dump(variable) }} outputs:

array (size=17)
    '#theme' => string 'field' (length=5)
    '#title' => string 'field_example' (length=13)
    '#label_display' => string 'hidden' (length=6)
    '#view_mode' => string 'full' (length=4)
    '#language' => string 'en' (length=2)
    '#field_name' => string 'field_example' (length=13)
    '#field_type' => string 'text' (length=4)
    '#field_translatable' => boolean true
    '#entity_type' => string 'node' (length=4)

This is a decent start when debugging a variable. We can inspect our variables in the output above, printed at the top of the page. Yet oftentimes what’s returned can be difficult to navigate. That’s where Kint comes in!


Kint is a tool that helps organize the values returned in a debug call. Kint can be installed with the devel module.

Devel module setup boxes

Enabling devel and devel_kint will give us the ability to call

{{ kint(variable) }}

within our templates:

Devel Kint Output

This is more usable than dump() as the values returned are nested under their array keys. This makes it easier to isolate a particular value within a variable or object.

While dump and Kint are good tools, I find that kint() often runs into issues with memory. Lucky for us, there's a contributed module that combines the performance of dump() with the usability of kint(): twig_vardumper.

Twig VarDumper

This module gives us the {{ vardumper() }} function from Symfony's VarDumper component. For those who are familiar with dpm in Drupal 7, the output is similar, letting us easily dig into objects and multidimensional render arrays.


Working with Twig gets that much easier when we use these tools.

Next up in the Taming Twig series, we'll take a closer look at the memory issues that occur when using debug functions in Twig, how to fix them, and how to solve a white screen of death. If you’re impatient and want a peek at the whole series, check out my talk from MidCamp 2018.

May 08 2018
May 08

With the advent of the media initiative, Drupal now has a clear best-practice way of handling images, audio, and other forms of media. Many sites (including ours) were previously using simple file fields to store images. To update our content model, a migration was in order. However, a traditional migration using the Migrate API seemed like overkill for the scope of our content. This is likely the case for many other small sites, so let’s walk through our solution step by step.

Create Media Entities

The first step is to create the new media entities. For this example we will be using a media bundle named image. A file field named field_media_image should be added to this new entity.

Create Media Reference Fields

Next we need to create entity reference fields to replace our existing file fields on all of our content types. Leave the file fields in place for now, as we will use them as the source of our migration.

Migrate Image Files

With our limited content, we were able to do this process in an update hook without the aid of the batch process. This same process could be easily adapted for batch integration if needed though.

/**
 * Create media entities from existing file fields.
 */
function example_update_8401() {
  // Load all of the article and page nodes.
  $nodes = \Drupal::entityTypeManager()
    ->getStorage('node')
    ->loadByProperties(['type' => ['article', 'page']]);
  foreach ($nodes as $node) {
    // Verify that the file field has a value.
    if (!empty($node->field_file_image->entity)) {
      // Create the new media entity and assign it to the new field.
      $node->field_media_image->entity = example_create_media_image_entity(
        $node->field_file_image->entity,
        sprintf('Updated image for node "%s".', $node->getTitle())
      );
      $node->save();
    }
  }
}

To aid in the creation of our media image entity, we created a helper function to encapsulate its creation.

use Drupal\file\FileInterface;
use Drupal\media_entity\Entity\Media;

/**
 * Creates a media image entity from a file entity.
 *
 * @param \Drupal\file\FileInterface $file
 *   The existing file object.
 * @param string $alt
 *   The image alt text.
 *
 * @return \Drupal\media_entity\Entity\Media
 *   The media entity.
 */
function example_create_media_image_entity(FileInterface $file, $alt = NULL) {
  $media_entity = Media::create([
    'bundle' => 'image',
    'uid' => '1',
    'name' => $file->getFilename(),
    'status' => Media::PUBLISHED,
    'field_media_image' => [
      'target_id' => $file->id(),
      'alt' => $alt,
    ],
  ]);
  $media_entity->save();
  return $media_entity;
}


With the new field in place and the data successfully migrated, the file field can now be deleted from our nodes. This can be done via the admin UI or with another update hook depending on your site’s workflow.

Shiny New Toy

With the new media entity content model, additional data can be easily associated with each image by simply adding fields to the media entity. Now functionality such as caption text, copyright information, or alternate crop data can be handled with ease!

Apr 30 2018
Apr 30

We recently helped implement a Japanese translation for a client’s site - which was a pretty sizable challenge! The site was already broken down by country, but all the countries were in English. We ran into some unexpected challenges and wanted to break down how we overcame them.

How We Did It

We used a handful of modules to implement the basic translation functionality: locale, i18n, i18n_block, i18n_menu, i18n_string, and i18n_translation. These modules got us most of the way - read on for how we handled the rest!


Blocks

One major frustration was that much of Drupal's built-in url detection for functionality such as placing blocks ignores the language code in the url, which is how we were handling the translations. We could have likely written custom code for this, but for most blocks, we solved it by creating a context that detected both the language of the page and the language-neutral/English url. This got extra complex with pages for the Japan site that were in English and Japanese. We were not using “en” in the urls - the English urls had nothing added for the language, and the Japanese urls had “ja.” So the English url for the Japan page would be www.example.com/jp/my-content and the Japanese url for the Japan page would be www.example.com/ja/jp/my-content. Phew!

Menus

Menus are a translation problem all their own, and there are several ways to tackle them. Did we want separate menus for each language? Or did we want to translate the existing menu? We started with the former approach, thinking that the content could differ. This was difficult to implement and we wound up with a bug we could never find the root cause of or fix. If you switched from a Japanese language page to an English language page, or vice-versa, the menus would disappear. If you cleared the cache, they’d come back. It was bizarre, and despite a good amount of digging, we never figured it out (if you’ve run into this, we’d love to know! Give us a shout on twitter @ChromaticHQ!). With the launch date looming, we decided to take a different approach and translate the existing menus (thus having only one instance of each menu, in two languages). This went much more smoothly!

Custom Language Toggles

We used the i18n suite of modules, which has a ton of excellent built-in functionality. It also added some things we didn’t want, like some language toggle links and buttons that weren’t quite right, and in places we didn’t want. We had to alter a couple of views to get rid of these, and create our own buttons with the proper links and in the proper places.

Views

One odd problem we had was with Views. The client had a view on a page that showed all of a certain content type, and naturally, wanted the English page to only show results in English, and Japanese results when on the Japanese page. This was easy enough - except that most of the English results had been left as “Language Neutral.” We’d hoped to be able to get the view to show all of these results, but quite a bit of debugging left us puzzled as to why we couldn’t make the default language (en) and “language neutral” both appear. Since those neutral nodes were, in fact, English, we solved the issue by running an update hook to change them to English. Problem solved!

Translating Various Strings

The bulk of translation on this site involved nodes, but there were a handful of other strings to be translated, as well. Thanks to good coding practices, these were all wrapped in t() and available to us to translate. We did run into some frustration with translating blocks, however. After setting up the block to be translatable, when a user clicks “translate,” they are directed to a form with a greyed-out field. The solution to that was going back to the string translation interface and searching for the string. This is on my list of possible contrib issues to go look into!

Lessons Learned

We spent most of our time getting the translation piece from 85% done to 100% done - it’s that final push that can be the hardest! When dealing with some unknowns, always pad out extra time. Be flexible, and don’t be afraid to abandon your current approach and try something completely different instead of spending hours in the weeds troubleshooting. You’re almost never the first person to try something in Drupal - so let the community help you! We used custom code to put the pieces together, but we didn’t reinvent the wheel for this project. We utilized existing contrib modules and core functionality to get everything working smoothly.

There was also a good deal of trust involved in this project as no one on our team, nor our client liaison, spoke or read Japanese! The client had an internal review team for that, and we simply had to trust all of the content we were given. It was a little unnerving working on something in a completely different alphabet and having no idea if there were any mistakes, but everything worked out. I’ve since worked on more multi-language sites and gotten more used to the feeling of looking at a page without knowing what the content says. It takes some getting used to!

Apr 03 2018
Apr 03

At Chromatic, when we are collaborating with our clients on a website or product, we typically work in an agile, iterative process. As part of that process, it is important for all stakeholders to be able to easily review and approve changes to a site as they are being made, but this can frequently be a pain point. There are often members of the team who are less technical, may not have a development instance of the website, or an interest in checking out git branches. Frankly, even for users that are willing and able, this process is often an inefficient use of everyone’s time.

Enter Tugboat

Tugboat describes itself simply as “a complete working website for every pull request.” The service integrates with your Github, Gitlab, or Bitbucket repositories and generates a preview environment whenever a pull request is created allowing you to review every feature or bug fix with automatic builds. This preview will live for the life of the pull request, gets updated when new code is pushed, and is torn down when the pull request is merged. On top of the convenience of having everyone review changes in a shared environment, this also gives you a chance to test the deployment process itself. Your deployments are automated, right?

[embedded content]

The Base Preview

The base preview is Tugboat’s secret sauce for efficiency. Building each Tugboat preview from scratch could take a while depending on your build steps, but Tugboat accelerates this process by allowing you to specify one of your preview instances as the “base preview”. For us, this is usually the “master” branch preview since all of our branches and pull requests are created off of this branch. When you have a base preview enabled, a new pull request preview will “clone” the base preview with its database and configuration intact, then pull in your pull request code changes, and run the build steps you have specified. In practice, this means that your pull request preview environments are often built in a matter of seconds instead of minutes.

Streamlined and Out of the Way

Tugboat does something extremely valuable; it allows us to iterate more quickly and flag a greater number of issues earlier. Without a tool like Tugboat, even our rigorous code review process will have blind spots by excluding stakeholders who are not willing or able to check out git branches to their own development instance of the site. Better still, Tugboat empowers our clients to feel like they have their hand more firmly on the ship’s tiller during the development process.

Mar 26 2018
Mar 26

DrupalCon Nashville is coming up and Chromatic will be showing up in full force this year! Catch one of us around the halls for some swag (our shirts are the softest!), tweet us @ChromaticHQ, or come to one of our sessions!

We’ll be attending a few of the summits on Monday, so if you see us, introduce yourself! Make sure to stop by the Drupal Diversity & Inclusion booth again this year where you might spot one of us, and take your picture at the photobooth!

Here’s where you can find us:


Tuesday, April 10:

Women in the Tech Workplace

Alanna Burke - 12:00 - 1:00 PM in Room 101D – Business

Clearly there are not enough women in Tech. Let's discuss why that is, and learn from companies that are getting it right.

So You Want to Start Your Own Digital Agency?

Chris Free - 2:15 - 2:40 PM in Room 205C – Business

Real world advice on what it takes to go from a fledgling freelancer to a successful agency owner. A guide covering the essentials.

Drupal 8 Cache API and Tacos - a Delicious Real-World Example!

Märt Matsoo - 4:20 - 4:45 PM in Room 202 – Backend Development

Understand Drupal 8’s Cache API with the help of tacos! Learn the whys and hows of caching as we build a custom module.

Everybody Loves Performance: Easy Audits and Low-Hanging Fruit

Gus Childs - 5:00 - 6:00 PM in Room 101D – User Experience

How to audit page performance through the lens of user experience, measure results, and create realistic goals.

Thursday, April 12:

Ethereum and React: An introduction to building your first web dApp

Ryan Hagerty - 2:15 - 3:15 PM in Room 205AB – Horizons

Jump into the blockchain with this introduction to building decentralized web applications with Ethereum and Solidity.

Birds of a Feather:

Tuesday, April 10:

Decoupling Drupal

Adam Zimmerman & Alfonso Gómez-Arzola - 12:00 - 1:00 PM in Room 102A

This BoF is an opportunity to connect and discuss the benefits and challenges of a decoupled architecture using Drupal.

Wednesday, April 11:

Stranger than Fiction: Life + Work Lessons from Pop Culture

Kim Sarabia - 3:45 - 4:45 in Room 102B

This BoF is for anyone who wants to take a break to have a lighter, fun discussion about how technology work has been depicted in film and TV and if there could be any insights gained from it that could be applied to our daily work.

Community Convos:

Wednesday, April 11:

Community Convos: Governance Retrospective

Alanna Burke (panel discussion) - 2:15 - 3:15 in Room 101E

The volunteer group that facilitated Community Governance discussions invites you to share your thoughts and collaborate on next steps.

Community Convos: Drupal Diversity & Inclusion

Alanna Burke (panel discussion) - 5:00 - 6:00 in Room 101E

DD&I works on diversity initiatives in the Drupal community. Attend this convo & have your voices heard to affect change in the community!

Mar 21 2018
Mar 21

Managing Drupal sites with composer brings a number of benefits. However, when installing Drupal dependencies from source (an option offered by composer), you also lose the functionality provided by Drupal core’s “Available Updates” page. Thankfully Composer will allow you to keep tabs on the available updates for all of your project’s dependencies, including Drupal core/contrib.

Tracking Dependency Updates

Running composer outdated from the top level of your composer-managed repository produces output similar to the screenshot below. The results include dependencies of your dependencies (such as those of Drupal core), but you can limit the checks to those dependencies that are directly required by the root package by running composer outdated --direct.

Updating Dependencies

If you are following composer best practices and avoiding exact version constraints in composer.json by using the caret (^) or similar version ranges, then running composer update with no arguments or flags could result in a large number of dependencies being updated at one time. I recommend limiting updates to a single dependency, or at least to a group of related dependencies, at any given time.

composer update drupal/token --with-dependencies

For example, if you wanted to update Drupal’s token module, you would use the command composer update drupal/token --with-dependencies and it would update it to the latest available version that matches your version requirements defined in composer.json. Limiting updates to a single dependency at a time has the practical benefit of allowing you to more easily trace a bug to its origin if one is introduced via an update.

Rebuilding composer.lock – A Bonus

composer update --lock

If your reaction to my recommendation to never run composer update on its own was to break into a cold sweat thinking about “lock file out of date” warnings, composer update --lock is for you. Occasionally you may want to rebuild the lock file, without making any changes to your dependencies; this option is especially useful when trying to resolve merge conflicts in composer.lock.

[--lock]: Only updates the lock file hash to suppress warning about the lock file being out of date.

Mar 07 2018
Mar 07

“Decoupled Drupal” sounds cool and just about everyone else seems to be either doing it or talking about it, so it must be the best solution for you, right? Well, maybe. As with most things, the answer is more nuanced than one might think. In this post, we’ll explore some of the questions to ask along the way.

What is “decoupled Drupal”?

In a conventional web project, Drupal is used to manage content and to present it. In this sense, it is monolithic because your entire project sort of “lives” inside Drupal.

In contrast, decoupled Drupal — also known as “headless Drupal” — is a strategy by which Drupal is used strictly as a content management system without a presentation layer. That is, there are no public-facing themes and templates; there is only content and the administrative UI. In this strategy, Drupal exposes an API (application programming interface) for other applications to consume. This opens the door for a broad gamut of languages, frameworks, and platforms to ingest and present your content.

What Are the Benefits Of Decoupling?

This is, perhaps, the most important question. Decoupled architectures alleviate many of the problems that often accompany monolithic content management systems. Some of these are:

  • Simplified upgrade paths. A CMS upgrade doesn’t have to affect the presentation layer, only the CMS.
  • Easier troubleshooting. Bugs are easier to locate when your platform is composed of smaller, discrete parts instead of one giant application.
  • Easier to harden against regressions. Back-end developers can introduce changes that break front-end functionality more easily in a monolithic application. In a headless strategy, the API can be tested easily, helping you catch potentially breaking changes earlier in the process.
  • Focused tooling; granular deployments. Developers need only concern themselves with the setup and tooling specific to their part of the stack. Back-end devs don’t need to compile CSS; front-end devs don’t have to worry about composer installs. Likewise, deployments in decoupled projects can be much smaller and more nimble.
  • Clearer separation of scope and concerns. Since the CMS is just an API, it is easy to establish clear roles based on your content strategy. In contrast, a monolithic CMS can make it difficult to draw a line between syndication of content and serving a website.
  • Improved performance. While a monolithic Drupal site can perform very well, the need to frequently bootstrap Drupal for user requests can slow things down. Depending on your project’s needs and your caching strategy, these requests might be inevitable. A headless architecture allows you to build a front-end application that handles these requests more efficiently, resulting in better performance.
  • Divide and conquer development. Teams are able to develop in a decoupled manner. Development of a new feature can happen simultaneously for both the front-end and back-end.

By now you’ve probably noticed that headless Drupal has many advantages. If you’re ready to get on the decoupled bandwagon, hold your horses! As great as it sounds, this strategy introduces paradigm shifts on several levels. We’ll walk you through some high-level considerations first, and then go into some implementation challenges that we’ve faced on our own decoupled projects.

What To Consider Before Decoupling

Before you get into the nitty gritty details, you must look at all the factors that can influence this decision. Here are the high-level concerns you should consider before making your decision:

Your Team

The first thing to consider when examining whether Decoupled Drupal is right for your project is your team. Having your project split into multiple codebases means that your developers will naturally coalesce around their part of the tech stack. These subteams will need to communicate effectively to agree on roles and negotiate request/response interaction contracts.

Also consider developer skill levels: Are your most experienced problem-solvers also very strong Drupal developers? If so, you might need to hire a subject matter expert to help strengthen your front-end application team.

Project managers and operations staff will also find themselves supporting two or more teams, each with their own resourcing and infrastructure needs.

Decoupled is great if…

  • Team members’ experience levels and expertise is evenly distributed, or…
  • You are able to hire subject matter experts to offer guidance where your team needs it.
  • Project managers can handle multiple projects with separate teams.
  • Your DevOps team can handle the added complexity of satisfying the infrastructure needs of different applications.

Interacting With Your Data

When decoupling Drupal from the presentation layer, you’ll need to decide whether Drupal should be the owner and sole arbiter of all your content. This is a no-brainer if all of your content is in Drupal, but when other services complement your data, should your front-end application talk directly to those other services, or should it go through the Drupal API? There is no one correct answer to this question.

For example, you might have comment functionality for your project provided by a third-party service. From a performance perspective, it makes sense for your front-end application to make a separate and parallel request to the comments API while it requests data from your Drupal API. However, if multiple client applications need access to the comments, it might make sense to fetch this content from Drupal so as to reduce the number of implementations of this functionality.

Decoupled is great if…

  • You have a strong grasp on your data needs.
  • You establish clear roles for which applications/services are responsible for each piece of data.

Hosting Architecture

Separating the content from the display has serious implications. The first step is evaluating whether your current hosting provider can even support this architecture. If a change is needed, its feasibility must be evaluated: are there existing long-term contracts, security restrictions, data migration restrictions, etc. that may impede the process? A hidden benefit of separate hosting is the ability to put the back-end application behind a VPN or firewall to further prevent unauthorized access.

Decoupled is great if…

  • You have a hosting provider capable of hosting a presumably non-Drupal front-end.
  • The hosting provider is capable of adding caching and other tooling between the various parts of your new architecture.
  • You want the added security of restricting editorial access to users within an internal network.

Serving Content to Multiple Clients

Decoupling is often a means to enable creating content once and displaying it on multiple sites. This is an inherent benefit, but this benefit adds complexity. For example, is there a need to allow editorial control for a custom color scheme per client site? Will content display the same title, published date, promo text, etc. on all sites and will content use the same URL alias on all sites? If it does, how will duplicate content affect SEO, and how will canonical meta tag values be set across sites?

Decoupled is great if…

  • Content needs to be created and managed in a single application and distributed to multiple platforms.
  • The content can remain unencumbered by per site design and metadata requirements.
  • You have an editorial process that does not expect a high degree of design/layout control.
  • You are prepared to handle the complexity of managing canonical alias values and other metadata to meet SEO needs.

URL Aliases

The URL alias is the backbone of a website. In a decoupled world, the alias could be considered a front-end only concern, but that is a simplistic approach that ignores several complicating factors. For instance, is there a unique identifier in the alias or can we find the content associated with the request without a unique identifier? Additionally, where are alias-to-entity mappings stored and managed? If editors have traditionally held control over aliases, or entity field values were used to generate an alias, pushing this logic out of the back-end might not be feasible.

Decoupled is great if…

  • The URL alias values have a unique identifier that makes API requests easy.
  • Editors are not expecting control over the alias from the authoring platform.
  • Content attributes are not used to build the alias.

Metadata

A web page is more than the content on the page. In a modern web application, there is a plethora of metadata used to construct each page. This can include ad targeting values, meta tag values, and other microdata such as JSON-LD. Again, these may be considered web issues, i.e a front-end concern, but if these values are to be consistent or if content is used on multiple platforms, they suddenly become a shared issue, i.e. a back-end concern.

Decoupled is great if…

  • Content will need to share ad targeting data logic and other metadata across multiple consuming platforms.
  • The metadata logic that powers meta tags, JSON-LD, analytics, etc. can be generated with standardized rules.

Menu Items

The ease of menu management in a monolithic site is often taken for granted. The complexity of managing menus in a decoupled architecture is evident when you answer questions such as Where are menus created, ordered, and managed? How are menus shared across multiple sites? Will the menu paths work across sites?

Decoupled is great if…

  • The site has a static menu that can be created and managed in the front-end application.
  • A single menu can be shared across multiple clients without the need for per site menus.

Redirects

Redirect logic is another element of the routing architecture that is complicated by going decoupled. This is especially true on sites with multiple or complex redirect needs that must be handled in a single redirect, or where editors expect a high level of control over the redirects.

Decoupled is great if…

  • There is no need for redirect control by the editorial staff, with all redirects being managed by broad server-level rules.
  • You have an architecture that supports combining multiple redirect rules into a single redirect.

Is Decoupled Right for Me?

A decoupled architecture offers a wide array of benefits, but some requirements that are handled easily by a monolithic infrastructure suddenly become challenges when you decouple. Your business leadership must understand the implications of a decoupled architecture, and the site’s architecture should mirror the business priorities. Finally, your team should possess the necessary technical leadership, management and communication skills, and web architecture expertise to tackle decoupling.

None of these challenges are insurmountable, they simply require a thorough analysis of your business needs and an understanding of the tradeoffs incurred with the available solutions. Depending on your specific needs, many of the benefits can outweigh the cost of increased complexity.

So should you decouple?

As you can tell, it’s complicated, but we hope this post helps you determine if it’s right for you. We love helping organizations navigate these questions, so please do reach out to us if you need a closer look at your particular situation.

Mar 06 2018
Mar 06

We’ll be attending and presenting at this year’s MidCamp in Chicago. If you’re also going to be there, be sure to check out our sessions:

Taming Twig

with Larry Walangitan – Friday, March 9th 11:00am-12:00pm

Twig is a wonderful tool to build templates for your Drupal 8 themes. It can be easy to pick up, but certain problems can leave you frustrated and unsure of what to do. Don't fret, we'll be talking through some straightforward solutions to most of the problems that you'll encounter.

Not on Our Watch: an Introduction to Application Performance Monitoring

with Clare Ming – Saturday, March 10th 4:00-5:00pm

Take the mystery out of your application’s performance and squash small problems before they balloon into bigger messes by monitoring your site’s resources and runtime with an option that fits your needs. From free, open-source tools to full-on enterprise solutions, there's something for everyone no matter what your size and budget.

Chromatic is also a proud sponsor of this great Drupal event.

Feb 27 2018
Feb 27

Some time ago, we posted a how-to about backing up Drupal databases to Amazon's S3 using Drush, Jenkins, and the command-line tool s3cmd. Recently, we needed to revisit the database backup script, and we took the opportunity to update the S3 connection portion of it to use Amazon's official command-line tool, aws-cli, which they describe as "...a unified tool to manage your AWS services." At the same time, as part of our ongoing effort to automate all routine systems administration tasks, we created a small Ansible role to install and configure aws-cli on servers where we need to use it.

The backup script

Like the simple backup script we explained in the original post, the new variant still has the same three main tasks to perform:

  1. Create a gzipped dump of the database in a specified dump directory.
  2. Upload the new dump to an S3 bucket.
  3. Delete all database dumps in the dump directory older than ten days.

Creating a database dump hasn't changed; we still use drush to create a gzipped sql file whose name includes a human-readable date that will automatically sort with the oldest files at the top of the list:

  drush sql-dump --gzip --result-file=/home/yourJenkinsUser/db-backups/yourProject-`date +%F-%T`.sql

Likewise, deleting the older files has not changed; we use find to find all of the contents of the dump directory that are files (not directories or links), named something.sql.gz, and ten or more days old:

  find /home/yourJenkinsUser/db-backups/ -type f -name "*.sql.gz" -mtime +10 -delete

What has changed is that:

  1. We are now using the s3 subcommand of the aws-cli library.
  2. We decided that we only needed to store ten days of database backups on S3 too (in our original script, we didn't prune the offsite backups).

With s3cmd, we used the put command to upload the latest db dump to an S3 bucket, then deleted out-of-date dump files. With aws s3, we could use the aws s3 cp command to copy the most recent dump file to our S3 bucket and then use the aws s3 rm command to remove out-of-date backup files from the S3 bucket much like we use the find command above to remove out-of-date files on the server.

However, doing this would have increased the complexity of what we intended to be a simple tool. That is, we'd have needed to list the contents of the S3 bucket, and then intelligently decide which to remove.

But the aws s3 command has more options than cp and rm, including one called aws s3 sync. This command allows us to simply synchronize the contents of the local database backup directory and the S3 bucket. To make this work in the context of our script, we had to change the order of operations such that we delete local out-of-date database dumps first, and then synchronize the directory with our S3 bucket.

Since the script deletes at least one database dump file each time it runs (e.g. if it runs once per day, it will remove one file on each run), and since this happens before we copy the dump offsite, it's important to make sure that a) the job stops as soon as any error occurs, and b) somebody is notified when or if this happens.

In the script below, we accomplish this with set -e which will cause the script to fail if any of the commands that run inside it return an error. For longer, or more complex scripts it would be worthwhile to include more sophisticated error-handling or checking.

The resulting script--which can be run daily using cron or a tool like Jenkins--looks something like this:


    #!/bin/bash
    set -e

    # Variables.
    MAXDAYS=10

    # Switch to the docroot.
    cd /var/www/yourProject/docroot/

    # Backup the database.
    drush sql-dump --gzip --result-file=/home/yourJenkinsUser/db-backups/yourProject-`date +%F-%T`.sql

    # Delete local database backups older than $MAXDAYS days.
    find /home/yourJenkinsUser/db-backups/ -type f -name "*.sql.gz" -mtime +${MAXDAYS} -delete

    # Sync the backup directory to the bucket, deleting remote files or objects
    # that are not present here.
    aws s3 sync /home/yourJenkinsUser/db-backups s3://yourBucketName --delete

The aws-cli role

Installing and configuring aws-cli is not difficult but since we use Ansible for other similar tasks, we created an Ansible role to do this too. It installs the library, and creates fully-customizable versions of the required config and credentials files, making setting up the tool as easy as using the backup script. Automation FTW!

Dec 21 2017
Dec 21

For several years now, I’ve been a big fan of writing tests when I code. My recent involvement in a Node.js-powered front-end project powered by headless Drupal has provided me with the opportunity to brush up on my testing skills and introduce fellow developers to the practice. This has led to a lot of thinking about what makes tests effective, what makes them valuable, and what keeps them from falling by the wayside. The result is a set of principles which encapsulate my testing philosophy.

Prelude: What is TDD?

Test-driven development is the practice of developing software by first writing tests and then producing the minimum amount of code required to pass those tests. That statement pretty much captures it: you write clear requirements then you write enough code to meet them, and nothing more. TDD encourages other good practices, like writing functional code whenever possible and using dependency injection, because these practices make your code easier to test. They also have the added benefit of making your code more reusable, while tests will make future refactors significantly less daunting.

This post also applies to the practice of writing tests after having written code. Although this is, strictly speaking, not test-driven development, most of the same principles still apply. After all, delivering code that is tested after the fact is still worlds better than shipping code that is not tested at all, and when a test is written has little bearing on most of the principles that make it valuable to your project.

Finally, this post isn’t a tutorial, but I’ll be using some technical terms which can be a little vague, so let’s clarify them right off the bat:

  • Test: A test is a block of code that declares a provable statement (e.g. the function addOne() returns a number value) and runs a callback function which proves that the statement is correct. This callback contains one or many assertions.
  • Assertion: An assertion is a single statement which checks whether a value meets our expectations (e.g. expect(addOne(3)).to.be.a('number')). Sometimes that value is a direct object, while at other times it might be a meta value (e.g. expect(addOne).to.be.calledOnce).

With all of that out of the way, here are the basic principles I keep in mind when writing tests:

Aim for confidence.

Test the things that will help you feel confident about your code. It’s easy to fall into the trap of testing everything that is testable, but that’s a rabbit hole you want to avoid. Instead, focus on testing the behaviors on which other parts of your code will rely.

Use tests as antibodies.

A test suite doesn’t make your project impervious to bad code, but it can improve how you respond to bad code. For example, accompany your hotfixes with tests that prove their effectiveness. It demonstrates your fix today and ensures its continued effectiveness tomorrow. In this way, your test documents the bug and serves as an antibody against it.

Test one thing and assert many.

It is often said that tests should focus on one thing. This is good advice because it encourages you to ask yourself what’s really important and think about that one thing critically. It’s not unlike an athlete that isolates a specific movement or muscle group to improve their overall performance: we isolate key parts of our code and ensure they work as expected, thus improving our overall reliability.

From any one test you will usually need to make several assertions. For example, if we’re testing that a function invokes a callback with a given object, you might need to assert that an object is passed and that its properties meet certain expectations. With every test, include as many assertions as needed to demonstrate that what the test claims to prove is true.

Write tests before code.

Writing tests before code can take some getting used to, but once you’ve had experience with the practice, you’ll realize that a great deal of the time you spend writing tests you’re also figuring out the problem. This cuts down on development time and often results in better, more robust solutions.

This practice also helps you get a good sense of how to solve the problem. In my experience, by the time I’m finished writing a few tests, I not only have a firmer grasp of the challenges ahead, but I often have a good sense of what the solution should look like.

Use your tests!

In addition to writing useful tests, it is important to run them purposefully. Here are some practices that have helped me stay on top of my test-running game.

Learn to run a subset of tests.

Test suites get pretty big pretty fast. They can sometimes take a while to execute, which can lead to fatigue and, eventually, to not running tests at all. To avoid this, a good test runner will allow you to run a single test file or even a single test. You can find plugins for many IDEs and text editors that facilitate this kind of granular test running.

Automate test running.

Running tests should not be a burden. I’m a strong believer that we should make the right thing be the easy thing, and few things are as effective at that as automation. Thankfully, test runners are designed to facilitate this. Identify key points in your workflow at which running tests increases confidence, and take steps to ensure that tests are run automatically and relevant parties are notified if any of them fail. Common points are git pre-push hooks, pull request submissions, and deployment scripts.

Review tests as you do code.

I’ve gotten into the habit of running tests as part of my code review process. I don’t know how common this is, but I will often begin reviewing a pull request by checking out the relevant branch and running tests locally. Then I review the tests themselves to see if they can be improved. This also helps me understand the requirements and edge cases that the PR aims to address, and makes me a more informed reviewer of the actual code.

When incorporated purposefully and with care, tests can have a profoundly positive effect on developer productivity and satisfaction. They improve your team’s confidence that today’s working code will continue to work when new features, dependencies, and environments are introduced tomorrow. I hope the principles I’ve laid out above help you achieve better code by writing better tests.

Dec 14 2017
Dec 14

Chromatic is proud to announce the relaunch of FamilyCircle.com! This replatforming, a collaboration with our long-time partner Meredith Corporation, marks the publishing organization’s first venture into headless Drupal, and paves the way for other Meredith brands to follow suit as they get on-boarded onto a new decoupled platform designed to support brands company-wide.

The redesigned FamilyCircle.com Homepage. The redesigned FamilyCircle.com homepage.

The Multi-Tenant platform, as it is officially called, kicked off with an on-site meeting back in July of this year. During this kickoff, it immediately became clear that Meredith had great expectations for this platform. Not only did it need to support multiple brands, but it was expected to do so in a nimble manner, such that features could be shared yet customizable per brand, but not require simultaneous multi-site deployments. The result is an Express-based Node.js application backed by a headless Drupal-powered API, with Varnish caching in-between. While the platform is still in active development, it has already demonstrated its worth in FamilyCircle.com performance gains and speed of development.

Chromatic’s role has been equal parts subject matter expertise and team enhancement. We helped architect both applications with flexibility firmly in mind. Some of the more challenging issues included resolving URL routes and redirects between Express and Drupal, creating filterable and sortable cross content type listings, and storing data in a way that enables cross platform publishing. We leveraged the json:api spec through the JSON API Drupal module to provide us a well documented and extensible API that we customized to meet the project’s unique needs.

For the Express application, Chromatic developed a content-mapping layer that further decouples the front-end by reducing its dependence on the JSON-API data model. It does this by defining schemas that transform the data, allowing developers to get exactly what is needed from a given content type without needing to drill down into deeply nested JSON objects. This helps to keep our templates agnostic and our middleware as clean as possible. It also reduces the impact of API swaps that might occur in the future, as any changes would be limited to those schemas.
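The idea can be sketched roughly as follows; the schema shape and field names here are hypothetical, not the platform's actual code:

```javascript
// A schema maps a deeply nested JSON API-style resource to the flat
// shape templates consume. All names below are illustrative.
const articleSchema = (resource) => ({
  title: resource.attributes.title,
  summary: resource.attributes.field_summary,
  authorName: resource.relationships.author.data.attributes.name,
});

// A deeply nested payload as the API might return it...
const apiResponse = {
  attributes: { title: 'Hello', field_summary: 'A summary' },
  relationships: { author: { data: { attributes: { name: 'Jo' } } } },
};

// ...becomes the flat object templates receive, so they never drill
// into the nested structure directly.
const article = articleSchema(apiResponse);
```

If the backing API changed, only the schemas would need updating; templates and middleware would keep consuming the same flat shapes.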

A diagram illustrating a bird’s eye view of the Node.js/Drupal request workflow, with Varnish Cache to reduce the load on Drupal. Varnish sits in between Node.js and Drupal, reducing the load on the API and minimizing latency.

The immediate result is a relaunch of FamilyCircle.com powered by a flexible headless architecture with vastly improved response times, as well as improved front-end accessibility and SEO. Yet the greater benefit is a decoupled platform that extends these features to all brands under the Meredith umbrella. We are proud to have helped our partner achieve this goal and excited to extend this platform to other brands in the future.

Nov 21 2017
Nov 21

Whether you are a Drupal newcomer or a seasoned Drupal developer, you're bound to run into one, some, or all of the issues outlined below. Some are obvious, some not so obvious, but we'll show you how to troubleshoot them all regardless.

Some of these issues took a while to troubleshoot, so if you use Drupal as much as we do, make sure you bookmark this page for easy reference in the future. There is nothing worse than spending hours on a problem that can be solved within minutes with the right information (we've all been there).

  1. If you’re working on a Drupal 8 site and you get the message "The provided host name is not valid for this server," you’re not alone. This is a result of a feature added to Drupal 8 to protect against HTTP Host Header attacks.

    • To fix, set the $settings['trusted_host_patterns'] variable in your settings file.
    • If you are running from one hostname, like www.mysite.com, you’d set it like so: $settings['trusted_host_patterns'] = array( '^www\.mysite\.com$', );
    • For more information, check out this stackExchange post.
  2. Configuration management in Drupal 8 is great! But you might run into one thing that bugs you - for example, if you override a configuration value within settings.php, there is no indication in the UI that the value was actually overridden. This is a complete 180 from Drupal 7, which displayed overridden values within the UI. Right now, there is no official solution to this problem, unfortunately.

  3. Kint is a great and detailed new debugging tool in Drupal 8, but it very often runs out of memory when you’re debugging within a Twig template, which defeats the purpose of using it.

    • The first option is to just output one variable if that’s all you need to see, like so - but be aware that even just one variable can be a giant object, so try to be as direct as possible:

    {{ kint(content.field_name['#items'].getValue()) }}

    • The next option is to limit the number of output levels. You can do so in a number of ways:

      • In settings.php:
      require_once DRUPAL_ROOT . '/modules/contrib/devel/kint/kint/Kint.class.php';
      Kint::$maxLevels = 3;
      • Create a /modules/contrib/devel/kint/kint/config.php file and add / modify the line $_kintSettings['maxLevels'] = 3;
      • In a preprocess function, add the following:
      Kint::$maxLevels = 3;
    • When Kint loads the object you’re debugging onto the page, don’t click the plus sign! That expands the whole tree. Click the block instead, and click what you need from there.

H/T Stackexchange

  1. Many developers rely on Drush to do a variety of tasks, such as drush uli to log in to sites. If the site URI isn’t specified correctly, you’ll get a URL of http://default instead of the correct URL. Setting the $base_url of the site in settings.php doesn’t affect Drush. There are two options.

    • When you run a Drush command that requires the uri, such as drush uli, specify the uri, like so: drush uli --uri=example.com.
    • Set the uri in drushrc.php. If you don’t have this file already, you can simply create it in sites/default and put the following:
    <?php

    /**
     * @file
     * Provides drush configuration for example.com.
     */
    $options['uri'] = 'http://example.com';
  2. Generally, when you’re developing, you don’t want your CSS and JS cached, so that you can debug it. An easy way to make sure caching is turned off is to put these lines in your settings file:

    /**
     * Disable CSS and JS aggregation.
     */
    $config['system.performance']['css']['preprocess'] = FALSE;
    $config['system.performance']['js']['preprocess'] = FALSE;
  3. MAMP is a fantastic local development tool, but it can sometimes be tricky to set up with Drupal and Drush. Are you getting an error similar to this when using Drush, but the site works fine?

    exception 'PDOException' with message 'SQLSTATE[HY000] [2002] No such file or directory' in core/lib/Drupal/Core/Database/Driver/mysql/Connection.php:146

    There’s an easy fix. Add the following to your database credentials within your settings.*.php file:

    'host' => '',
    'unix_socket' => '/Applications/MAMP/tmp/mysql/mysql.sock',

    H/T to Modules Unravelled for this fix!

  4. A lot can go wrong if your files and directories are not properly secured. The extended documentation can be found here.

    • sites/default/files: This directory should have the permissions rwxrwx--- or 770
    • You can use the File Permissions module to correctly set up your file permissions, especially if you are seeing errors about your sites/default/files and sites/default/private directories being incorrectly set up.
  5. You’re setting up a new Drupal installation, or doing something new in an existing one, and boom, a white screen. What do you do? There are a handful of options for debugging. Here are a few to get you started:

    • Check your error logs! Look at your Drupal, PHP, and Apache logs.
    • Make sure you have error reporting on. Follow these directions to turn it on and in many cases, get the errors printed on the WSOD: https://drupal.stackexchange.com/a/54241
    • Here’s a debugging tactic to tell you which module may be hanging up your site and causing the issue: https://drupal.stackexchange.com/a/105194
    • Increase your PHP memory limit!

Did we miss anything?

Have a troublesome common bug? Reach out to us on Twitter @ChromaticHQ!

Here are the first 15 Drupal problems we published in 2013 - you may still find some of these useful, though some may be out of date.

  1. Users with 'edit page content' access cannot edit simple pages. Chances are the nodes that the users are trying to edit have an input format that they're not permitted to use. Try this:
    • Check the input format for the body field. If it's "Full HTML" or "PHP Code" for example, and that user role cannot create content of that input type, they won't even see an edit tab for that node. Either change the input format to one they can access, or grant access to that input format at: admin/settings/filters
    • Double check that their role has permission to edit that particular node type at admin/user/permissions
    • [Editor's Note 2017: These input formats may be insecure, so if you are running into this issue, reconsider using them, and use Filtered HTML instead!]
  2. My client cannot see content he/she has created after logging out. This is likely a caching issue. They can see the content when they are logged in because some caching instances are based on user roles. Check the following:
    • Clear cached data at admin/settings/performance.
    • Clear your browser's cache.
    • Adjust the "Minimum Cache lifetime" setting also under admin/settings/performance.
  3. Images within posts are disappearing after publishing the node. This is likely related to the "Input Format" (again). If the node is using the default settings, "Filtered HTML," input format tags such as img, object, script, etc. will be stripped out. Try the following:
    • Grant the role in question access to the "Full HTML" input format.
    • Create a custom Input Format that includes the tags you want.
    • [Editor's Note 2017: These input formats may be insecure, so if you are running into this issue, reconsider using them, and use Filtered HTML instead!]
  4. My theme (CSS/template) changes aren’t showing up.
    • Is CSS caching turned on? If so, turn it off while your theme is still under development. You can do so at admin/settings/performance.
    • If that still doesn't work, try clearing your browser's cache
    • If you're using Drupal 6, you may also need to clear out the theme registry if you have added new theme functions or new templates. While you're at admin/settings/performance, you can hit the "Clear cached data" button. Check out a full write-up about the new theme registry.
  5. I’ve lost all my anonymous user content! (comments).
    • When was the last time you imported/exported your database? This issue seems to happen when MySQL creates the users table from a batch file (or database transfer via Navicat) – the user id from the table is auto-incremented and the required ‘0’ value is replaced. Try the following:
    • Manually reset the uid value for the anonymous visitors in the users table. More info found at drupal.org.
  6. I'm getting the dreaded white screen of death! There are many possible causes for this: PHP error reporting settings, memory exhaustion, etc. Try the following:
  7. My web pages take forever to load. What’s the deal? Obviously, there could be many factors at play with this one. Try using the caching capabilities of Drupal. Caching can drastically improve load times. In particular, aggregating and compressing CSS and JavaScript files will help reduce the number of HTTP requests.
  8. It’s a pain to try and develop a theme starting with Garland. Isn’t there a better way to theme from scratch? Yes, there is. Install the Zen theme starter kit. Zen makes it easy to theme from scratch, and best of all, Zen is a standards-compliant theme.
  9. My blog gets hit with a ton of spam, what can I do? 1 word: Mollom; install the Mollom spam module, configure it, and you’ll forget that spam ever existed. Mollom has a free and a paid version - the free will be sufficient for most sites and it even includes some impressive statistics reporting.
    • [Editor's Note 2017: Mollom will no longer be supported as of 2 April 2018]
  10. How can I figure out which theme function (or template file) I need to override in different places? Install the Devel module. The Devel module was created specifically for Drupal developers. It will streamline your Drupal development process by showing you which functions/templates were used to render parts of the page.
  11. My layout looks “broken” all of a sudden, what happened? This may be a CSS issue, it may be a caching issue, or it may be something else. Try emptying the cache (admin/settings/performance) or try rebuilding the theme registry. Side note: you can easily empty Drupal's cache and rebuild the theme registry using the menu provided by the Devel module mentioned above.
  12. CSS images disappear when CSS caching is turned on. There are a number of potential causes for this:
    • Check the permissions on the files and CSS folders (sites/default/files and sites/default/files/css, respectively) - the server needs read and write access.
    • Is your CSS file importing another with @import? This could be breaking things. Try embedding the imported CSS directly.
    • Are you using relative or absolute paths? There seems to be an issue with this as well.
    • Do you have any funny characters in your URL? While working on a local version of one of our sites, we had parentheses in a directory name; this was breaking the link.
  13. My custom URL alias keeps being overridden! If you have the Pathauto module installed, it might be overriding your custom URL. To fix this, uncheck “Automatic URL Alias” under the URL alias fieldset – this will allow you to use your custom URL in conjunction with the Pathauto module. Also, there is a patch that apparently fixes this issue - though we have yet to test this.
  14. Users cannot view/edit custom CCK field(s). Do you have the “Content Permissions” helper module enabled that comes packaged with CCK? If so, check the field permissions for that user role. By default, only users with administer CCK privileges can edit/view each field.
  15. A specific Drupal user isn’t able to perform a necessary task (such as editing a specific content type). Double check the permissions for the user’s Role(s); they probably don’t have sufficient privileges to carry out the task.
Nov 14 2017
Nov 14

Recently, I was tasked with changing the taxonomy terms applied to a large number of nodes, and to output the updated nodes to a file. In the interest of expediting someone else’s undertaking of a similar exercise, here are some code snippets to help the cause.

Before getting into the code, one piece that our hook will need is a map of which nodes require reassignment from one set of taxonomy IDs to another. Since this requires an editor’s discretion to reshuffle the mapping of taxonomy terms, we opted to get those mappings into a variable by reading them from a CSV file which contains two columns: the old taxonomy ID and the new taxonomy ID.

Here is an outline of the steps taken to accomplish the task at hand:

Upload a formatted CSV file and copy the rows into an array in the database.

We used a file upload field to upload the CSV file and then read each row into an array (the full code block is available in a gist). For this specific task, the CSV needs to contain one column that holds the old term ID and an adjacent column that maps to the new term ID.

Batch process the form submission to read each line of the CSV and save the row into an array in the database.

We had many terms to be deleted and re-mapped so we batch processed the CSV form submit to read in the rows and save the data into an array using variable_set(). The batch callback starts here in the gist.

Use a hook_update_N() to ingest the variable created by the CSV.

Once we have that mapping in place and saved to an array, we can use hook_update_N() to process that value. We create an array of old tids (taxonomy IDs) and an array of new tids from the CSV variable in order to know how to reassign nodes from the old terms to the new. The indexes of each array are important for remapping tids of relevant nodes.

Implement entity_metadata_wrapper to get the nodes attached to old terms and re-assign new terms to them.

Our hook_update_N() implementation runs a query that pulls the nodes we need to update based on the old tids array and uses entity_metadata_wrapper to save the new term tids to each node. The following code snippets live in the .install file of our custom module.

Here’s how we grab the variable and save the row columns into arrays for the old and new tids:

$csv_rows = variable_get('{csv_variable_name}', array());
// ...
$tids_old = $tids_new = array();
foreach ($csv_rows as $row) {
  $tids_old[] = $row[0];
  $tids_new[] = $row[1];
}

Then we loop through the old tids to find all the nodes that have these tids attached to them and save the nids (node IDs) to an array:

foreach ($tids_old as $tid_old) {
  // Get all the nids that have this term attached.
  $these_nids = taxonomy_select_nodes($tid_old, FALSE, FALSE, $order = array('t.nid' => 'ASC'));
  // Merge array of attached nids to master array of nids.
  if (!empty($these_nids)) {
    $nids_all = array_merge($these_nids, $nids_all);
  }
}

In another batch process that removes the old tids of a node and saves the node with the new tids, we use entity_metadata_wrapper to find the node’s current tids:

$this_node = node_load(array_shift($sandbox['nids']));

// Load and wrap the node if it exists.
if (!empty($this_node)) {
  $node_wrapper = entity_metadata_wrapper('node', $this_node);
  // Get the term object values of this node's field_categories.
  $node_terms = $node_wrapper->field_categories->value();
  // Get the raw tids of this node's field_categories.
  $node_tids_raw = $node_wrapper->field_categories->raw();

  // Create new array to store updated tids per node.
  if (!empty($node_tids_raw)) {
    // Make sure $node_tids is an array.
    $node_tids = is_array($node_tids_raw) ? $node_tids_raw : array($node_tids_raw);
    $node_tids_update = $node_tids;
  }
}

We then loop through the node’s tids and compare them with the old tids to see which matching tids should be removed and then save the node:

// Loop through the node's field_categories tids.
foreach ($node_tids as $delta => $tid) {
  if (in_array($tid, $sandbox['tids_old'])) {
    // Update the node's field_categories array of tids by removing duplicate to-be-deleted tids.
    $node_tids_update = array_diff($node_tids_update, array($tid));

    // Grab the corresponding key from the to-be-deleted terms array.
    $key = array_search($tid, $sandbox['tids_old']);
    // If the new tid is not in the nodes categories, add it.
    if (!in_array($sandbox['tids_new'][$key], $node_tids_update)) {
      $node_tids_update[] = $sandbox['tids_new'][$key];
    }
  }
}

// Return all the values from the updated field_categories array indexed numerically.
$node_tids_update = array_values($node_tids_update);

// Save the updated field_categories array to the node.
$node_wrapper->field_categories->set($node_tids_update);
$node_wrapper->save();

Save data about updated nodes for reporting.

With each iteration of the loop in our batch process, we want to provide some reporting to track which nodes were updated with the new tids so we save the updated tids into a sandbox variable:

$sandbox['csv_nids'][] = array(
  drupal_get_path_alias('node/' . $this_nid),
  '"' . implode(" / ", $node_tids) . '"',
  '"' . implode(" / ", $node_tids_update) . '"',
);

Set up redirects from the old taxonomy terms to the new taxonomy terms.

Once the batches are complete, we set up 301 redirects from the old tids to the new tids. Here is the code block in the gist handling the redirects.

Add reporting on the redirects.

We also want to provide reporting on these new redirects so we save the data to another sandbox array for outputting to CSV. Here is the code snippet that saves the redirects to a sandbox variable.

Once all the variables for this new CSV are ready, we can use a custom function to take those sandbox variables as parameters and generate the new CSV. Here’s how it’s called:

// Output updated tids/nids to csv file.
_my_module_create_csv($sandbox['csv_tids'], $csv_tid_headers, 'tids-to-delete');
_my_module_create_csv($sandbox['csv_nids'], $csv_nid_headers, 'nids-updated');

And here is the custom function in the gist for reference.

To wrap up, we ingested a CSV file by importing each row as an array of its columns into a variable in the database. Then we ran a hook update to parse the variable derived from the CSV into other variables needed to remap new taxonomy IDs to nodes attached to old taxonomy IDs marked for deletion using Batch API. Next we created redirects of the old taxonomy terms to the new taxonomy terms. Finally, we wrote the changes of updated nodes and taxonomy term redirects to CSV for reporting. The net result is all the nodes tagged with the old taxonomy terms were successfully reassigned to the new taxonomy terms.

Sep 25 2017
Sep 25

This past summer we started seeing a higher frequency of alerts from New Relic for one of our clients. Although we are constantly working on improving performance, we were perplexed by what could be causing the sudden onslaught of warnings. It turns out that the problem stemmed not from the application itself, but from our client’s files system.

Screenshot of New Relic reporting. Moving in the right direction.

The application is a high-traffic publishing site running on Drupal 7, with many editors and writers uploading thousands of images per month. This means that unless the application’s file system is pre-emptively saving those images into distributed subfolders, the root of the files/ directory will become flooded over time which, if unchecked, leads invariably to severe performance issues. The details of the why are below.

Too Many Files == Performance Implications

Our client’s infrastructure is hosted by Acquia, which provides a lot of best practice documentation. For sites on their cloud platform, Acquia asserts that “over 2500 files in any single directory in the files structure can seriously impact a server's performance and potentially its stability” and advises customers to split files into subfolders to avoid the problem. This is a long-standing issue for Drupal 7 that is still being hashed out. Drupal 8 takes care of files distribution by default.

Migration Scripts to the Rescue

When we realized the client’s files/ directory was pushing 50,000 files, we were on a mission to move their existing and future files into subfolders. Luckily for us, Acquia developed a drush script that works out of the box to move a site’s files into distributed dated subfolders.

Anatomy of a Files Directory Vivisection

Our first run of the Acquia script shaved the files/ root directory down to about 10,000 files. This was a great start, but we wanted to get the number of files down to well below 2500 per Acquia’s recommendation. We deduced that the 10,000 files remaining after the initial run of the drush script were mostly unmanaged files referenced by hard-coded image and anchor tags written into body fields, as well as legacy files from the previous migration that were missing file extensions.

Further investigation revealed another similar problem. During a previous content migration from a legacy CMS into Drupal, approximately 60,000 images were saved to a single subfolder (in our case called migrated-images) that all had the same timestamp - the date of the migration. By default the Acquia script creates subfolders by year/month/date as needed and moves files into them based on their timestamp. When we ran the drush script again on migrated-images, we found that all the files were indeed moved but they were all moved into a single dated subfolder 2017/07/21 - the date we ran the Acquia script. So we were back to square one.

Files, Subfolders, and Triage

So now we had two more problems to solve:

  1. How to shave down the root files/ directory which still had ~10k files in it after the first run of the Acquia script.
  2. How to reduce the file counts of the subfolder migrated-images which had ~60k images all with the same timestamp.

To tackle the problem of the remaining ~10k files in the root files/ directory, we ran a series of update hooks to do the following:

  • Create nodes from the images to get them into the managed files table so Drupal knows about them (in our use case, media is added as nodes - photos, videos, etc).
  • Update the hard-coded references in WYSIWYG body fields to their new file paths.
  • Move the legacy files with missing extensions into manageable subfolders.

Shout out to my clever colleague who wrote this hook_update and a bash script to deal with the remaining ~10k files. After running all these updates, we got our files count in the root files/ folder down to a few hundred. Phew.
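The reference-updating step from the list above can be sketched like this (illustrative JavaScript; the actual hook_update was PHP, and the file paths here are hypothetical):

```javascript
// Rewrite hard-coded file references in body HTML to their new
// distributed subfolder paths, given a map of old path -> new path.
function rewriteFilePaths(body, pathMap) {
  // split/join replaces every occurrence of each old path.
  return Object.entries(pathMap).reduce(
    (html, [oldPath, newPath]) => html.split(oldPath).join(newPath),
    body
  );
}

const body = '<img src="/sites/default/files/photo.jpg">';
const updated = rewriteFilePaths(body, {
  '/sites/default/files/photo.jpg':
    '/sites/default/files/2017/07/21/photo.jpg',
});
```

In the real update hook, the path map would come from the managed files table after the move, and the rewritten bodies would be saved back to their nodes.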

To re-distribute the ~60k+ images that all landed in a subfolder from a previous migration, we refactored the drush script. It creates a parent folder with a variable number of child subfolders into which the source directory files are evenly distributed. The refactored script takes an integer as an argument to determine the number of child subfolders that should be created, as well as a source directory option which is relative to the files/ directory.

drush @site.prod migration_filepath_custom 100 --source=migrated-images

Running the command above took all the files in the source directory migrated-images and saved them into a parent directory, evenly distributing them between 100 subfolders. The naming convention for the newly created directory structure looks like this inside files/:

migrated-images_parent
    |__ migrated-images_1
    |__ migrated-images_2
    |__ migrated-images_3
    ...
    |__ migrated-images_100

So when the refactored script finished, migrated-images (which originally held ~60k files) was empty, migrated-images_parent contained 100 subfolders whose directory names were differentiated by _N, and each migrated-images_N folder held approximately 600 images. Score!
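The even-distribution logic can be sketched like this (illustrative JavaScript; the real refactored script is a PHP drush command, and all names here are hypothetical):

```javascript
// Distribute files evenly across numbered subfolders: file i goes to
// subfolder (i % numFolders) + 1, so counts differ by at most one.
function distribute(files, numFolders, prefix) {
  const buckets = {};
  files.forEach((file, i) => {
    const folder = `${prefix}_${(i % numFolders) + 1}`;
    (buckets[folder] = buckets[folder] || []).push(file);
  });
  return buckets;
}

// 600 source files across 3 subfolders yields 200 files in each.
const buckets = distribute(
  Array.from({ length: 600 }, (_, i) => `image-${i}.jpg`),
  3,
  'migrated-images'
);
```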

The Future Depends on the Decisions Made Now

The other vital piece to resolving the too-many-files-in-a-single-directory problem was how to save new files going forward. Acquia recommends some contributed modules to address this problem. One such module we tried was File (Field) Paths. At first it seemed like a good simple solution - set the relative file paths in the configuration options and we were good to go. But we soon discovered that it tanked when using bulk uploading functionality. With the amount of media the content editors generate, maintaining zippy bulk uploading of assets was essential. The bulk uploader running slower than molasses was a deal-breaker.

After more research into the issues around this in core, we opted to patch core. As of this writing, the latest proposed core patch moves files into dated subfolders using the following tokens:

'file_directory' => '[date:custom:Y]-[date:custom:m]'

For our client’s site, it’s entirely possible that in any given month, there could be well over 2500 files uploaded into the Drupal files system. So to get around this issue, we applied an alternate patch to save files using the following tokens to add a day folder:

'file_directory' => '[date:custom:Y]/[date:custom:m]/[date:custom:d]'

The bulk uploading of files ran swiftly with this solution! So now legacy, as well as future files, all have a place in the Drupal files system that will no longer threaten the root files/ directory and, most importantly, the application’s performance.

After hustling to make all these changes, we’re delighted to report that the application has been running smoothly. It was a formidable lesson in files management and application performance optimization that I’m relieved will not be an issue in future iterations of Drupal.


Sep 18 2017
Sep 18

Multiple people frequently collaborate to author content in Drupal. By default, Drupal only allows a single author. This makes crediting multiple users in a byline impossible with core functionality.

The author data is stored in the uid column of the node_field_data table, which stores an INT value. Considering that this functionality is deeply integrated in Drupal core we’ll have to use a different approach.

To solve this, we simply create a new field that allows for multiple User entity references. We now have the ability to easily associate one or many users with a piece of content.

With our new “Authors” field in place, new content will use our new user reference field; however, we will need to update existing content to use this field to provide consistent data. Alternatively, we could create logic to check both fields’ values when constructing the author data to display.
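Had we gone the alternative route, the display-side logic might look something like this (the field name `field_authors` and the helper name are illustrative, not part of core):

```php
/**
 * Returns the users to credit for a node, preferring the multi-value
 * authors field and falling back to the core author when it is empty.
 */
function example_get_authors(\Drupal\node\NodeInterface $node): array {
  if ($node->hasField('field_authors') && !$node->get('field_authors')->isEmpty()) {
    return $node->get('field_authors')->referencedEntities();
  }
  return [$node->getOwner()];
}
```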

We opted for updating existing content to maintain data parity. A simple update hook in my_module.install does the trick:

/**
 * Set value for new author field.
 */
function my_module_update_8001() {
  $nodes = \Drupal::entityTypeManager()
    ->getStorage('node')
    ->loadByProperties(['type' => 'article']);
  foreach ($nodes as $node) {
    // 'field_authors' is illustrative; copy the original author into it.
    $node->set('field_authors', [$node->getOwnerId()]);
    $node->save();
    \Drupal::logger('my_module')->notice(sprintf('Updated node %s', $node->getTitle()));
  }
}

The final step is updating content to use our new field. The steps required here will vary widely depending on your templating and site building approach. However, the basics come down to:

  • Verify that the existing Author field on the needed content types is hidden.
  • Verify that the new field displays the correct data from the referenced User entities.

With these simple steps, your content is now ready for storing and displaying the names of every user who helped with the authoring process. Now the only thing left to do is sort out fights over who gets to be listed first!

Aug 24 2017
Aug 24

We’re happy to announce today that we’ve released our popular Drupal Coding Standards guide as a free series on Drupalize.me! By opening up these how-to posts to our friends at Drupalize.me, we hope that even more folks will be able to learn from our experience and ultimately, understand the right ways to code for Drupal.

The folks at Drupalize.me provide the best Drupal training materials on the web, so we were more than happy to oblige them when they asked if they could release our Coding Standards guide, carefully crafted by our very own Alanna Burke, as a free series on their platform. This is the spirit of Drupal and open source.

Aug 21 2017
Aug 21

I recently ran into a situation where I needed to link directly to a Drupal page for which there was no explicit route specified in a routing YML file. In this case, I was trying to link to the entity creation page for a media entity, at media/add/image. Technically, I could’ve used Url::fromUri(), but this is not best practice. The documentation page for the Url::fromUri() method is clear about this:

This method is for generating URLs for URIs that:

-- do not have Drupal routes: both external URLs and unrouted local URIs like base:robots.txt

-- do have a Drupal route but have a custom scheme to simplify linking. Currently, there is only the entity: scheme (This allows URIs of the form entity:{entity_type}/{entity_id}. For example: entity:node/1 resolves to the entity.node.canonical route with a node parameter of 1.)

For URLs that have Drupal routes (that is, most pages generated by Drupal), use Url::fromRoute().

The correct way is to use the Url::fromRoute() method, but first we need to find the route name. It makes perfect sense that there wouldn’t be an explicit route in a routing.yml file because there can be any number of entity types on a Drupal site. Thus, there must be a dynamic means of handling routing in these types of situations. But how do we use this method when we don’t know the route name?

After exploring the node and entity modules, I began to wrap my head around how these modules provide dynamic routes. In short, the key bits can be found in: core/lib/Drupal/Core/Entity/Routing/DefaultHtmlRouteProvider.php. This class implements EntityRouteProviderInterface and has a getRoutes() method, which does the following:

Returns a route collection or an array of routes keyed by name, like route_callbacks inside 'routing.yml' files.

Here’s the code:

/**
 * {@inheritdoc}
 */
public function getRoutes(EntityTypeInterface $entity_type) {
  $collection = new RouteCollection();

  $entity_type_id = $entity_type->id();

  if ($add_page_route = $this->getAddPageRoute($entity_type)) {
    $collection->add("entity.{$entity_type_id}.add_page", $add_page_route);
  }

  if ($add_form_route = $this->getAddFormRoute($entity_type)) {
    $collection->add("entity.{$entity_type_id}.add_form", $add_form_route);
  }

  if ($canonical_route = $this->getCanonicalRoute($entity_type)) {
    $collection->add("entity.{$entity_type_id}.canonical", $canonical_route);
  }

  if ($edit_route = $this->getEditFormRoute($entity_type)) {
    $collection->add("entity.{$entity_type_id}.edit_form", $edit_route);
  }

  if ($delete_route = $this->getDeleteFormRoute($entity_type)) {
    $collection->add("entity.{$entity_type_id}.delete_form", $delete_route);
  }

  if ($collection_route = $this->getCollectionRoute($entity_type)) {
    $collection->add("entity.{$entity_type_id}.collection", $collection_route);
  }

  return $collection;
}
That explains how these modules implement dynamic routes. We can now use the above to begin figuring out the route name for our example path: media/add/image.

In the case of Drupal entities, the route name is a machine name, which is constructed of three pieces:

  1. Name of the module providing the route (entity)
  2. Entity type ID (media)
  3. Route name (canonical, edit_form, add_form, etc.)

Thus, the route name we need is entity.media.add_form.

In my case, I was using the media module (which is moving into core in 8.4) with a custom bundle type of "image". So I also needed to pass that along to fromRoute(), like so:

Url::fromRoute('entity.media.add_form', ['media_bundle' => 'image']);

This gave me a Drupal\Core\Url object. I then used its toString() method to get the path directly.

$some_url = Url::fromRoute('entity.media.add_form', ['media_bundle' => 'image'])->toString();

This will output:

string(16) "/media/add/image"

I was then able to use the path string to create my link:

'#description' => $this->t('New images can be <a href="@add_media">added here</a>.', [
  '@add_media' => Url::fromRoute('entity.media.add_form', ['media_bundle' => 'image'])->toString(),
]),
Using route names as opposed to paths is not only best practice, but provides consistent means for developers to link to pages throughout Drupal. Beyond that, route names are more stable, as paths are more likely to change. Hopefully, this post helps others who want to properly link to dynamic routes in Drupal 8. It certainly had me confused.

May 23 2017
May 23

What did we bring to DrupalCon Baltimore?

Aside from some awesome stickers and super-soft shirts to give out, we came with knowledge to share! Chromatic presented four sessions and moderated a Birds of a Feather session. If you missed any of them, you can read the slides or watch them online!

Culture is Curated

Dave presents "Culture is Curated"

Work/Life Balance - You CAN Have it All!

Alanna presents "Work/Life Balance - You CAN Have it All!"

JavaScript ES6: The best vanilla you’ve ever tasted

Ryan presents "JavaScript ES6: The Best Vanilla You've Ever Tasted"

Code Standards: It's Okay to be Yourself, But Write Your Code Like Everyone Else

What did we do at DrupalCon Baltimore?

Since Chromatic is a distributed company, DrupalCon is an important chance for team bonding. We went out for some great team dinners and drinks, hung out at the Lullabot/Pantheon party at the Maryland Science Center, and enjoyed the company of dinosaurs and a lot of great folks!

We also attended a few of the Monday summits: the Non-Profit Summit, the Media and Publishing Summit, and the Higher Ed Summit. One of our team members also volunteered at the Drupal Diversity & Inclusion booth.

What did we take away from DrupalCon Baltimore?

A lot of our team is excited about GraphQL after attending a session on it, as well as the ever-expanding world of decoupled Drupal. Pinterest’s case study on integrating Pattern Lab into a Drupal 8 theme got our front-enders excited, and some of our back-enders can’t wait to dig into the 35 Symfony components. JSON API has a lot of us pretty intrigued - overall, it’s fun to see what Drupal can do when you think outside of the box.

Märt thinks it’d be fun to code a bot so that his family has someone to talk to when he’s away at DrupalCon.

Christopher was happy to see that on the ops side of things, we’re up to date on best practices and doing what other people in the space are doing.

Chris and Elia at Camden Yards

We learned about debugging tools for front-enders, when to use {% embed %} in Twig templates and how powerful it can be in reusing structures/grids within another template. The live-coding demonstration of the back-end/front-end bits in a decoupled React app was really cool and informative. We learned about HTTP/2’s preconnect hint, which is an awesome performance win. We also heard about the importance of addressing mental health in the tech community.

Overall, we had a great time and learned a lot. We always come back from DrupalCon energized and ready to dig into new technologies, brush up our skills, and spruce our website - so keep an eye out!

Apr 20 2017
Apr 20

Many Drupal 7 sites relied upon hook_boot and hook_init for critical functionality that needed to be executed on every page, even pages cached by Drupal. Oftentimes these hooks were used incorrectly, and an alternate approach would have been much more performant, triggering logic exactly where it was needed instead of on every page. For example, drupal_add_js could be used to add JavaScript for a particular block or theme hook globally, but using a targeted preprocess function is often the correct approach. However, in some cases, these hooks were indeed the correct solution. Both hook_boot and hook_init were removed in Drupal 8, so an alternate approach is needed.

Cacheable Logic

In Drupal 7, hook_init fired on every page that Drupal did not serve from the page cache. Drupal 8 offers the same functionality using the Event Subscriber pattern, as detailed on the hook_init change notice. These pages provide detailed examples, which walk you through the process of setting one up. How to Register an Event Subscriber in Drupal 8 also provides examples of event dispatching code.

Uncacheable Logic

Uncacheable logic should seldom be needed, as the solution is often to use proper cache settings on your render arrays. In the rare case that it is indeed needed, there are two viable options depending on the situation.

Using settings.php

The change notice page states:

A module that needs to run on cached pages should prompt its users to add code in settings.php

Note that Services are not instantiated yet when this code runs, thus severely limiting the available functionality. So while this approach is by no means ideal, it is available. The StackMiddleware approach below is a better approach that offers deeper integration with Drupal functionality.
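As a sketch of what such a prompt might ask for (the header name and the condition below are purely hypothetical), the snippet users paste into settings.php has to be plain PHP:

```php
// Hypothetical snippet a module might ask users to add to settings.php.
// It runs on every request, including fully cached ones, but the service
// container is not built yet, so only plain PHP is available here.
if (PHP_SAPI !== 'cli' && isset($_SERVER['HTTP_X_EXAMPLE'])) {
  // React before Drupal boots, e.g. emit a header or redirect.
  header('X-Example-Handled: 1');
}
```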

Using StackMiddleware

This comment on the hook_boot change notice page provides an example of using StackMiddleware. It provides 95% of the functionality needed to run logic on cached pages by utilizing a tagged service with the http_middleware tag. Since the new class is a service, it will have full access to other core and contrib services, allowing for much greater functionality. The example shows the following for a module’s *.services.yml file:

services:
  http_middleware.mymodule:
    class: Drupal\mymodule\StackMiddleware\MyModule
    tags:
      - { name: http_middleware, priority: 180, responder: true }

This is a pretty standard service definition, but note the items added to the tags property that register our service with the http_middleware tag and also set a priority. In order to bypass the page cache, understanding the page_cache.services.yml file is helpful. There, a similar definition can be found, but with a higher priority value.

services:
  http_middleware.page_cache:
    class: Drupal\page_cache\StackMiddleware\PageCache
    arguments: ['@cache.render', '@page_cache_request_policy', '@page_cache_response_policy']
    tags:
      - { name: http_middleware, priority: 200, responder: true }

Higher priority services are run first. So to trigger logic before the page cache module takes over the request, a priority greater than 200 is needed.

services:
  http_middleware.mymodule:
    class: Drupal\mymodule\StackMiddleware\MyModule
    tags:
      - { name: http_middleware, priority: 210, responder: true }

With this change in the services files, and proper setup of the service as described by the comment, the http_middleware.mymodule service should now be called on every page load, even on fully cached pages.

namespace Drupal\example\StackMiddleware;

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\HttpKernelInterface;

/**
 * Performs a custom task.
 */
class ExampleStackMiddleware implements HttpKernelInterface {

  /**
   * The wrapped HTTP kernel.
   *
   * @var \Symfony\Component\HttpKernel\HttpKernelInterface
   */
  protected $httpKernel;

  /**
   * Creates a HTTP middleware handler.
   *
   * @param \Symfony\Component\HttpKernel\HttpKernelInterface $kernel
   *   The HTTP kernel.
   */
  public function __construct(HttpKernelInterface $kernel) {
    $this->httpKernel = $kernel;
  }

  /**
   * {@inheritdoc}
   */
  public function handle(Request $request, $type = self::MASTER_REQUEST, $catch = TRUE) {
    // Custom logic goes here.

    return $this->httpKernel->handle($request, $type, $catch);
  }

}

Verifying the Results

A quick and easy way to test all of this is to simply add \Drupal::logger('test')->notice('not cached'); into the functions triggered by each of the approaches above. Ensure that the Drupal cache is enabled, and simply refresh a page while watching your log (drush ws --tail). Then verify the logic is being called as expected.

Apr 19 2017
Apr 19

It’s that time of year again - DrupalCon North America! The Chromatic team is excited to get together for a week of Drupal and fun in Baltimore, Maryland. We’ll be presenting several sessions throughout the week, so don’t miss out!

We’ll be attending a few of the summits on Monday, so if you see us, make sure to say hello. You might also spot one of our team members pitching in at the Drupal Diversity & Inclusion Booth, so stop by & take a picture at their photo booth!

Once again, Chromatic is sponsoring the conference via a Birds of a Feather room. As we have in the past, our room will be stocked with free Chromatic swag that you can pick up at any time. We’ll have stickers and some new tee shirts to share!


This year, Chromatic will be presenting 4 sessions during the conference:

Tuesday, April 25

Dave Look
Culture is Curated
10:45 AM - 11:45 AM
Community Stage - Exhibit Hall

Alanna Burke
Work/Life Balance - You CAN Have It All!
2:15 PM - 2:45 PM
Community Stage - Exhibit Hall

Wednesday, April 26

Ryan Hagerty
JavaScript ES6: The best vanilla you’ve ever tasted
3:45 PM - 4:45 PM
Room 309

Alanna Burke
Code Standards: It's Okay to be Yourself, But Write Your Code Like Everyone Else
3:45 PM - 4:45 PM
Room 314

Birds of a Feather

Chromatic is also hosting one BoF session on Tuesday:

Tuesday, April 25

Alanna Burke
Managing Our Stress
3:45 PM - 4:45 PM
Room 313

Apr 17 2017
Apr 17

So, you’re heading to DrupalCon this year - awesome! (Or maybe another huge conference). If it’s not your first time, you know that something you’re likely to bring home with you along with the swag, business cards, and a bunch of new Drupal knowledge is the dreaded DrupalFlu. This is a nasty cold that seems to strike almost everyone after - or even during - a huge conference like DrupalCon. Put that many people in one place and you’re bound to get sick.

I’ll never forget missing half of DrupalCon Portland 2013 because I managed to get sick by Wednesday and was curled up in my hotel room, drinking cold medicine and orange juice and being miserable. Things I learned that week included: Portland is a terrible place to get a cold. You can’t get Sudafed without a prescription. So if you’ve been ingesting cold medicine all week in a desperate attempt to feel better, you can swab positive for explosive residue by the TSA. And you never want to fly with clogged ears. Trust me.

If none of this sounds fun, never fear - I’ve developed a method that has kept me well for several DrupalCons as well as many smaller conferences!

Step 1: Preparation

Get a flu shot. If you haven’t yet gotten a flu shot this year, go get it. It takes a couple weeks to kick in, so don’t wait til right before you leave. Whether or not The Flu is what’s going around, don’t risk it.

Buy a box of sanitizing wipes (I like these) and some hand sanitizer. I like to get a handful of the little travel bottles and put them on my purse and my backpack. Just make sure you have enough sanitizer to get you through the week - more than one little travel bottle, for sure.

Consider taking an immune supplement before, during, and after the conference. They may be a bunch of hooey, but if confidence boosts my immune system, I’ll take it.

Step 2: Avoidance

Here’s where you need to start thinking like a germaphobe. From the moment you leave your house, think about what you touch. Don’t touch anything and then touch your face. Don’t touch anything and then eat without cleaning your hands. Try to avoid shaking hands where possible. This is really hard, and I’ve been trying to do it for years. The best I can do sometimes is be conscious that I shook someone’s hand, and make sure I wash or sanitize my hands. Also, be mindful of your own germs - if you sneeze or cough, make sure you do it into your elbow!

Step 3: Decontamination

Keep up the germaphobe thinking. I don’t mean this lightly or to make fun of germophobia - to keep from getting sick, you have to be afraid of germs. And don’t worry about looking weird. You won’t be the only one, and you won’t feel weird when you’re not sick later! Here’s what to do with those wipes and sanitizers:

  • Wipe down your devices and hands at the start of every session. Thoroughly wipe your phone, your laptop keyboard - anything you touch.

  • Wipe your table space and hands before you eat.

  • If you carry a water bottle - sanitize it throughout the day.

  • Wash/sanitize your hands throughout the day.

  • On planes/trains - wipe your seat area and tray, as well as your hands, your device, and anything else you touch.

Step 4: Mitigation

It can be easy to forget to drink a lot of water when you’re away from home, so don’t skimp on that, especially if you’re partaking in evening conference fun. Make sure you’re eating decently, too. And not to be a killjoy, but if you are drinking heavily each night of the conference, do be aware that you’re not doing your immune system any favors - and you’re probably not being careful about germs, either.

Rest as much as you can - this can also run contrary to conference fun, but try to find a balance! Of course it’s a once a year event, but do try to get some sleep in - or at least a quality nap. Lack of sleep will leave you vulnerable to illness as well.

If you do start to get sick, these last two tips are pretty important.

Wrapping Up

I hope these tips help you to stay well - it’s a germy world out there! If you do get sick, don’t despair. Get some soup, some of those nice soft lotion-infused tissues, stay hydrated, and catch up on DrupalCon session videos from your couch.

What are your best tips for staying healthy at conferences? Let us know on Twitter @ChromaticHQ!

Mar 28 2017
Mar 28

Introducing Redis

Caching is a frequent topic of discussion when attempting to make Drupal, or any modern web application, fast. Drupal relies on database tables to cache content such as markup by default. While this is certainly better than not caching, the database is a major choke-point that we would like to bypass to speed up the serving of pages.

One of the most recommended options for in-memory caching is memcached:

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

Redis, which is an advanced key-value store, can be thought of in this context as a drop-in replacement for memcached.

Is It Time to Ditch Memcached for Redis?

Memcached has served us well, but Redis offers what I see as two big benefits over memcached:

  • Data persistence: Redis can persist its in-memory data to disk, so a restart does not mean starting over with an empty cache.
  • Richer data types: Redis is a data-structure store, supporting lists, hashes, sets, and sorted sets in addition to plain strings.

Either one of these could be the killer feature for you, but in my experience, for sites with large amounts of editorial data and not-insignificant caching concerns, the possibility of persisting that cached data is what gets people excited.

To contrast this behavior, if you are using memcached for your cache and you restart it, the data store is going to be empty when it returns. While this can sometimes be convenient when troubleshooting, in the real world this could bring your site to a crawl if it occurs unexpectedly. The ability to restart Redis, and not worry about having an empty cache is worth the price of admission on its own.

Note: It is possible to disable persistence in Redis if you so choose.
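For reference, persistence is controlled through directives in redis.conf; the values below are illustrative, not recommendations:

```
# RDB snapshotting: dump the dataset to disk every 60 seconds
# if at least 1000 keys changed.
save 60 1000

# Or use the append-only file for more durable persistence.
appendonly yes

# To disable RDB persistence entirely, clear all "save" points:
# save ""
```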

Implementing Redis for Drupal

So how do we get this going? Here’s a secret, it’s even easier than setting up memcached.

  1. Install Redis. Redis can be configured on the same box as your web server or on its own. To install Redis on Ubuntu run the following commands:
sudo apt-get update && sudo apt-get upgrade

sudo apt-get install software-properties-common

sudo add-apt-repository ppa:chris-lea/redis-server

sudo apt-get update && sudo apt-get upgrade

sudo apt-get install redis-server

  2. Configure Redis. Redis is ready to run and is configured to persist data out of the box, but if you wish to tweak its settings you can start with Linode’s guide for installing and configuring Redis on Ubuntu and other distros.
  3. Install the PhpRedis library, then download and install the Redis Drupal module. You do not need to enable this module.
  4. Configure your Drupal site to use Redis for caching instead of the Drupal cache database tables. In settings.php or settings.local.php add:
/**
 * Redis Configuration.
 */
$conf['chq_redis_cache_enabled'] = TRUE;
if (isset($conf['chq_redis_cache_enabled']) && $conf['chq_redis_cache_enabled']) {
  $settings['redis.connection']['interface'] = 'PhpRedis';
  $settings['cache']['default'] = 'cache.backend.redis';
  // Note that unlike memcached, redis persists cache items to disk so we can
  // actually store cache_class_cache_form in the default cache.
  $conf['cache_class_cache'] = 'Redis_Cache';
}

Let It Fly

Between its powerful features and its simple setup, Redis can be an attractive alternative to memcached for Drupal caching, not only in production, but everywhere. If you are still skeptical, you could even run Redis in parallel with memcached for an existing site. You can switch between the two with just some small changes to your settings file until you are comfortable moving to Redis completely.

Mar 23 2017
Mar 23

In my last blog post, I set out to learn more about D8's cache API. What I finally wrote about in that blog post made up only a small part of the journey taken.

Along the way I used:

  • drush cutie to build a stand-alone D8 environment
  • a custom module to build a block
  • Twig templates to output the markup
  • dependency injection for the service needed to obtain the current user's ID

It was the last one, dependency injection, that is the inspiration for this post. I have read about it, I feel like I know it when I see it, but when Adam pointed out that my sample code wasn't using dependency injection properly, I set out to right my wrong.

Briefly, Why Dependency Injection?

Using dependency injection properly:

  • Ensures decoupled functionality, which is more reusable.
  • Eases unit testing.
  • Is the preferred method for accessing and using services in Drupal 8.

So, Where Was I?

In the aforementioned D8 Cache API blog post, I was building a block Plugin and grabbing the current user's ID with the global Drupal class, based on an example from D.O.

$uid = \Drupal::currentUser()->id();

With a single line of code placed right where I needed it, I could grab the current user's ID and it worked a charm. However, as Adam noted, that is not the preferred way to do things.

"Many of the current tutorials on Drupal 8 demonstrate loading services statically through the \Drupal class. Developers who are used to Drupal 7 may find it faster to write that way, but using dependency injection is the technically preferred approach." - From Acquia's Lesson 8.3 - Dependency injection

Oops, guilty as charged! That global Drupal class is meant to be a bridge between old, procedural code and D8's injected, object-oriented ethos. It’s located in core/lib/Drupal.php and begins with this comment:

/**
 * Static Service Container wrapper.
 *
 * Generally, code in Drupal should accept its dependencies via either
 * constructor injection or setter method injection. However, there are cases,
 * particularly in legacy procedural code, where that is infeasible. This
 * class acts as a unified global accessor to arbitrary services within the
 * system in order to ease the transition from procedural code to injected OO
 * code.
 */

Further down you find the Drupal::service() method, which is a global method that can return any defined service. Note the comment Use this method if the desired service is not one of those with a dedicated accessor method below. If it is listed below, those methods are preferred as they can return useful type hints.

/**
 * Retrieves a service from the container.
 *
 * Use this method if the desired service is not one of those with a dedicated
 * accessor method below. If it is listed below, those methods are preferred
 * as they can return useful type hints.
 *
 * @param string $id
 *   The ID of the service to retrieve.
 *
 * @return mixed
 *   The specified service.
 */
public static function service($id) {
  return static::getContainer()->get($id);
}

To determine what it means by “...those with a dedicated accessor method below.”, refer to the methods mentioning the return of services, like Returns the time service. from Drupal::time() or Returns the form builder service. from Drupal::formBuilder(). These dedicated accessor methods are preferred because they can return useful type hints. For instance, in the case of returning the time service, @return \Drupal\Component\Datetime\TimeInterface.
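To make that concrete, both of these lines return the same time service, but the dedicated accessor documents a concrete return type that IDEs and static analysis can use:

```php
// Generic lookup: the docblock only promises "mixed".
$time = \Drupal::service('datetime.time');

// Dedicated accessor: documented to return
// \Drupal\Component\Datetime\TimeInterface.
$time = \Drupal::time();
$request_time = $time->getRequestTime();
```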

Okay, so that’s some explanation of the global Drupal class, but I wanted to use proper dependency injection! I figured I would find a quick dependency injection example, alter it for my needs and finish off the blog post. Boy, was I in for a surprise.

The Journey

I found examples, read articles, but nothing seemed to work. Adam even sent me some sample code from another project, but it didn't help me. I focused on the wrong things or couldn't see the forest through the trees. Based on the examples, I couldn't figure out how to use dependency injection AND get the current user's ID! What was I doing wrong?

Maybe I was passing the wrong type of service into my __construct method?

public function __construct(WrongService $wrong_service) {

Maybe I was missing a crucial use statement?

use Drupal\Core\Session\RightService;

I always felt one step away from nirvana, but boy, was I taking a lot of wrong steps along the way. The structure of my code matched the examples, but the services I tried never returned the user ID I needed.

Troubleshooting in the Dark

My troubleshooting focused on all the use statements. I thought I hadn't pulled the right one(s) into my code - use ... this, use ... that. I always got the error ...expecting this, got NULL. I would Google the error message, find some other examples, try them, same error or different error. Which use statement was I missing?! I was working with block Plugin code. Wait, did the dependency injection have to be in the Controller and not the Plugin? Why not just in the Plugin? When did the room get dark? How long have I been standing in this dark room? What time is it?

Finally, I took a deep breath and focused in on the fact that I was building a Plugin. I Googled "dependency injection in plugins". Eureka! The very first result was titled Lesson 11.4 - Dependency injection and plugins (link below) and the results blurb had these crucial sentences:

"Plugins are the most complex component to add dependency injection to. Many plugins don't require dependency injection, making it sometimes challenging to find examples to copy." - From Acquia's Lesson 11.4 - Dependency injection and plugins

I reiterate: ...making it sometimes challenging to find examples to copy.

"The key to making plugins use dependency injection is to implement the ContainerFactoryPluginInterface. When plugins are created, the code first checks if the plugin implements this interface. If it does, it uses the create() and __construct() pattern, if not, it uses just the __construct() pattern." - From Acquia's Lesson 11.4 - Dependency injection and plugins

“In order to have your new plugin manager use the create pattern, the only thing your plugins need is to implement the ContainerFactoryPluginInterface.” - From https://www.lullabot.com/articles/injecting-services-in-your-d8-plugins

(Speaking of the plugin manager and its create pattern, you can also take a gander at FormatterPluginManager::createInstance)

So, I was incorrectly using this standard Block definition:

class HeyTacoBlock extends BlockBase {

When I should have also been adding implements ContainerFactoryPluginInterface as follows:

class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

And this must be why I get paid the big bucks; not because I know everything, but because I submit myself to the sublime torture of failing at the simplest of things!

In the vein of keeping it simple, when you are dealing with injecting services in plugin classes, you need to implement an additional interface - ContainerFactoryPluginInterface - and if you look at that interface, you see the create() method has extra parameters.

The Plugin create() Method With Extra Parameters from ContainerFactoryPluginInterface

public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition)

Compare the Above With a Typical Controller’s create() Method

public static function create(ContainerInterface $container)

You Thought I was Done?

Just to drill the point home and help it sink in, here’s a full example of dependency injection in a Block Plugin. Note the use statements, the implements ContainerFactoryPluginInterface and the four parameters in the create() method. At the end of the day, this was all done to get the user’s account information into the __construct() method, using the $account variable, like so: $this->account = $account;. After that, I can grab the user ID anywhere in my class with a simple $user_id = $this->account->id();


<?php

namespace Drupal\heytaco\Plugin\Block;

use Drupal\Core\Block\BlockBase;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Session\AccountProxyInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Provides a Hey Taco Results Block.
 *
 * @Block(
 *   id = "heytaco_block",
 *   admin_label = @Translation("HeyTaco! Leaderboard"),
 * )
 */
class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

  /**
   * @var \Drupal\Core\Session\AccountProxyInterface
   */
  protected $account;

  /**
   * @param \Symfony\Component\DependencyInjection\ContainerInterface $container
   * @param array $configuration
   * @param string $plugin_id
   * @param mixed $plugin_definition
   *
   * @return static
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $container->get('current_user')
    );
  }

  /**
   * @param array $configuration
   * @param string $plugin_id
   * @param mixed $plugin_definition
   * @param \Drupal\Core\Session\AccountProxyInterface $account
   */
  public function __construct(array $configuration, $plugin_id, $plugin_definition, AccountProxyInterface $account) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->account = $account;
  }

}

Compare all of the above with this example of dependency injection in a Controller (as opposed to a Plugin), where only AccountInterface is injected directly into the __construct() method. It seems simpler because ControllerBase already implements ContainerInjectionInterface for you, and there are fewer inherited arguments to pass through create().


<?php

namespace Drupal\heytaco\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Session\AccountInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Controller routines for HeyTaco! block.
 */
class HeyTacoController extends ControllerBase {

  protected $account;

  public function __construct(AccountInterface $account) {
    $this->account = $account;
  }

  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('current_user')
    );
  }

}

To Summarize

I wrote this blog post for two reasons:

  1. To get more Drupal 8 Plugin dependency injection content out there to help the next person find what they're Googling for.

  2. To vent the frustration of being a Drupal expert and feeling like a dolt.

So remember, when you're dealing with dependency injection in Drupal 8 plugins, there's an additional interface to implement and some extra parameters in the create() method!

p.s. Oct 2017 - I added parent::__construct($configuration, $plugin_id, $plugin_definition); to my HeyTacoBlock's __construct method after the Twitter-verse asked why it was missing.

Mar 08 2017

For visualizing data, D3 rules. To show an example of how to integrate D3 with Drupal 8, let’s take a custom module created in Drupal 7 and refactor it for Drupal 8.

The original D7 module was an exercise in sorting a range of integers from user entered form inputs. To see it in action, install and enable the custom module in a vanilla Drupal 7 site. Here's the module in an animated gif taking the form inputs with D3 handling the graphing/visualization:

The challenge was to take the total number of integers (represented by the purple bars) and generate a random set of numbers ranging from the first delimiting integer to the second delimiting integer. The Step and Play buttons show the animation of sorting the randomly generated numbers in descending order.
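The logic the animation steps through can be sketched in plain JavaScript: generate random integers between the two delimiters, then bubble-sort them in descending order. This is an illustrative version, not the module's actual code; the function names are made up:

```javascript
// Generate `total` random integers between the two delimiting values
// (inclusive). Illustrative only, not the module's implementation.
function generateNumbers(total, min, max) {
  const numbers = [];
  for (let i = 0; i < total; i++) {
    numbers.push(min + Math.floor(Math.random() * (max - min + 1)));
  }
  return numbers;
}

// Classic bubble sort, ordering the values in descending order the same
// way the Step/Play animation does, one adjacent swap at a time.
function bubbleSortDescending(numbers) {
  const sorted = numbers.slice();
  for (let i = 0; i < sorted.length - 1; i++) {
    for (let j = 0; j < sorted.length - 1 - i; j++) {
      if (sorted[j] < sorted[j + 1]) {
        // Swap adjacent values that are out of order.
        [sorted[j], sorted[j + 1]] = [sorted[j + 1], sorted[j]];
      }
    }
  }
  return sorted;
}

console.log(bubbleSortDescending(generateNumbers(10, 1, 99)));
```

The module animates each adjacent swap from the inner loop, which is why the Step button advances the visualization one comparison at a time.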

In Drupal 7, the module file contains the typical form elements and menu callbacks using hook_menu(). The heavy lifting is done by the custom JavaScript file that leverages the D3 library to load the bar chart and to animate the sorting.

Converting Drupal 7 hook_menu() items

In Drupal 8, we have to convert hook_menu() to use Drupal 8’s routing system by adding a MODULE.routing.yml file. To break down the pieces, let’s start with creating a route for the path/callback in a new file called bubblesort.routing.yml. In this case, there are two routes that need defining: the path to the form itself and a path to the JSON data generated from the form inputs needed by the custom JavaScript.

bubblesort.form:
  path: '/bubblesort'
  defaults:
    _title: 'Bubble Sort'
    _form: '\Drupal\bubblesort\Form\BubblesortForm'
  requirements:
    _permission: 'access content'

bubblesort.json:
  path: '/bubblesort/json/{data}'
  defaults:
    _title: 'Bubble JSON'
    _controller: '\Drupal\bubblesort\Controller\BubblesortController::bubblesort_json'
  requirements:
    _permission: 'access content'


Next, let’s take a closer look at the controller classes. The form controller lives in the src/Form directory and is called BubblesortForm.php. This code handles how the form is built, validated, and submitted. In Drupal 7, this is the equivalent of a form function passed as a page argument to hook_menu().


<?php

namespace Drupal\bubblesort\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * Builds the bubblesort form.
 */
class BubblesortForm extends FormBase {

  /**
   * {@inheritDoc}
   */
  public function getFormId() {
    return 'bubblesortform';
  }

  /**
   * {@inheritDoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state) {
    global $base_url;
    // Use hook_theme to generate form render array.
    $form['#theme'] = 'bubblesortform';
    $form['#attached']['library'][] = 'bubblesort/bubblesort-form';
    $form['#attached']['drupalSettings']['baseUrl'] = $base_url;

    $form['numbers_total'] = array(
      '#type' => 'number',
      '#title' => $this->t('Total number of bars:'),
      '#required' => TRUE,
      '#min' => 1,
      '#max' => 35,
    );

    $form['integer_min'] = array(
      '#type' => 'number',
      '#title' => $this->t('First number:'),
      '#required' => TRUE,
      '#min' => 1,
      '#max' => 99,
    );

    $form['integer_max'] = array(
      '#type' => 'number',
      '#title' => $this->t('Second number:'),
      '#required' => TRUE,
      '#min' => 1,
      '#max' => 99,
    );

    $form['submit'] = array(
      '#type' => 'submit',
      '#value' => $this->t('Shuffle'),
    );

    $form['step_button'] = array(
      '#type' => 'button',
      '#value' => $this->t('Step'),
    );

    $form['play_button'] = array(
      '#type' => 'button',
      '#value' => $this->t('Play'),
    );

    return $form;
  }

  /**
   * {@inheritDoc}
   */
  public function validateForm(array &$form, FormStateInterface $form_state) {
    $values = $form_state->getValues();
    if ($values['integer_max'] <= $values['integer_min']) {
      $form_state->setErrorByName('integer_max', $this->t('Second number must be greater than the first number.'));
    }
  }

  /**
   * {@inheritDoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {
    drupal_set_message($this->t('@total bars between @first and @second', array(
      '@total' => $form_state->getValue('numbers_total'),
      '@first' => $form_state->getValue('integer_min'),
      '@second' => $form_state->getValue('integer_max'),
    )));
  }

}

Attaching Libraries

The following two lines from the file above attach the custom JavaScript and CSS to the form and pass the base URL using drupalSettings (formerly Drupal.settings in Drupal 7) :

$form['#attached']['library'][] = 'bubblesort/bubblesort-form';
$form['#attached']['drupalSettings']['baseUrl'] = $base_url;

The library is defined by the bubblesort.libraries.yml file in the root of the module folder. Multiple libraries can be defined as entries detailing dependencies and assets such as CSS and JavaScript files. Here is the bubblesort.libraries.yml:

bubblesort-form:
  version: 1.x
  css:
    theme:
      css/bubblesort.css: {}
  js:
    js/d3.min.js: {}
    js/bubblesort.js: {}
  dependencies:
    - core/drupal
    - core/jquery
    - core/drupalSettings

Notice that we have to include drupalSettings and jquery from core as dependencies. In Drupal 8, these dependencies aren’t included by default as they were in Drupal 7.

Passing JSON to D3

D3 enables the mapping of an arbitrary dataset to a Document Object Model (DOM), and then allows for manipulation of the document by data-driven changes. D3 can accept text-based data as a plain text file, CSV, or in JSON format. Drupal 8 provides RESTful web services and serialization out-of-the-box to create views that generate JSON data. In this particular use case however, we’re creating our own route and generating the data from form inputs in the controller.
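As a rough illustration of that data-to-document idea in plain JavaScript (no D3): mapping an array of values to pixel bar widths is exactly the kind of transformation a D3 linear scale automates. The helper below is a hypothetical stand-in, not part of the module:

```javascript
// Scale each data value to a bar width in pixels, the way a D3 linear
// scale would. Plain-JS stand-in for illustration only.
function barWidths(data, maxWidth) {
  const max = Math.max(...data);
  return data.map(value => Math.round((value / max) * maxWidth));
}

console.log(barWidths([10, 25, 50], 100)); // → [ 20, 50, 100 ]
```

D3 then binds each scaled value to a DOM element, so a change in the data drives a change in the rendered bars.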

In order for D3 to map all the randomly generated numbers in a bar chart, the data points are returned in a simple array of values as a JsonResponse object. To create a path that only displays JSON, a function in the controller takes the form parameters and returns the correct number of random integers delimited by minimum and maximum values in a JSON callback.

The controller class for generating the JSON needed by the custom JavaScript can be found in the src/Controller directory and is called BubblesortController.php:


<?php

namespace Drupal\bubblesort\Controller;

use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\JsonResponse;

/**
 * Controller for bubblesort routes.
 */
class BubblesortController extends ControllerBase {

  /**
   * Returns JSON data of form inputs.
   *
   * @return \Symfony\Component\HttpFoundation\JsonResponse
   *   A JSON response containing the sorted numbers.
   */
  public function bubblesort_json($data) {
    if (empty($data)) {
      $numbers = $this->numbers_generate();
    }
    else {
      $fields = array();
      $data_parts = explode('&', $data);
      foreach ($data_parts as $part) {
        $fields[] = explode('=', $part);
      }
      // Loop through each field and grab values.
      $total = $range1 = $range2 = NULL;
      foreach ($fields as $field) {
        if (!empty($field[1])) {
          switch ($field[0]) {
            case 'numbers_total':
              $total = $field[1];
              break;

            case 'integer_min':
              $range1 = $field[1];
              break;

            case 'integer_max':
              $range2 = $field[1];
              break;
          }
        }
      }
      // Generate the numbers.
      $numbers = $this->numbers_generate($total, $range1, $range2, FALSE);
    }
    // Return a response as JSON.
    return new JsonResponse($numbers);
  }

  /**
   * Generates random numbers between delimiters.
   *
   * @param int $total
   *   The total number of bars.
   * @param int $range1
   *   The starting number.
   * @param int $range2
   *   The ending number.
   * @param bool $sort
   *   Whether to sort the numbers in descending order.
   *
   * @return array
   *   The randomly generated numbers.
   */
  private function numbers_generate($total = 10, $range1 = 1, $range2 = 100, $sort = FALSE) {
    $numbers = range($range1, $range2);
    // Randomize the full range, then keep the requested number of values.
    shuffle($numbers);
    $numbers = array_slice($numbers, 0, $total);
    if ($sort) {
      rsort($numbers);
    }
    return $numbers;
  }

}

The functions declared in the controller above are the same callbacks that are used in hook_menu() of the Drupal 7 version of the module.

Using hook_theme to create a render array

The bar chart for stepping through or playing the sorting animation of the randomly generated numbers needs a place to be appended in the form template. We use hook_theme() in the module file to generate a render array for the form:


<?php

/**
 * @file
 * Contains bubblesort.module.
 */

/**
 * Implements hook_theme().
 */
function bubblesort_theme($existing, $type, $theme, $path) {
  return array(
    'bubblesortform' => array(
      'template' => 'bubblesort',
      'render element' => 'form',
    ),
  );
}

In the Twig template, there is a <div> element with class "chart" that will get populated with the sorting bars through jQuery selectors:

<form{{ attributes }}>
  {{ form.form_build_id }}
  {{ form.form_token }}
  {{ form.form_id }}
  {{ form.numbers_total }}
  {{ form.integer_min }}
  {{ form.integer_max }}
  {{ form.submit }} {{ form.step_button }} {{ form.play_button }}
</form>
<div class="chart"></div>

Alternatively, you could create a markup form element and pass the necessary markup into the form.


The juice of the sorting animation lives in the custom JavaScript file called bubblesort.js. The functionality is almost identical to the JavaScript file in the Drupal 7 module, save for two instances. First is the syntax for passing a variable via drupalSettings.

Compare the following line in Drupal 7:

base_path = Drupal.settings.basePath;

In Drupal 8, it changes to:

base_path = drupalSettings.baseUrl;

HTML5 elements

The other minor difference lies in the selector for grabbing all the form inputs. In the Drupal 7 version, the selector looks like:

values = (!empty) ? $('form input:text').serialize() : '';

While in Drupal 8, the selector is:

values = (!empty) ? $('input[type="number"]').serialize() : '';

The reason for this is that in Drupal 8, we’re using a new HTML5 element '#type' => 'number' for the form elements, while in Drupal 7, the input fields were of type text.
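The serialized string those selectors produce is what eventually arrives as the controller's {data} route parameter, where it gets split apart on & and =. Here is a rough plain-JavaScript equivalent of what .serialize() produces for the three number inputs (the helper name is hypothetical, and unlike jQuery it encodes spaces as %20 rather than +):

```javascript
// Hypothetical stand-in for jQuery's .serialize() on the number inputs.
// Produces the name=value&name=value string the controller parses.
function serializeInputs(values) {
  return Object.entries(values)
    .map(([name, value]) => `${encodeURIComponent(name)}=${encodeURIComponent(value)}`)
    .join('&');
}

console.log(serializeInputs({ numbers_total: 10, integer_min: 1, integer_max: 99 }));
// → numbers_total=10&integer_min=1&integer_max=99
```

That output string maps one-to-one onto the cases in the controller's switch statement.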

Our Port to Drupal 8 is Complete!

The meat of the sorting functionality resides in a function called bubblesort_display(data) in the JavaScript file. This is where the D3 library is used to draw and sort the bar chart based on the data passed into it from the JSON callback. Parsing the JavaScript is beyond the scope of this article, but there are tons of examples and tutorials for how to implement the wild variety of D3 charts and graphs into an application (see Additional Resources below).

So this wraps up the conversion of a Drupal 7 custom module integrated with D3 to Drupal 8. The most challenging parts of such a migration center around the routing mechanism and extending the controller classes to provide the functionality that was defined in a procedural approach using hook_menu(). The method for passing data to a custom JavaScript file using the D3 library is the same for Drupal 8 as it is for Drupal 7 - using a JSON callback to return the data that can be consumed by D3 methods.

The Drupal 8 version of the bubblesort custom module is also available for download and can be readily installed in a vanilla Drupal 8 site by just enabling it. Here’s a gif of bubblesort in action in Drupal 8:

Additional Resources

Mar 06 2017

One of the best and most-touted pieces of Drupal 8 is the new configuration management system. This system makes it really easy for developers to export configuration into code. Before that, developers had to rely on a complicated combination of the Features, Strongarm, and UUID modules. Even for seasoned developers, this was often a nightmare. There were overrides and locking, crossing fingers and hoping that features would actually revert, having them not revert for no reason at all, etc. It was a mess. Not to mention trying to put all of the features in logically named and organized modules, which could grow and grow as the site got bigger and added more functionality. And don’t even get me started on dependencies. Configuration management in Drupal 8 aims to solve all of these problems.

If you've been on the fence about moving from Drupal 7 to Drupal 8, as a developer, this could be a reason to switch. Here’s a breakdown of key differences and considerations:

UI or Drush?

Drush can make your features workflow a lot easier, but this also depends on your site. If you're adding a lot of things to a feature, the UI can be easier than listing and typing or copy/pasting/tabbing components into commands. Personally, I've fallen into a pattern of using the Features UI because many of the sites I work on have complex features. There is also a UI for config in D8, but the drush import and export is pretty near flawless. I've yet to find a reason to want to use the UI, though I appreciate its existence as a backup. However, we will go through both the UI and Drush for Drupal 7 and Drupal 8.

Exporting Configuration In Drupal 7

Ok, so you've made the changes you need to export for the existing feature.

Using the UI

  1. So now we need to navigate to Structure > Features and the appropriate feature.
  2. We click Recreate and wait for quite some time for the feature to load.
  3. We poke around all of the components, making sure we've found all of the appropriate bits of our feature.
  4. We might update the version number, if this is a major change.
  5. We set the file path and click generate, or download.
  6. If we downloaded the file, we find the file and extract it into the appropriate location.
  7. We go back into features and check that feature is in the correct state.

Using Drush

Now, if you are only updating existing components and not adding or deleting any components at all, drush features-update myfeature will do the trick. You can then check the state of the feature with drush fl.


When adding components, list components with drush features-components.

Add the components as an argument to drush features-export, like drush features-export myfeature mycomponent1 mycomponent2


The above example exports the image component into a new feature called chq_images_test_feature.

In Drupal 8

Using the UI

Go to admin/config/development/configuration to access the UI. In the Configuration UI in Drupal 8, you have a few options. You can Import, Export, or Synchronize. You can also view a handy diff of the changes. The screen looks something like this, with an "Import All" button at the bottom:


The Import tab allows you to import a full configuration archive from a file:


or a single configuration item from pasted code:


The Export tab similarly allows you to export the entire configuration into one file:


or export a single item into code that you can copy:


At first, I missed the ability to group configuration as you could in Features, but the interface is so much simpler, and the dropdowns are much easier to use - not to mention they are more reliable than the Ajax-dependent checkboxes of Features.

Using Drush

While Drupal 8's config UI is simple, drush is simpler still.

To export configuration:

Run drush config-export -y. You will see a list of exported configuration changes like this:


To import configuration:

Run drush config-import -y. You will see a list of imported configuration something like this:


That's it. It's one of my favorite things about developing in Drupal 8. It's quick and streamlined and only takes seconds. You don't need companion modules to help export everything. It doesn't matter if you're adding, deleting, updating, or creating. In the 12+ months that I've been working with a Drupal 8 production site, I've only had a couple of small instances of config not syncing correctly, much less frequently than with Features, and they were easier to troubleshoot.

Wrapping Up

Now that we've passed the first anniversary of Drupal 8's release, more and more people and companies are considering moving to Drupal 8. There are dozens of things to consider, and this is just a small one, but it's a great improvement to the developer experience.

What are your thoughts on configuration management in Drupal 8? Comments, questions, concerns? Reach out to us on Twitter - @ChromaticHQ and tell us what you think!

Mar 01 2017

What is a Code Review?

A good code review is like a good home inspection. It ensures that the original architectural drawings were followed, verifies that the individual subsystems are built and integrated correctly, and certifies that the final product works and adheres to standards. In the world of software development, a “code inspection” often takes the form of a pull request. A pull request, by its nature, simply shows line by line information for all of the updated or added files. This allows developers, who didn't write the code, to perform a code review and approve the changes before they are merged into the codebase.

Why Code Reviews are Important

Code reviews are not just a quality assurance (QA) process with a different name. They offer a wide variety of benefits that go well beyond the inherent advantages of a QA process.

Training / Mentorship

Deadlines often mean that time is not set aside for mentoring developers, or when it is, finding scenarios to illustrate various ideas are hard to find. Code reviews offer a great way to organically integrate training and mentorship into everyday workflows. It allows developers to learn from each other and leverage the years of experience from everyone involved. Senior developers are able to impart background knowledge of past decisions and provide guidance on a variety of best practices.


Knowing that another developer will be looking over your changes line by line forces you to think more deeply about your code as you write it. Instead of thinking “what is the fastest way to accomplish this task,” you are forced to think about the right way to build functionality. Your end goal is code that not only fulfills the scope of the ticket, but that is easy for others to understand and review.

Sharing the Load / Burning Down the Silos

Software development often requires specialization. When everyone’s work is visible in a pull request, it offers others the chance to see how unfamiliar problems are solved. This allows everyone to ask questions that help others, encourage additional learning, and diversify developers’ knowledge. Also, when things do break, more members of the team will have the know-how to quickly track down the bug.

Code Quality

Peer reviews help ensure that any code that is committed to the codebase is not only functional, but meets quality standards. Typically, this means that it:

  1. Works.
  2. Has good documentation.
  3. Meets code style guidelines and best practices.
  4. Is performant.
  5. Doesn't add unnecessary lines (uses existing APIs).
  6. Properly abstracts code where appropriate (adheres to D.R.Y. principles).
  7. Is future-proofed as well as possible.


Senior developers may know about needs for future planned functionality or the needs of another internal team and can help steer current architecture and code changes to support those efforts. Some additional work now may pay huge dividends in the future by simplifying the addition of new features and maintaining existing functionality.

Reducing Risk

A peer review requires a pull request that is able to be comprehended by another developer. By introducing a peer review workflow, pull requests often decrease in size to aid the review process. This reduces risk, as it encourages smaller, more discrete change sets instead of monolithic changes in a single pull request. Additionally, by having developers with varying areas of expertise review the code, potential security risks are reduced by catching issues during the peer review process instead of after they’ve made it out to production.

How to Do a Code Review

Creating a Pull Request

For a peer review process to reach its full potential, it requires a skillful reviewer, but also a well-prepared pull request. A pull request might receive many reviews, but the first review it should receive is from its author. When preparing your own pull request for review by others, there are many things that can be done. These include:

  • Provide context for the changes by including a summary.
  • Document outstanding questions/blockers if further progress is pending feedback.
  • If necessary, clear up any potential areas of confusion ahead of time with preemptive comments.
  • Realize that if confusion could exist, perhaps the code or the code comments need improvement.
  • Attach screenshots of the functionality in action to provide additional context.
  • Ensure reviewers are aware of the state of your pull request and are notified when their review is needed.

An ideal PR

Reviewing a Pull Request

Providing feedback on a pull request is a lot like coaching: it requires building trust, adjusting to the situation and the player, and treating everything as a teachable moment. Let’s walk through some of Chromatic’s tried and true techniques.

  • Cite requests with links to posts, code standards guidelines, etc. when pointing out lesser known things or to build rapport with a developer not aware of your expertise.
  • Ask leading questions that help other developers see things from your perspective instead of just saying "change this."
  • Keep it fun and where possible, introduce some self-deprecating humor. Bring up your own past mistakes as evidence for why an alternate approach should be considered.
  • Acknowledge when you are nitpicking to make feedback more palatable.
  • Avoid pointing out problems without providing solutions.
  • Denote a few instances of a repeated mistake or enough to establish a pattern, then request that they all be fixed.
  • Ask questions. Doing so allows you to learn something and the author is forced to think critically about their decisions.
  • Explain the why of your requested change, not just the what.
  • Avoid directing negativity at the developer, instead focus on how the code could be improved.
  • Enforce the use of a code sniffer to prevent wasting time nitpicking code style.
  • Make note of the good parts of the code and celebrate when you or others learn new techniques through the process.
  • Avoid bikeshedding on trivial issues.
  • Realize when optimization is no longer useful and avoid over-engineering solutions.
  • Remember that the goal isn’t to find fault, it is to improve the codebase and teach others along the way.
  • Written communication may not come off in the same tone as it was intended, so err on the side of friendliness.
  • Treat others how you want to be treated. This is a simple rule, but such an important one. Consider how you would feel receiving this review, and be sure to write your review accordingly.

Establishing a Workflow

A pull request won’t merge itself, request feedback, or implement the feedback it was given. Thus, having clear expectations for who creates, updates, reviews, and merges a pull request is crucial. For example, after a pull request has been created, is it the responsibility of the developer to solicit feedback until it is approved and merge it themselves, or does a development manager take ownership of every pull request and merge it when they deem it ready? A good workflow should have just enough rules to work, without overly burdening developers and will likely vary per project or per team. The correct workflow is the one that works for you, the important part is that you have one, and that it includes a code review.

In “Review”

Adding a code review step to your development workflow might slow things down at first. However, in short order it will invariably show itself to be a worthwhile investment. Code reviews are an invaluable tool for mentoring, improving accountability, sharing knowledge and enhancing overall code quality. Now that you have reviewed this post, if you approve it, then merge it into your team’s workflow.

The bad puns are done, but if you want to continue the discussion on code reviews, reach out to us on Twitter @chromatichq.

Feb 20 2017

Pay them in Tacos!

Say our partners, who form the Chromatic brain trust (Chris, Dave, and Mark), do something crazy like base our reward system on the number of HeyTaco! emojis given out amongst team members in Slack. (Remember, this is an, ummmm, hypothetical example.) Now, say we wanted to display the taco leaderboard as a block on the Chromatic HQ home page. It's not like the taco leaderboard needs minute-by-minute updates, so it is a good candidate for caching.

Why Cache?

What do we save by caching? Grabbing something that has already been built is quicker than building it from scratch. It's the difference between grabbing a Big Mac from McDonald's vs buying the ingredients from the supermarket, going home and making a Big Mac in your kitchen.

So, instead of each page refresh requiring a call to the HeyTaco! API, we can just tell Drupal to cache the leaderboard block and display the cached results. Instead of taking seconds to generate the page holding the block, it takes milliseconds to display the cached version. (ex. 2.97s vs 281ms in my local environment.)

Communicate with your Render Array

We have to remember that it's important that our render array - the thing that renders the HTML - knows to cache itself.

"It is of the utmost importance that you inform the Render API of the cacheability of a render array." - From D.O.'s page about the cacheability of render arrays

The above quote is what I'll try to explain, showing some of the nitty gritty with the help of a custom module and the HeyTaco! block it builds.

I created a module called heytaco and below is the build() function from my HeyTacoBlock class. As its name suggests, it's the part of the code that builds the HeyTaco! leaderboard block.

/**
 * Provides a Hey Taco Results block.
 *
 * @Block(
 *   id = "heytaco_block",
 *   admin_label = @Translation("HeyTaco! Leaderboard"),
 * )
 */
class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

  // __construct() and create() functions here.

  /**
   * {@inheritdoc}
   */
  public function build() {
    $user_id = $this->account->id();
    return array(
      '#theme' => 'heytaco_block',
      '#results' => $this->returnLeaderboard($user_id),
      '#partner_asterisk_blurb' => $this->isNotPartner($user_id),
      '#cache' => [
        'keys' => ['heytaco_block'],
        'contexts' => ['user'],
        'tags' => ['user_list'],
        'max-age' => 3600,
      ],
    );
  }

}

For the purposes of the rest of the blog post, I'll focus on the above code's #cache property, specifically its metadata:

  • keys
  • contexts
  • tags
  • max-age

I'm going to go through them similarly to (and inspired by) what is on the aforementioned D.O. page about the cacheability of render arrays.


Keys

From Drupal.org: ...what identifies the thing I'm rendering?

In my words: This is the "what", as in "What entity is being rendered?". In my case, I'm just showing the HeyTaco! block and it doesn't have multiple displays from which to choose. (I will handle variations later using the contexts parameter.)

Many core modules don't include keys at all, or they use single keys. For instance:

toolbar module

  • 'keys' => ['toolbar'],

dynamic_page_cache module

  • 'keys' => ['response'],

views module

After looking through many core modules, I (finally) found multiple values for a keys definition in the views module, in DisplayPluginBase.php:

  '#cache' => [
    'keys' => ['view', $view_id, 'display', $display_id],
  ],

So, in the views example above, the keys are telling us the "what" by telling us the view ID and its display ID.

I'd also mention that on D.O. you will find this tidbit: Cache keys must only be set if the render array should be cached.


Contexts

From Drupal.org: Does the representation of the thing I'm rendering vary per ... something?

In my words: This is the "which", as in, "Which version of the block should be shown?" (Sounds a bit like keys, right?)

The Finalized Cache Context API page tells us that when cache contexts were originally introduced they were regarded as "special keys" and keys and contexts actually intermingled. To make for a better developer experience, contexts was separated into its own parameter.

Going back to the need to vary an entity's representation, we see that what is rendered for one user might need to be rendered differently for another user, (ex. "Hello Märt" vs "Hello Adam"). If it helps, the D.O. page notes that, "...cache contexts are completely analogous to HTTP's Vary header." In the case of our HeyTaco! block, the only context we care about is the user.

Keys vs Contexts

Amongst team members, we discussed the difference between keys and contexts quite a bit. There is room for overlap between the two, and I netted out at keeping things simple: let keys broadly define the thing being represented and let contexts take care of the variations. Keys are for completely different instances of a thing (ex. different menus, users, etc.), while contexts are for varying a single instance, as in, "When should this item look different to different types of users?"
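To make that distinction concrete, here is a minimal sketch using the HeyTaco! block from my example (the markup-building method name is hypothetical):

```php
$build = [
  '#markup' => $this->buildLeaderboardMarkup(),
  '#cache' => [
    // Keys: the "what" -- this is always the HeyTaco! leaderboard block.
    'keys' => ['heytaco_block'],
    // Contexts: the "which" -- keep a separate cached copy per user, so
    // partners and non-partners each get their own rendering.
    'contexts' => ['user'],
  ],
];
```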

Different Contexts, Different Caching

To show our partners (Chris, Dave and Mark, remember?) how great they are, I have added 100 tacos to their totals without telling them. When they log in to the site, they see an unassuming leaderboard with impressive Top 3 totals for themselves.

Partners Don't Know Their Taco Stats are Padded!

Partner HeyTaco! leaderboard

However, I don't want the rest of the team feeling left out, so for them I put asterisks next to the inflated taco totals and note that those totals have been modified.

Asterisks Remind us of the Real Score

Nonpartner HeyTaco! leaderboard with asterisks

So, our partners see one thing and the rest of our users see another, but all of these variations are still cached! I use contexts to allow different caching for different people. But remember, contexts aren't just user-based; they can also be based on ip, theme or url, to name a few examples. There is a list of cache contexts that you can find in core.services.yml. Look for the entries prefaced with cache_context (ex. cache_context.user, cache_context.theme).
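One hedged aside: if the only variation really is partner vs. non-partner, a narrower context can cache far fewer copies than the per-user user context. Assuming partners are distinguished by a role, a sketch:

```php
'#cache' => [
  'keys' => ['heytaco_block'],
  // 'user' stores one cached copy per user; 'user.roles' stores one copy
  // per combination of roles, which here would mean just two variants:
  // one for partners and one for everyone else.
  'contexts' => ['user.roles'],
],
```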


From Drupal.org: Which things does it depend upon, so that when those things change, so should the representation?

In my words: What are the bits and pieces used to build the markup such that if any of them change, then the cached markup becomes outdated and needs to be regenerated? For instance, if a user changes her username, any cached instances using the old name will need to be regenerated. The tags may look like 'tags' => ['user:3'],. For HeyTaco!, I used 'tags' => ['user_list'],. This means that any user changing his/her user info will invalidate the existing cached block, forcing it to be rendered anew for everyone.
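Tags do their work through invalidation: saving any user entity invalidates user_list for you behind the scenes, but you can also invalidate tags yourself. A minimal sketch:

```php
use Drupal\Core\Cache\Cache;

// Purge everything tagged 'user_list' (including the HeyTaco! block);
// it will be rebuilt on the next request that needs it.
Cache::invalidateTags(['user_list']);
```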


From Drupal.org: When does this rendering become outdated? Is it only valid for a limited period of time?

In my words: If you want to give the rendering a maximum validity period, after which it is forced to refresh itself, then decide how many seconds and use max-age; the default is forever (Cache::PERMANENT).
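In render array terms, that looks like this (the 3600 matches the HeyTaco! snippet at the top of this post):

```php
'#cache' => [
  // Refresh at most every hour.
  'max-age' => 3600,
  // The default is forever: 'max-age' => \Drupal\Core\Cache\Cache::PERMANENT,
  // meaning the entry lives until a tag or context invalidates it.
],
```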

In Cache-tastic Conclusion

So that's my stab at exploring #cache metadata. I feel this is something that requires coding practice, with different use cases, to grasp what each metadata piece does.

For instance, I played with tags in my HeyTaco! example for quite some time. Using 'tags' => ['user:' . $user_id] only regenerated the block for the active user who changed his/her own info. So I landed on an approach of passing all the team's IDs into Cache::buildTags(), like this: Cache::buildTags('user', $team_uids). It felt ugly because I had to grab all the user IDs and put them into $team_uids manually. (What if we had thousands of users?) In my experimentation, that was the only way I could get the block updated if any user had his/her info changed.

However, after all that, Gus Childs reviewed my blog post and, since he knew of the existence of the node_list tag, he posited that all I needed to use as my tag is user_list, as in ('tags' => ['user_list'],). So, instead of manually grabbing user IDs, I just had to know to use 'user_list'. Thanks Gus!
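For reference, here is roughly what the abandoned Cache::buildTags() approach produced; the IDs in $team_uids are hypothetical, and gathering them was the manual step I disliked:

```php
use Drupal\Core\Cache\Cache;

$team_uids = [3, 7, 12];
// Cache::buildTags() glues a prefix onto each ID:
// ['user:3', 'user:7', 'user:12'] -- one tag per user, versus the
// single 'user_list' tag that covers every user at once.
$tags = Cache::buildTags('user', $team_uids);
```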

Another colleague, Adam, didn't let me get away with skipping dependency injection in my sample code. He also questioned the difference between keys and contexts and made me think more about this stuff than is probably healthy.

Feb 10 2017

Drupal has an excellent field system with unique field types for storing just about every kind of data. When entering data that requires selecting an option from a predefined set of values with an optional default value the List field is often used.

The List field type provides a UI for entering allowed values and choosing a default value. This is great when the option set is small and the default value is always the same. However, often the editorial workflow calls for options that vary per content type and a default that is not consistent.

Despite the UI giving no indication of this functionality existing, the List field supports the setting of dynamic allowed values and dynamic default values.

static allowed values in UI

For our scenario, let's suppose there is an Alignment field (field_alignment) that has "Left" and "Right" as options. However, the Article content type was added, and it needs a "Center" alignment option, while the other content types should keep only the original options. Additionally, existing content defaults to "Right" alignment, but the Article content type should use "Center" as its default value.

With that in mind, let’s walk through how to do this in Drupal 8 using the allowed_values_function and default_value_callback field configuration values.

Allowed Values

First we will alter our allowed values to add “Center” as an alignment option for Article content.

In the field instance definition configuration file, set the allowed_values_function key. Note that the allowed_values entries can be removed once an allowed_values_function is in place.


type: list_string
settings:
  allowed_values: {  }
  allowed_values_function: 'example_allowed_values_function'
module: options
locked: false
cardinality: 1

Now set up the allowed values function and place it anywhere in the global namespace (e.g. a .module file):

use Drupal\Core\Entity\ContentEntityInterface;
use Drupal\field\Entity\FieldStorageConfig;

/**
 * Set dynamic allowed values for the alignment field.
 *
 * @param \Drupal\field\Entity\FieldStorageConfig $definition
 *   The field definition.
 * @param \Drupal\Core\Entity\ContentEntityInterface|null $entity
 *   The entity being created if applicable.
 * @param bool $cacheable
 *   Boolean indicating if the results are cacheable.
 *
 * @return array
 *   An array of possible key and value options.
 *
 * @see options_allowed_values()
 */
function example_allowed_values_function(FieldStorageConfig $definition, ContentEntityInterface $entity = NULL, $cacheable = TRUE) {
  $options = [
    'left' => 'Left',
    'right' => 'Right',
  ];
  // Add a custom alignment option for Article nodes. The entity can be NULL
  // (e.g. on the field settings form), so check it before calling bundle().
  if ($entity && $entity->bundle() == 'article') {
    $options['center'] = 'Center';
  }
  return $options;
}

Once the function is in place, the UI will reflect the change and disable manual editing of the allowed options.
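If you want to verify the result programmatically, options.module's options_allowed_values() is what ultimately invokes the callback. A rough sketch (the node ID here is hypothetical, and this assumes a working Drupal 8 site):

```php
use Drupal\field\Entity\FieldStorageConfig;
use Drupal\node\Entity\Node;

// Load an Article node and the alignment field's storage definition.
$node = Node::load(1);
$definition = FieldStorageConfig::loadByName('node', 'field_alignment');

// For an Article, this should include the extra 'center' option:
// ['left' => 'Left', 'right' => 'Right', 'center' => 'Center'].
$options = options_allowed_values($definition, $node);
```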

dynamic allowed values in UI

Default Values

Now we will leverage our newly added “Center” alignment option and set it as the default value for this field when authoring Article content.

In the field base definition configuration file, set the default_value_callback.


translatable: false
default_value: {  }
default_value_callback: 'example_default_value_function'
settings: {  }
field_type: list_string

Next, set up the default value function anywhere in the global namespace as follows:

use Drupal\Core\Entity\ContentEntityInterface;
use Drupal\Core\Field\FieldDefinitionInterface;

/**
 * Sets the default value for the alignment field.
 *
 * @param \Drupal\Core\Entity\ContentEntityInterface $entity
 *   The entity being created.
 * @param \Drupal\Core\Field\FieldDefinitionInterface $definition
 *   The field definition.
 *
 * @return array
 *   An array of default values, each keyed by 'value'.
 *
 * @see \Drupal\Core\Field\FieldConfigBase::getDefaultValue()
 */
function example_default_value_function(ContentEntityInterface $entity, FieldDefinitionInterface $definition) {
  $default = 'right';
  // Article nodes should default to center alignment.
  if ($entity->bundle() == 'article') {
    $default = 'center';
  }
  return [
    ['value' => $default],
  ];
}
With all of this in place, re-import the configuration and test it out. The authoring interface should now reflect the alternate allowed values per content type, as well as the custom default value. So the next time the List field doesn't quite meet your needs, give it a second chance; it might just be the right field for the job!
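For the "re-import the configuration" step, on a Drush-managed site the commands look roughly like this (assuming the edited YAML files live in your config sync directory):

```
# Import the updated field configuration, then rebuild caches.
drush config:import -y
drush cache:rebuild
```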

