Dec 07 2018

I don't find a lot of time to get on the tools these days and sometimes I miss getting into code. Recent projects have seen me focus on strategy, architecture, data and systems integration. Despite that, I am comfortable describing myself as an expert in Drupal 7, having spent years giving the D7 codebase a forensic examination. However, despite Drupal 8.0.0 being released three years ago, on November 19, 2015, I have not yet looked at a single line of its code or even taken in a demo of its feature set.

Today that changes.

For starters I would like to see just how D8 differs from D7 when we start looking at the code used to build it and the database behind it. The areas I am interested in are:

  • Tech stack
  • Server Setup
  • Installing Drupal
  • File System Layout
  • Page Lifecycle
  • Creating a Theme
  • Site Building
  • Database
  • Deployment process
  • Drush

To that end I am going to create a small Drupal 8 brochure site for my consulting business, https://fullstackconsulting.co.uk, and every step of the way I will dig into the code to see how Drupal 8 differs from Drupal 7.

Tech Stack

It was very tempting to get up and running quickly with one of the Docker containers. While I would ultimately like to move to a containerised solution, my priority today is to look at all the nuts and bolts, starting with building the stack that Drupal will run on top of. So, while it is great to see this advancement, I am not going to dig into it today.

Instead I am going to create a Puppet profile to handle the server configuration, which will let me tweak and rebuild the server when I need to as I learn more about D8.

First job: select a database server. Many years ago MySQL was the only viable option when spinning up a new Drupal installation, but today the list of recommended database servers is MySQL, MariaDB or Percona Server.

We can also choose between versions, for instance MySQL 5.5, 5.6, 5.7 or 8.0, or the corresponding versions of MariaDB or Percona Server.

Of course, MariaDB and Percona Server are forks of MySQL, so how do we choose between them? First, a quick recap of the history. MySQL started life under the Swedish company MySQL AB; Sun Microsystems acquired MySQL AB in 2008 and Oracle acquired Sun in 2010. Oracle was already a big player in the database market and that caused concerns, although during acquisition negotiations with the European Commission, Oracle committed to keeping MySQL alive.

Some MySQL folk had already left to found Percona in 2006, while others jumped ship to create the MariaDB fork in 2009. General opinion seems to be that the MySQL community has contracted, now relying mainly on Oracle employees for commits, while the MariaDB community is thriving.

The Percona and MariaDB forks get a lot of credit for being significantly more performant and memory efficient, which is appealing after running some big Drupal 7 sites with many entity types and lots of fields. But equally, MySQL 8 claims to be 2x faster than 5.7, with up to 1.8 million queries per second.

Percona aims to stay closer to the MySQL code base, meaning updates to MySQL surface quicker in Percona than they do in MariaDB, while MariaDB tends to be more ambitious in adding new features.

Pantheon, a major managed Drupal host, has adopted MariaDB, which certainly gives a lot of confidence in that approach.

I am not going to benchmark this myself today as I am focussing on Drupal not the database engine it uses. That said, I would like to come back and revisit this topic to see which variant wins the performance contest with a heavyweight setup.

If you need to select a database server for a live project that you expect to be heavy on the database, I would suggest you consider the following:

  1. Create some actual tests to establish performance supremacy in your own context.
  2. MariaDB's broader feature set could have more value in a custom-build project than in a Drupal project, which ought to adhere closely to standards for wide compatibility. That said, do you see a feature that could add value to your project that is not available in the others?
  3. Look at your neighbours. What are they using and why?
  4. People close to MySQL, MariaDB and Percona comment that Oracle is proving to be a good steward of the MySQL open source project, so maintaining alignment is a positive thing.
  5. Does your OS have a preferred package? If not, would you be prepared to manage packages yourself in order to deviate? The ability to maintain your stack is paramount.

For starters, the stack for this project will look like this:

  • Ubuntu 18.04 LTS
  • Apache 2.4
  • MySQL 5.7 (because it is the one supported by Ubuntu 18.04 out of the box)
  • PHP 7.2
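
Since all of this will be managed by the Puppet profile mentioned earlier, here is a minimal sketch of what such a profile might look like. This is my own illustrative code, not from an existing repo; the class name and package list are assumptions for Ubuntu 18.04:

# profile/manifests/drupal_stack.pp (hypothetical profile class)
class profile::drupal_stack {
  # LAMP packages from the Ubuntu 18.04 repositories
  package { ['apache2', 'mysql-server', 'php7.2', 'libapache2-mod-php7.2', 'php7.2-mysql', 'php7.2-xml', 'php7.2-gd']:
    ensure => installed,
  }

  # Keep the web and database services running and enabled at boot
  service { ['apache2', 'mysql']:
    ensure  => running,
    enable  => true,
    require => Package['apache2', 'mysql-server'],
  }
}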

Server Setup

  1. Create a VM, I'm on Windows 10 so it will be Hyper-V
  2. Install Ubuntu 18
  3. Update package list
    1. apt update
  4. Upgrade all upgradable packages, updating dependencies:
    1. apt dist-upgrade
  5. Create a user account to own the files that will exist in the website docroot (see the sketch after this list). This user should be in the www-data group, assuming a standard Apache install. This user will allow us to move files around and execute Composer commands - Composer will complain if you try to execute it as root, more on that later.
  6. Many VM consoles lack flexibility, such as copy/paste for the command line, so it will save time if I set up key-based SSH access to the VM. But until that is set up I need an easy way to move data/files from the host machine to the VM. One easy way to achieve this is to create an iso; most VM software will let you load this onto the VM via the virtual DVD drive.
    1. I will create a private and public key that will be used to:
      1. Access the VM from our host OS without password prompt
      2. Access the git repository holding our code
    2. To create the SSH key and add it to the iso I use the Linux subsystem in Windows 10 to execute the following commands:
      1. mkdir certs
      2. ssh-keygen -t rsa -b 4096 -C "[email protected]"
        1. When prompted, change the path to the newly created directory
      3. genisoimage -f -J -joliet-long -r -allow-lowercase -allow-multidot -o certs.iso certs
        1. In case you use non-standard names, the flags in this command prevent the iso from shortening your filenames.
    3. Via your VM software load the iso into the DVD Drive.
      1. Mount the DVD Drive
        1. mkdir /mnt/cdrom
        2. mount /dev/cdrom /mnt/cdrom
        3. ls -al /mnt/cdrom
          1. You should see your SSH key listed
    4. Copy the SSH private and public key to the relevant location
      1. mkdir /root/.ssh
      2. cp /mnt/cdrom/id_rsa /root/.ssh
      3. cp /mnt/cdrom/id_rsa.pub /root/.ssh
    5. Add the public key to the authorized_keys file, to facilitate login from your host OS. Using some output redirection makes this easy:
      1. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    6. Tighten up file permissions to keep the SSH client happy
      1. chmod 0600 -R /root/.ssh
        1. You'll need to do this on both the VM and the host
    7. Find the IP address of your VM
      1. ifconfig
    8. Test your login from your host OS
      1. ssh root@VM_IP_ADDRESS -i /path/to/your/ssh_key
  7. Setup Bitbucket
    1. Add the public key to your Bitbucket/GitHub/GitLab profile.
    2. If you use a non-standard name for your SSH key, you can tell the SSH client where to find it by providing SSH config directives in /root/.ssh/config on the VM (on your host you will likely not be using root):
    3. host bitbucket.org
          HostName bitbucket.org
          IdentityFile ~/.ssh/YOUR_PRIVATE_KEY
          User git
  8. Deploy DevOps scripts to VM
    1. I want to get my Puppet profile onto the VM. One day there may be a team working on this, so I follow a process that uses Git to deploy DevOps-related scripts, including Puppet, onto the VM. This means that any time the DevOps scripts are updated it will be simple for all team members to get those updates onto their own VMs. The Git repo is managed in Bitbucket, so I need to get an SSH key onto the VM and then register it on a relevant Bitbucket account.
  9. Now I can deploy the Puppet profile from the git repo and install it on the VM
    1. mkdir /var/DevOps
    2. cd /var
    3. git clone git@bitbucket.org:YOUR_REPO.git DevOps
  10. Puppet is not contained in the default Ubuntu package repositories, but Puppet maintain their own package repository, which is what I am going to use for this setup (see the sketch after this list):
    1. https://puppet.com/docs/puppet/5.4/puppet_platform.html
  11. No Drupal development environment is complete without Xdebug. This is not available from the repositories enabled by default, so I am going to enable the Universe repository by adding the following to /etc/apt/sources.list.d/drupaldemo.list
    1. deb http://archive.ubuntu.com/ubuntu bionic universe
      deb-src http://archive.ubuntu.com/ubuntu bionic universe
      deb http://us.archive.ubuntu.com/ubuntu/ bionic universe
      deb-src http://us.archive.ubuntu.com/ubuntu/ bionic universe
      deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
      deb-src http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
      deb http://security.ubuntu.com/ubuntu bionic-security universe
      deb-src http://security.ubuntu.com/ubuntu bionic-security universe
    2. I manage these repositories via the Puppet profile
    3. I won't need this on the production system, so I can keep the production OS to official packages only, not using the Universe repository
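
For reference, a hedged sketch of what steps 5 and 10 above might look like on the VM. The user name is a placeholder of my own, and the Puppet package name follows the platform docs linked above for Ubuntu 18.04 (bionic) - check the docs for the current URL:

# Step 5: create a deploy user and add it to the Apache group
adduser deploy
usermod -aG www-data deploy

# Step 10: add Puppet's own apt repository and install the agent
wget https://apt.puppet.com/puppet5-release-bionic.deb
dpkg -i puppet5-release-bionic.deb
apt update
apt install puppet-agent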

Installing Drupal

The options are:

  1. Download the code from the project page: https://www.drupal.org/download
  2. Use Composer - a PHP dependency manager: https://getcomposer.org/

The recommended option is to use Composer; that way, dependencies on libraries besides Drupal can all be managed together.

Composer will warn if it is executed as root. Fortunately I already created a user in the www-data group, so I can use that user to execute the Composer commands. The reason root is not recommended is that some commands, such as exec, install and update, allow execution of third-party code on our system. Since plugins and scripts have full access to the user account that runs Composer, executing as root means they could cause a lot of damage if they contained malicious, or even just broken, code.

There is a kickstarter Composer template at drupal-composer/drupal-project. Not only will this install the core project, it will also install utilities such as Drush, and it will configure Composer to install Drupal themes and modules into appropriately named directories, rather than installing everything into Composer's standard /vendor directory. Using that project, the Drupal codebase is installed with this command:

composer create-project drupal-composer/drupal-project:8.x-dev my_site_name_dir --stability dev --no-interaction

Another file layout point is that it will load the core Drupal files into a subdirectory named "web".

Because Drupal is being built with Composer there is a /vendor directory, which will host all of the libraries installed by Composer. This presents another choice to make, do I:

  1. Commit the contents of the vendor directory to git
  2. Add the vendor directory to .gitignore.

The argument for committing it is that all the code our project needs is stored and versioned, and cannot change unless explicitly updated, making it stable. The argument against is that we significantly increase the size of the versioned code base and duplicate the history of the dependencies into our git repository. It is also possible to pin our project to specific versions of libraries via composer.json configuration, so we do not need to be concerned about stability.

I will follow the recommendation of Composer and ignore the vendor directory with the following in .gitignore, which is already added if using the Composer kickstarter project:
/vendor/
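
If stability is the concern, note that the committed composer.lock file already records the exact versions Composer resolved, which is what makes composer install reproducible. You can also pin exact versions in composer.json; a hypothetical fragment (the version numbers are illustrative only):

"require": {
    "drupal/core": "8.6.2",
    "drush/drush": "9.5.2"
}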

My Puppet profile has already taken care of creating the relevant MySQL database and user, so the next step is to run the installer in the browser. As with Drupal 7, since Drupal is not already installed I get automatically redirected to the installer:
https://DRUPAL/core/install.php

"Select an installation profile" - this feels very familiar from D7. I choose the Standard profile, which includes some fairly well used modules relating to administrative tasks, text formats etc. Minimal is a little too stripped back for most typical needs, but great for a barebones install.

I complete the install by heading to:
https://DRUPAL/admin/reports/status
This report will instruct us to tighten up file permissions, define trusted hosts etc. in order to secure our installation.

After completing the config screen I am redirected to the front page and… it looks just like a fresh D7 installation. The admin menu looks a bit different and I see the layout responding to some screen-width breakpoints. But that's enough about the user interface; let's see where the differences are under the bonnet.

File System Layout

The drupal-composer project has setup some directories:
/web - the document root as served by the web server is now a subdirectory of the project root.
/web/core - this is where the core Drupal project files are installed
/web/libraries - libraries that can be shared between core, modules and themes
/web/modules/contrib - modules maintained by the community
/web/profiles/contrib - profiles maintained by the community
/web/themes/contrib - themes maintained by the community
/drush/Commands - command files to extend drush functionality

If we write some code ourselves it should go here:
/web/modules/custom - modules we write ourselves
/web/themes/custom - themes we write ourselves

I can see that module and theme code location differs from the D7 standard of placing it at:
sites/all/modules/[contrib|custom]
sites/all/themes/[contrib|custom]

Composer will install non-Drupal dependencies into:
/vendor

Core system files have moved. Under D7 we had:
/includes - fundamental functionality
/misc - js/css
/modules - core modules
/scripts - utility shell scripts

Under D8 we have:
/web/core/includes
/web/core/misc
/web/core/modules
/web/core/scripts

Overall, this feels very familiar so far. But there are some new directories in D8:
/web/core/assets/vendor - JS and CSS for external libraries such as jQuery

We have YAML-based configuration scripts. The ones for core are here:
/web/core/config/install - only read on installation
/web/core/config/schema - covers data types, menus, entities and more

Modules can follow this same convention, defining both the install and schema YAML scripts.

There is a new directory for classes provided by Drupal core:
/web/core/lib

A functional test suite can be found at:
/web/core/tests

Page Lifecycle

URL routing

As in D7, .htaccess routes requests to index.php. However, there is another script that looks like it could play a part in URL routing:
.ht.router.php

Upon closer inspection though, .ht.router.php is used by PHP's built-in web server and does not typically play a role in URL routing.

Request Handling

A fairly standard Inversion of Control principle is followed, as was the case with Drupal 7 and earlier. Apache routes the request to index.php which orchestrates the loading of the Drupal Kernel and subsequent handling of the request, executing any custom code we may implement at the appropriate point in the request lifecycle.

It is apparent right away that Drupal 8 is object oriented, loading classes from the DrupalKernel and Request namespaces, the latter being part of the Symfony framework.

use Drupal\Core\DrupalKernel;
use Symfony\Component\HttpFoundation\Request;

We won't have to manually include class files, as long as conventions are followed, because there is an autoload script that will scan the vendor directory:
$autoloader = require_once 'autoload.php';

Now it is time to initiate an Xdebug session so I can trace a request from start to finish and see exactly what this new DrupalKernel class does:

$kernel = new DrupalKernel('prod', $autoloader);

The first observation is that an environment parameter is being passed to the DrupalKernel constructor, which indicates that we can have different invocations for dev, staging and prod environments.

The $kernel object is also initialised with an instance of the autoloader, which we can expect to be used frequently in the newly object-oriented D8. The next step is to create a $request object:

$request = Request::createFromGlobals();

This is standard Symfony code, storing the PHP superglobals (GET, POST, cookie, files and server data) on $request. Now the DrupalKernel class handles the request:

$response = $kernel->handle($request);
Within the handle() method we find that site settings are handled slightly differently: the global $conf variable is gone and the variable_get() function is replaced by the Settings class, which exposes a static get() method to retrieve specific settings. But then things start to look familiar, with the inclusion of bootstrap.inc and the setup of some PHP settings. After some OO-based config loading and PageCache handling we come across more familiar territory with the DrupalKernel->preHandle() method, which invokes loadLegacyIncludes() to require_once many .inc scripts, such as common.inc, database.inc etc. This has a very familiar feel to it.
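
As a quick illustration of that change, a sketch using the real hash_salt setting:

// settings.php defines values in the $settings array:
//   $settings['hash_salt'] = '...';
// Code then reads them via the static Settings::get() method:
use Drupal\Core\Site\Settings;

$salt = Settings::get('hash_salt', '');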

Module loading appears to have had an update, with the Drupal\Core\DependencyInjection\Container class being used to load all enabled modules.

Where things have changed significantly is in processing the URI to identify the callback to invoke (the menu_item of D7). The new approach is much more in keeping with object-oriented concepts, favouring an event dispatcher pattern, invoked from within the Symfony HttpKernel class. There are listeners defined in Symfony and Drupal classes handling such tasks as redirecting requests with multiple leading slashes, authenticating the user, etc.

We haven't got as far as looking at modules yet, but it looks like Drupal modules are now able to define listeners for these events. Nice.
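
As a sketch of how that looks (module and class names here are hypothetical), a module registers a tagged service and implements EventSubscriberInterface:

# mymodule.services.yml
services:
  mymodule.request_subscriber:
    class: Drupal\mymodule\EventSubscriber\RequestSubscriber
    tags:
      - { name: event_subscriber }

<?php
// src/EventSubscriber/RequestSubscriber.php
namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class RequestSubscriber implements EventSubscriberInterface {

  // Tell the dispatcher which events we care about.
  public static function getSubscribedEvents() {
    return [KernelEvents::REQUEST => 'onRequest'];
  }

  // Runs for every incoming request.
  public function onRequest(GetResponseEvent $event) {
    // Inspect or act on $event->getRequest() here.
  }

}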

Once the kernel events have all been handled, a $controller object is initialised. Interestingly, before the $controller is used, a controller event is dispatched, giving modules the opportunity to modify or replace the controller being used.

The registered controller is responsible for identifying the class, such as ViewPageController, that will generate the render array that is used for rendering the response body.

An interesting observation, given that this debug session was initiated on a bog-standard Drupal front page where the path is equivalent to /node: the ViewPageController is invoked, and it has code that feels very similar to the very popular Views module from D7, with identifiers such as view_id and display_id. This makes sense, because now that Views is baked into Drupal core in D8 we would expect a page listing multiple nodes to be powered by Views, rather than some case-specific database query.

Response handling has certainly had a refresh, no longer relying on drupal_deliver_html_page() to set the response headers and body, in favour of the Response class from the Symfony framework.

There are lots of areas to look at in more detail, such as how blocks are added to the page render array etc, but from this whirlwind tour of the codebase there are some very nice architectural improvements, while also retaining a high degree of familiarity.

Creating a Theme

Before looking at how theming works in Drupal 8 I need to make a call over whether to use a base theme like Bootstrap, or a much lighter base theme such as Classy, one of the core themes added in D8.

Bootstrap would give more to play with out of the box, but my feeling is that it offers more value if you are building multiple sites, often using the various Bootstrap components, so you enjoy more benefit from reuse, which makes the effort of learning the framework worthwhile.

Since the motivation behind this exercise is to explore the capability of Drupal core in D8, I don't want to muddy the waters by adding a lot of additional functionality from contrib modules and themes unless I really need it. This approach will allow me to learn more quickly where D8 is strong and where it is deficient.

I am going to implement a responsive theme based on the core theme, Classy. For starters I create the theme directory at:
/web/themes/custom/themename

The .info script from D7 is gone and instead I start by creating:
themename.info.yml - configure the base theme, regions etc
themename.libraries.yml - specify CSS and JS scripts to load globally
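
A minimal sketch of those two files, with placeholder theme and region names:

# themename.info.yml
name: Themename
type: theme
description: 'Responsive theme based on Classy.'
core: 8.x
base theme: classy
regions:
  header: Header
  content: Content
  footer: Footer
libraries:
  - themename/global-styling

# themename.libraries.yml
global-styling:
  css:
    theme:
      css/styles.css: {}
  js:
    js/scripts.js: {}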

At this point if I reload /admin/appearance I see my new theme listed as an uninstalled theme. If I go ahead and choose the option to Install and Set as Default, then the next time I load the home page it will be rendered in accordance with the new, empty theme.

By default I see that CSS aggregation is enabled; that is going to hinder my development efforts, so I will turn it off for now.
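
One way to do that without clicking through the performance UI is a config override, assuming a local development settings file is in play:

// settings.local.php - disable CSS/JS aggregation during development
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;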

Having defined the regions for my theme, adding blocks to those regions via the admin UI is very familiar. Altering the page layout is also very similar. Yes, the templating engine has changed, now using Twig, and variable embeds and control statements are noticeably different, using the {{ }} and {% %} markup. But while we are just setting up the HTML there is no significant difference at this stage.

One area I do need to pay attention to is laying out my theme resources. For instance, there is a much stricter convention for CSS scripts, in an attempt to bring more structure and readability; for full details take a look here:
https://www.drupal.org/docs/develop/standards/css/css-architecture-for-drupal-8

Another interesting detail is that the theme will by default look for the logo with an .svg file extension. If you have a reason to use a .png you need to configure this theme setting in themes/custom/themename/config/install/themename.settings.yml:
logo:
  path: 'themes/custom/themename/logo.png'
  use_default: false

Site Building

Past D7 projects have used the Webform module extensively, but upon seeing there is no general release available for D8 I looked at other options. It was only then that I realised that the core Contact module has received a significant upgrade. When coupled with the Contact Storage and Contact Block modules it should make Webform redundant in many, although probably not all, scenarios.

To kick things off I set up the contact form recipient, thank-you message, redirect path and a couple of fields through the core admin UI:
https://DRUPAL/admin/structure/contact

I decided I wanted a contact form embedded in the footer of my page - for now I am going to ignore what that might mean for full page caching in D8, you know, the problem where a cached page contains a form that has expired.

This is the first point at which a new module needs to be installed. Considering that I am adopting Composer for managing dependencies this ought to be a little different to the D7 days. The steps I followed were:

Update composer.json to require the contact_block module:
"require": {
        ….
        ….
        "drupal/contact_block": "^1.4"
}

Ask composer to update dependencies and install the new module:
composer update

As per the "installer-paths" config in composer.json, the contact_block module installed into the appropriate directory:
web/modules/contrib/contact_block
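
For reference, the relevant "installer-paths" fragment of composer.json from the drupal-project template looks something like this (reproduced from memory, so treat it as indicative):

"extra": {
    "installer-paths": {
        "web/core": ["type:drupal-core"],
        "web/libraries/{$name}": ["type:drupal-library"],
        "web/modules/contrib/{$name}": ["type:drupal-module"],
        "web/profiles/contrib/{$name}": ["type:drupal-profile"],
        "web/themes/contrib/{$name}": ["type:drupal-theme"],
        "drush/Commands/{$name}": ["type:drupal-drush"]
    }
}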

Now to install the module. Back in D7 I would use:
drush en contact_block

However, there is a problem:
Command 'drush' not found, did you mean:
command 'rush' from deb rush
Try: sudo apt install <deb name>

Given that our composer.json config does fetch drush, we know it is installed; the issue is that it is not in our PATH. We can confirm that by getting a successful result when we reference the drush binary by its absolute path:
/path/to/drupal/vendor/bin/drush status

The issue is that, in order to avoid package dependency problems when managing the codebase with Composer, it is recommended that drush is installed per project. There is a small program we can install in order to run drush without qualifying its path, which is perfect when combined with drush aliases to specify the Drupal root path:
https://github.com/drush-ops/drush-launcher
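
Installing the launcher is just a case of downloading the phar and putting it on the PATH; something like the following, though check the project README for the current release (the version number here is illustrative):

wget -O drush.phar https://github.com/drush-ops/drush-launcher/releases/download/0.6.0/drush.phar
chmod +x drush.phar
mv drush.phar /usr/local/bin/drush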

As in D7, I can navigate through Admin > Structure > Block, but there are some differences once on the page. I scroll to my footer region and hit the "Place block" button, launching a modal that lists out the available blocks. The one I am interested in is "Contact block" and I press the corresponding "Place block" button.

The next form prompts me to select which contact form I would like to render, along with standard block config such as Content types, Pages and Roles, which has a very familiar feel. After saving the block layout page and reloading the frontend page, the contact block appears as expected.

A cache clear is needed after playing around with the block placement and styling, so as a matter of habit I try:
drush cc all

I am almost surprised to see the accompanying error message:
`cache-clear all` is deprecated for Drupal 8 and later. Please use the `cache-rebuild` command instead.

That will take some getting used to!

I am not a fan of running a local mail server. Not only does it increase the management overhead, but deliverability rates are unlikely to be as high as with a proper mail provider, due to mail server trust scores, whitelists etc. I have had good success on past projects routing all transactional email through SendGrid, and the free tier for low-usage projects is a great way to get started. What is more, there is already a stable contrib module available for D8. As before, I will start by adding the module reference to composer.json:
"require": {
        ….
        ….
        "drupal/sendgrid_integration": "^1.2"
}

Followed by:
composer update --dry-run

All looks good, so I kick off the update:
composer update

Enable the SendGrid module:
drush en sendgrid_integration

The SendGrid module tells us that it depends on this library:
https://github.com/taz77/sendgrid-php-ng

Because dependencies are being managed by Composer I have no work to do here; Composer will take care of fetching that library and installing it into the vendor directory, because it is referenced in the composer.json of the sendgrid_integration module.

All that is left to do is log in to SendGrid, generate an API key and use that API key on the SendGrid config form:
https://DRUPAL/admin/config/services/sendgrid

The SendGrid module also depends on another contrib module, Mail System. This is very similar to D7: I can use the Mail System module to specify that SendGrid should be the default mail sender:
https://DRUPAL/admin/config/system/mailsystem

Now I can fill in the contact form embedded in the page footer and have the results emailed to me by SendGrid. That was a piece of cake.

Database

The first thing I notice when I list the tables in the D8 database is that field handling seems to be different. In D7 the database had field_data_* and field_revision_* tables for every field; these tables contained the field values relating to a particular entity. Those tables are absent, but so too are the tables that stored the field configuration: field_config and field_config_instance.

On closer inspection I can see that field data tables now seem to be entity type specific, for example:
node__field_image
node__field_tags

The field configuration points us towards where D8 has done a lot of work: field configuration data has moved into a general config table. Looking at the names of the config entries, it is apparent that the config table is responsible for storing config data for:
fields
field instances
node types
views
cron
text formats

..and more.

In other words, it looks like anything related to config will appear here, and I expect that this, coupled with the Configuration API, is what will make promoting changes from dev through staging and into production much slicker than in D7.
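
As a quick, hedged illustration, the field-related entries are easy to spot by name. On a standard-profile install I would expect a query like this to return names such as field.storage.node.field_image (the field definition) and field.field.node.article.field_image (the per-bundle instance):

SELECT name FROM config WHERE name LIKE 'field.%';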

The users table is a little cleaner, with user data no longer serialized as one single lump; instead it has shifted to a separate table keyed on uid and module, and is therefore stored in smaller lumps. Other fields that were considered entity properties in D7 and stored directly on the users table - pass, mail, timezone, created etc - have also shifted to a separate table.

Similarly with nodes: properties such as title, uid, created and sticky have shifted to the node_field_data table.

On the whole though, the database feels very similar. Field data is stored in much the same fashion, retaining the bundle, entity_id, delta and one or more value columns.
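
For example, querying the article image field feels just like D7 apart from the table name; a sketch against a standard-profile database:

SELECT entity_id, delta, field_image_target_id
FROM node__field_image
WHERE bundle = 'article';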

Deployment process

I am going to avoid using a managed service such as Pantheon.io or Platform.sh for this project, purely because for the time being I would like to see all of the moving parts, problems and general maintenance tasks that come with a D8 install.

Instead I will use a plain VM. I will not be using one of the public clouds such as Azure, Google or AWS, because this Drupal project is very small and will have stable usage patterns, with no requirement for load balancers or elastic scaling in the near term. With that type of profile, those cloud providers would be a more expensive option than a bog-standard VM from a provider such as Linode.

While writing the last two paragraphs my new VM has been provisioned, complete with an Ubuntu 18.04 LTS image. Fortunately, right at the start of this project I wrote all of the server configuration into Puppet manifests, so configuring this server, covering the entire tech stack and including firewall rules, will be a breeze.

Let's see how straightforward it is:

  1. SSH into new server
  2. Create a directory to hold our DevOps scripts
    1. mkdir /var/DevOps
  3. Create an SSH deploy key for the Git repo - the production environment should only need read access to the Git repo
    1. ssh-keygen -t rsa -b 4096 -C "[email protected]"
  4. Add the deploy key to the repo
    1. If you accepted the defaults from ssh-keygen this will be id_rsa.pub
  5. Clone the DevOps repo
    1. git clone git@bitbucket.org:reponame/devops.git /var/DevOps
  6. Set values in the Puppet config script
  7. Kick off the Puppet agent
  8. Update DNS records to point to the new server

That went without a hitch, all system requirements are now in place.

Puppet has taken care of checking out my Drupal Git repo into the web root, but as discussed earlier, this project does not commit the vendor libraries, contrib modules or core files since Composer is managing these for us. That means the next step is to ask Composer to update all of our dependencies:
composer update --no-dev

The --no-dev option is added to the update command because when deploying into the production environment we do not want development libraries such as PHPUnit present; they could present a security risk and needlessly bloat the size of the deployed code base.

Composer tells us that it has completed the installation of dependencies successfully, but we are not done yet because we don't have a database. Rather than complete the Drupal installation wizard to set up the database, I will promote the database used in the development environment into production. Since the install wizard will not be used and since settings scripts containing database credentials are not committed to Git, a manual update is needed to /web/sites/default/settings.php.

These are the settings that will be required at a minimum:
$settings['hash_salt']
$databases
$settings['trusted_host_patterns']
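
A sketch of those entries in /web/sites/default/settings.php, with placeholder values:

$settings['hash_salt'] = 'RANDOM_HASH_SALT';

$databases['default']['default'] = [
  'database' => 'DB_NAME',
  'username' => 'DB_USER',
  'password' => 'DB_PASS',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
];

$settings['trusted_host_patterns'] = [
  '^www\.example\.com$',
];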

The database export/import is a pretty straightforward task using mysqldump in the dev environment for the export and then the mysql cli in the production environment for the import.

But that approach would not have a great degree of reusability. Instead, I turn to drush.

Drush

Drush has a sql-sync command that lets us specify a source and a target; it will then take care of the relevant mysqldump and mysql commands. For maximum reusability it is possible to configure drush aliases so that all of the relevant server connection details are stored in a config script, resulting in the following simple command:
drush sql-sync @source @target

Up until Drush 8, which maintained support for D7, aliases were defined in a PHP script named according to this convention:
SITENAME.aliases.drushrc.php

But as of Drush 9 this has changed: the format is now YAML and the filename convention is:
SITENAME.site.yml

There are also some changes regarding where we place drush config scripts. The drupal-composer/drupal-project package that was used to build this project creates the following directory, where we can place drush scripts:
/path/to/drupal/drush/sites

However, I quite like the old convention of storing drush alias definitions in ~/.drush, because I tend to have Puppet manifests that configure all of this independently of the Drupal installation. This setup is still possible, but first it is necessary to create the overall config script at ~/.drush/drush.yml, a path that drush will automatically scan. In that script we can specify that it will also be the location of site alias definitions, with something like this:
drush:
  paths:
    alias-path:
      - "/home/USER_NAME/.drush"

Now drush will auto-detect the alias script at ~/.drush/SITE_NAME.site.yml. The alias script specifies the connection properties for each version of the site:
dev:
  options:
    target-command-specific:
      sql-sync:
        enable:
          - stage_file_proxy
  root: /var/www/vhosts/DEV_HOST_NAME/web
  uri: 'https://DEV_HOST_NAME'
prod:
  host: PROD_HOST_NAME
  options: {  }
  root: /var/www/vhosts/PROD_HOST_NAME/web
  uri: 'https://PROD_HOST_NAME'
  user: USER_NAME
  ssh:
    options: '-o PasswordAuthentication=no -i ~/.ssh/id_rsa'

I test if this is working with these commands:
drush cache-rebuild
drush sa

Clearing the cache gives Drush a chance to find the new config scripts. The sa command is the abbreviated site-alias command and should list the aliases defined in the site.yml script. Now check that the dev and prod config looks OK:
drush @SITE_NAME.dev status
drush @SITE_NAME.prod status

At this point I can execute a database sync:
drush sql-sync @SITE_NAME.dev @SITE_NAME.prod

Job done. Now the site loads correctly in the production environment.

Wrap Up

That is all I have time for in this walkthrough, and while there are some areas I would like to come back to, such as custom modules, querying the database, the Entity API, the Form API and configuration management, I have seen enough to confirm that Drupal 8 represents a great architectural step forward while feeling very familiar to those coming from a Drupal 7 background.

Dec 07 2018

By Jesus Manuel Olivas, Head of Products | December 07, 2018

In the first post of this series, “Improving Drupal and Gatsby Integration - The Drupal Modules”, I introduced two contributed modules we wrote to simplify using Drupal with Gatsby. One of the modules mentioned was `tui_editor`, a WYSIWYG markdown editor integration with the Toast UI Editor project. This module allows content editors to enter content as markdown, making it easy to implement JSON API endpoints that return markdown.

In this post, I will show you how to take advantage of that markdown using `gatsby-remark-drupal`, the Gatsby plugin we wrote. This plugin provides markdown preprocessing support for Drupal body fields, which makes it possible to take advantage of Gatsby's `gatsby-transformer-remark` to parse markdown as HTML, `gatsby-remark-images` to process images in markdown so they can be used in the production build, and `gatsby-remark-external-links`, among others.

What does this plugin do?

  • Creates a new `text/markdown` field for the Drupal body fields of the selected content types.
  • Replaces Drupal-relative image paths with the images previously downloaded and cached by the `gatsby-source-drupal` plugin.

Where can you find the plugin?

On npm, as `@weknow/gatsby-remark-drupal`.

How can you install this plugin?

By executing npm from the root of your project:

npm install --save @weknow/gatsby-remark-drupal

How can you configure this plugin?

Like any other Gatsby plugin, you configure it by registering it in your gatsby-config.js file.

By default this plugin processes the page and article content types:

{
  resolve: `gatsby-transformer-remark`,
  options: {
    plugins: [
      `@weknow/gatsby-remark-drupal`,
    ],
  },
},

But you can customize which content types to process:

{
  resolve: `gatsby-transformer-remark`,
  options: {
    plugins: [
      {
        resolve: `@weknow/gatsby-remark-drupal`,
        options: {
          nodes: [`article`, `page`, `landing`, `cta`]
        }
      }
    ],
  },
},

NOTE: To keep this series of publications clear and simple, it will be extended with one or two extra posts beyond the originally planned three, for ease of reading and comprehension of each topic.

Are you as excited as we are about GatsbyJS and this new API-driven approach?

We invite you to check back as this series continues, exploring more tools we are building to contribute back to the Drupal and Gatsby ecosystems that will allow you to implement a Drupal and Gatsby integration without needing to DIY.

Want to learn how to take advantage of these modules?

We can show you how these modules can improve your Drupal and Gatsby integration.

Dec 07 2018

Have you ever heard the one about the web developer who goes in to make one last change to the site at 4:45PM on a Friday afternoon? It is SUCH an easy fix--he can get it done and go home for the weekend with his head held high. Ah, what a relaxing weekend it will be! Cleaning out the gutters, hiking with the kids, and really just taking some "me time". As it turns out, that plug-in update was not well-architected. As a result, it impacted the structure of the site--and now all of the content is right-justified. WHAT JUST HAPPENED? Okay, this is so clearly a sign that this needs to wait until Monday. Let's just roll back to an archived version. Wait a minute, why haven't we archived in six months? 

Now his weekend has to start with a very difficult conversation with his boss, followed by a very difficult conversation with his wife--and the kids for that matter. 

Unfortunately, we've all fallen victim to this trap. Yet, there are many times where we all run the gauntlet of making changes in prod, because small changes take forever to run through the dev environment, test, transfer over, retest, send release notes, etc... 

In short, making the changes in prod is WAY easier. 

Dev Ops as a Service

Confession time: we preach from the gospel of "thou shalt not make changes in prod"; however, we have once or twice also drunk from the chalice of "thy weekend will be spent recovering thouest website". In our nearly two decades in web development, many camping trips and gutter cleanings have been "postponed".

That's why Freelock created an ecosystem that makes sure every website on their platform has been updated, is safe and secure, and can be restored with minimal human interaction. In the above scenario, the so-called "Friday Doomsday Scenario", Freelock's ecosystem would have tested the plug-in upgrade, confirmed it had issues, and not installed the update. Even if the change had been made in prod, one of the many daily backups could have restored the site in a matter of moments--still time to start the weekend.

The Freelock Dev Ops Ecosystem

The question we often get is "how can you make sure that our site is always safe?" Well, we've put a lot of time, effort, and resources into our ecosystem to make sure that all of the backups, upgrades, and changes happen automatically. Our ecosystem has many components that each act as a fail-safe for your company and its web presence.

Staging Environments

Our solution starts with a production environment, a dev environment, and a staging environment. All of these are automatically backed up, and the production environment is checked for changes every night. When there is a change to make to the site, we do it on dev and our bots test it thoroughly before applying it to the live site. We actually do encourage our clients to make CONTENT CHANGES in prod--notice we do not want system or structure changes in prod. Sure, we have a backup if something goes awry, but we would prefer not to have to use those backups. 

Automatic Updates from the Staging Environment

One of the many reasons that developers make changes in prod is that they do not want to make them in dev, only to apply them again in prod (with testing each time)--it's cumbersome. However, our system replicates the changes from dev to the staging environment for a full screenshot test of every important page. While you are making the changes in the dev environment, it feels like you are working in prod.

Visual Regression Testing

Our solution is so robust that we've even got minions who do our testing for us (read: bots). Our solution will evaluate your prod environment against the staging environment and show any differences on a pixel-by-pixel basis. If the staging environment does not match the prod environment, it means we've got a problem. We'll get that fixed right away.

Automatic Release Notes

Our minions also summarize each change (whether it is a patch, a security update, plug-in updates, or whatever) into an email and send the release notes for your review as soon as the tests are all approved.

Backups

Yes, we back up your site daily. If you require more frequent backups than that, we can do that. We also back up to multiple environments.

Updates

Many of our clients are on similar CMS platforms (Drupal or WordPress); when there is an update to one of these platforms, our system automatically updates all of the sites in our management plan. If there are any issues (and there sometimes are), we get information from our system about where things went wrong--and then we roll up our sleeves.

If you want to run your business, instead of your website--why not use the Freelock Dev Ops as a Service so that you can focus on your business?

Dec 06 2018

If I say that in today's booming social era, having a Facebook or Twitter account is not enough of a web presence for your music, you would agree, right?

We all know that having a website for the same purpose not only delivers a professional impact on the audience but also eliminates the restrictions that these external sites impose.

[Image: The Drupal logo wearing headphones]


For instance, suppose you want to put banners on your site to increase song sales, or to provide details about your latest concert to your fans - how do you do it?

Of course, it is only possible if you have your own music website. And there is no better option than Drupal for this task. 

If you are wondering why I am talking specifically about Drupal, it is because Drupal is one of those CMSs that is flexible and has an excellent API framework that can accommodate almost all conceivable functionality.

Choosing Drupal for Your Music Website 

Yes, it is important to unite music composers, i.e. musicians and bands, with their fans and loved ones. The reasons to trust Drupal for this task:

It has responsive web designs 

Your music website will have a variety of audiences using different devices. When switching from a big screen to a small screen, things change: image sizes are adjusted, menu items become drop-downs, and columns are pushed around the page in a way that makes sense, with content being the supreme leader. All of this is easily handled by Drupal.

Drupal 8 ships with responsive web designs that make it "100% good to go" on all devices.

Responsive design is a strategy for optimizing the display of a website, using CSS and JavaScript to respond to different device capabilities.

Every musician wants their work to lead to more word-of-mouth referrals, with new fans added to the list every day. A responsive website leads to a better user experience, with visitors spending more time on your site. Moreover, responsive web design helps end users access menus, links and buttons and fill out forms easily. More users mean more traffic. According to Statista, 52% of web traffic is generated from mobile devices.

52% is more than half of all internet traffic, making responsive websites an important aspect of the whole scenario.
[Image: Short description on the Grammy Awards website]


Multisite Setup 

You might be building a music website for a band, which means each member of the band may require an individual site. This is where Drupal is the best option for you: it provides the ability to run multiple sites from one single Drupal codebase. Each site has a separate database; however, the projects stored in modules, profiles and themes can be used by all the sites.
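
For the curious, the mapping is done with per-site directories and a sites.php file in the codebase, roughly like this (hostnames are placeholders):

// sites/sites.php - map hostnames to site directories
$sites['band1.example.com'] = 'band1';
$sites['band2.example.com'] = 'band2';
// each site then has its own sites/band1/settings.php and database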

Warner Music Group (built on Drupal) is a great example of this. The platform manages and creates the web and mobile sites for more than 300 artists and bands. It serves millions of yearly visitors, promotes news, music releases etc.
 

[Image: Screenshot of the Warner Music Group homepage with a mobile phone beside it]


ABC (Aliases, Blocks and Content types)

Your fans need details to keep up with your band's performances and concerts. This can be done with the help of multiple addresses, or aliases, which give end users an easy way to find a particular page. With modules like Pathauto, Metatag and Redirect you can make your music website search engine friendly.

  • The Pathauto module helps in the automatic generation of URL/path aliases for various kinds of content, without requiring the end user to manually describe the path alias.
  • The Metatag module allows you to automatically provide structured metadata for a website. These meta tags help improve the ranking and display of the site in search engine results.
  • Once your website has a lot of content, you may need to audit it. This can include merging and deleting pages that are no longer in use, and for that you need to create URL redirects. The Redirect module lets you create and manage these redirects using a simple user interface.

A block essentially acts like a container that can hold content, lists of content or images, and can be placed anywhere on the page. For example, if you want visitors to contact you about a collaboration or get information about the latest concerts, the additional field can be constructed with the help of a block.

A content type is a collection of data fields grouped together in a logical set to facilitate content entry and display. Drupal core is preconfigured with two content types, which allow you to create and save multiple posts. For example, your music website might have a "concert" content type that includes individual fields to collect data about dates, venue, price etc. The concert content type could then be used to create hundreds of individual concert records.

Performance and Scalability

Loading time plays a major role in the performance of a website. According to Google developers, 40% of users abandon a website if it takes more than 3 seconds to load, making speed a huge factor in the strength of the website.

When it comes to a music website, this factor matters even more. The visitors to your website need a fast, swift platform that loads pages quickly, and Drupal by default is well suited to this task. Drupal 8 can scale well to millions of users if it is optimized well; a poorly optimized site not only slows down but suffers in overall performance. To improve and maintain site performance, modules like Blazy and Google Analytics are used.

  • The Blazy module provides end users with faster load times and helps save data usage when they are not browsing the whole page.
  • The Google Analytics module adds web statistics used to track and monitor the traffic of a particular website, covering which links on your page visitors follow (downloads, outgoing and mailto links).

Content Editing, Customizing and Managing

The site has to be easy for you to manage. It is important that formatting content and adding images (without seeing any spooky HTML code) on your music website is simple and accessible. Drupal provides modules that allow you to edit your content the way you want. With drag-and-drop and WYSIWYG tools, content authors and non-developers can create and design pages easily. With the evolution of the Drupal platform, the upcoming Layout Builder seems even more promising.

Drupal's upcoming Layout Builder, which should stabilize in Drupal 8.7, is unique in offering a single, powerful visual design tool for three use cases:

  • Use case 1: Layouts for template contents 

For sites that have a significant amount of content, it is important that similar content has the same type of appearance. For instance, if you are selling songs from the same album, a consistent layout gives your fans a consistent experience when browsing the song list, making songs easier to differentiate. For content authors, the templated approach means they don't have to think about the layout of each song.

Drupal 8's new Layout Builder lets you visually construct a layout template that will be used for each item of the same content type (e.g. a "songs layout" for the "song" content type). This is achievable only because the Layout Builder builds on Drupal's powerful "structured content" capabilities.

  • Use case 2: Customizing the layouts  

Suppose you put a video of your jam sessions on your website and later want to pair one band picture with that video; it may not make sense to add an optional "highlighted video" field to every band picture.

The Drupal 8 Layout Builder offers the ability to customize template layouts on a case-by-case basis. In the above example, Layout Builder would let a content creator rearrange the layout for just that one band picture and put the video directly below the image.

  • Use case 3: Custom Pages 

It should be noted that not everything is templated, and content authors often need to create a one-off page such as "About us". The Drupal 8 Layout Builder can be used to create dynamic one-off custom pages: a content author can begin with a blank page and design a layout by adding blocks.

Hosting

The Drupal-based website designed for Lady Gaga witnesses heavy traffic spikes every day, and those bursts have to be handled.

Web hosting provides space that you can buy on a web server to store your website files. When you buy website hosting you are basically renting space on a server where all your files are placed.

Drupal gives you many website hosting options, which are not only safe to use but can also handle your site's traffic. Some of the most popular Drupal-focused web hosts are:

  • SiteGround
  • A2 Hosting
  • GreenGeeks
  • Pantheon

Strong Roadmap

With a variety of modules in Drupal 8, from enhanced media management to an improved out-of-the-box experience, Drupal 8 is getting stronger with each update. It makes sure that the visitors to your website have a good user experience, with no backward compatibility breaks.

With the release of Drupal 8.6, a large number of improvements arrived for content authors, evaluators, site builders, and developers.

For content authors, Drupal added support for remote media types. In other words, you can easily embed YouTube and other videos in your content.

For evaluators, Drupal 8.6 brought an "out-of-the-box experience" that lets them install and test-drive Drupal in minutes. They can install with the help of a demo profile that showcases Drupal's capabilities by presenting a beautiful website filled with content right out of the box. This makes it easy for an end user to download and install a fully functional Drupal demo application within minutes.

For developers, Drupal 8.6 brought an API-first platform with a range of REST improvements.

Drupal 8.7 promises to bring even more sophistication to the Drupal family. It should provide end users with:

  • Stabilized features targeting content authors.
  • The JSON API module (which did not make it into the Drupal 8.6 release), allowing users to rapidly create decoupled applications.
  • Stable multilingual migrations.
  • Continued improvements to the evaluator experience.
  • Iteration towards an entirely new, decoupled administrative experience.

Variety of Modules for a Music Website

The best part about Drupal is that it provides its end users with a variety of modules that help them build their website. Some of the modules you can use for your music website include:

  • AudioField 

The AudioField module enables you to upload or link audio files and play them on your site using one of many audio players, and it supports many audio formats. Currently, the module supports the following players:

  1. Default HTML5 audio player (which is built-in)
  2. Audio.js 
  3. jPlayer
  4. MediaElement 
  5. Projekktor
  6. SoundManager 
  7. wavesurfer.js 
  8. WordPress Audio 

AudioField comes with the basic built-in HTML5 audio player, which requires no additional libraries. It also supports additional audio player libraries, which require installation before use.

  • Audio Embed Field

The Audio Embed Field module creates a simple field type that lets you embed audio from SoundCloud and other custom URLs in a field, integrating with the Media Entity module for Drupal 8. You only need to provide the URL of the audio and the module creates an embedded audio player.

(It should be noted that this module is not covered by Drupal's security advisory policy.)

Go Beatles (Case study)

There is no doubt that the music of the Beatles has made history and been marked as "evergreen". With such a wide fan base, the website was expected to be more engaging. It needed to promote new albums, merchandise, and events in a way that engaged fans.

Drupal was chosen because of its flexibility and excellent API framework.  

Goals of the Project

  • The site needed to be easy for content editors to manage.
  • New articles had to be pushed to Facebook.
  • Existing users had to be able to migrate to the new site, along with their prior profiles and comments.
  • The user experience had to be truly responsive and engaging on all devices.
  • Content pages had to be built from existing collections of text, quotes, images, and audio.
  • With millions of fans, the site experiences heavy traffic every day, so it had to be able to handle large surges of visitors and traffic spikes.

How Drupal Was Used?

In terms of presenting an engaging story, one of the creative inspirations for the whole project was "The Beatles Anthology" book. The key idea was to provide story snippets, images, audio clips and videos of the band that allowed arrangement in different orders. Content types with fields to accommodate image, audio, video and text were created. Every snippet of information was tagged using taxonomies and tile widths. The story node used Entity Reference and Entity Reference View Widget, which allowed editors to sort and filter the vast content.

As regards the theme layer, Drupal contributed greatly to the website. The whole site was responsive and showed off the work done by the editors. The site also demonstrated that Drupal can cope with heavy loads and traffic spikes, with the help of the CDN module, which delivered image and file assets via Amazon CloudFront (the main concern). Optimized, bandwidth-efficient images were also required to deliver a responsive website, so modules like Picture and Breakpoints, along with some heavyweight template.php customization, were used to achieve the desired theme layer.

Another vital requirement was to tie the site to Facebook for logging in, registering, and commenting, in order to promote the site's visibility. Drupal provided several packages that allowed the website to integrate with Facebook at various levels; with modules like Facebook Autopost, Facebook OAuth, and Drupal for Facebook, the integration was possible. The Feeds and SQL Parser modules were also used to query the old database.

Without Drupal and its modules, the project would have taken an incredible amount of time and money.
 

[Image: The Beatles with the flag of the United States of America in the background]


Conclusion 

Yes, it is important for an artist to have a website specifically dedicated to them. Without one, you risk losing out on gigs, promos, press and more. The need for a website that supports your music career and talent is clear.

This is where Drupal emerges as a helping hand. The CMS provides you with a collection of modules that contribute to the construction of your website.

At OpenSense Labs we offer services that not only enhance the performance of your website but also help you get closer to your dreams by embracing innovative technologies that can be implemented on the Drupal framework. Contact us at [email protected] to learn how you can build a Drupal-based music website.

Dec 05 2018
Dec 05

It's never too late to start thinking about user experience design when working on a project. To help ensure the project is a success, it's best to have a UX designer involved in the project as early as possible. However, circumstances may not always be ideal, and User Experience may become an afterthought. Sometimes it isn't until the project is already well on its way when questions around user experience start popping up, and a decision is made to bring in a professional to help craft the necessary solutions. 

What’s the best way for a UX designer to join a project that is well on its way? In this article, we will discuss some actions that UX designers can take to help create a smooth process when joining a project already in progress.

General Onboarding

Planning and implementing an onboarding process can help set the tone for the remainder of the project. If it's disorganized and not well planned out, you can feel underprepared for the first task, which can lead to a longer design process. It's helpful to designate a project team member to help with onboarding. It should be someone who knows the project well and can help answer questions about the project and process. This is usually a product owner or a project manager, but isn't limited to either. If you haven't been assigned someone to help you with the onboarding process, reach out to help identify which team member would be best for this role.

During the onboarding process, discuss what user experience issues the team is hoping to solve, and also review the background of significant decisions that were made. This will help you evaluate the current state of the project as well as the history of the decision-making process. You should also make sure you understand the project goals and the intended audience. Ask for any documentation around usability testing, acceptance criteria, competitive reviews, or notes from meetings that discuss essential features. Don't be afraid to ask questions to help you fully grasp the project itself. And don't forget to ask why. Sometimes entertaining the mindset of a five-year-old when trying to understand will help you find the answers you're seeking.

Process Evaluation

How you climb a mountain is more important than reaching the top. - Yvon Chouinard

Processes help ensure that the project goes smoothly, is on time, and on budget. They can also be a checkpoint for all those involved. If a process doesn't already exist that includes UX design, work with the team to establish one to discuss, track, and review work. If you feel that a process step is missing or a current system isn't working, speak up and work with the team to revise it, making sure to add any critical step the team may be lacking. You also may want to make sure that discussions around any important features include a UX designer. Ask if there are any product meetings that you should be joining to help give input as early as possible.

Schedule Weekly Design Reviews

One example of improving the process to include UX Design is scheduling weekly meetings to review design work that’s in progress. This also gives project members an opportunity to ask questions and discuss upcoming features and acceptance criteria.

Incorporate Usability Testing

Another suggestion is to include usability tests on a few completed important features before moving ahead. The results of the usability tests may help give direction or answer questions the product team has been struggling with. It can also help prioritize upcoming features or feature changes. The most important thing to remember is that usability testing can help improve the product, so it’s tailored to your specific users, and this should be communicated to the project team.

Collect General User Feedback

Establishing early on the best way to collect and give feedback on a design or feature can help streamline the design process. Should it be written feedback? Or would a meeting work better where everyone can speak up? Sometimes, when multiple people are reviewing and giving feedback, it’s best to appoint one person to collect and aggregate the input before it filters down to you.

Track Project Progress

You also want to discuss the best way to track work in progress. If your team is using an agile process, one idea is to include design tickets in the same software that you're using to keep track of sprints, such as Jira or Trello. Discuss the best way to summarize features, add acceptance criteria, and track input in whatever system you decide to use.

Prioritization of Work

Efficiency is doing things right; effectiveness is doing the right things. - Peter Drucker

The team should be clear on priorities when it comes to work, features, and feedback. Joining a team mid-project can be very overwhelming for both designers and stakeholders, and creating clear priorities can help set expectations and make it clear to both sides what the team should focus on first. If a list of priorities doesn't already exist, create one. It doesn't have to be fancy; a simple Excel or Google Sheets document will do. You can create separate priority lists for things like upcoming features that need design, QA, or user feedback. You can also combine everything into a single list if that works better for your team. Just make sure that it links to or includes as much detail as possible. In the example below, a feature that has completed acceptance criteria is linked to a ticket in Jira that explains all of the details.

[Screenshot: a feature priority list in Google Sheets]

It’s also helpful to group related features together, even though they may have different priorities. This will help you think about how to approach a feature without needing to rework it later down the line. Be proactive: ask questions about the priority of items if something doesn't make sense to you. If needed, volunteer to help prioritize features based on what makes sense for a holistic finished product or feature. Creating diagrams and flowcharts can help everyone understand how separate features connect and what makes the most sense to tackle first. Make sure that QA and user feedback are also part of the priority process.

[Image: process flowchart]

Summary

Having any team member join a project mid-process can be intimidating for all parties involved, but it’s important to be open and understanding. Improving the process and the end result is in everyone's interest, and giving and accepting feedback with an open mind can play an important role in ensuring that the project runs smoothly for everyone involved.

For User Experience Designers, it’s important to respect what’s already been accomplished and established, and to tread lightly by making small improvements at first. This will help gain confidence from the team, while also giving you time to learn about the project and understand the decisions that led up to where it is today. For the stakeholders involved, it’s important to listen with an open mind and take a small step back to reevaluate the best way to include UX in the process moving forward. The above suggestions can help both parties understand what actions they can take to make the onboarding process for a UX Designer a smooth transition.

Dec 05 2018
Dec 05

by David Snopek on December 5, 2018 - 1:59pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Less Critical security release for the Password Policy module to fix a Denial of Service (DoS) vulnerability.

The Password Policy module makes it possible to set constraints on user passwords.

The "digit placement" constraint is vulnerable to Denial of Service attacks if an attacker submits specially crafted passwords.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Password Policy module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Dec 05 2018
Dec 05

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Last week, WordPress Tavern picked up my blog post about Drupal 8's upcoming Layout Builder.

While I'm grateful that WordPress Tavern covered Drupal's Layout Builder, it is not surprising that the majority of WordPress Tavern's blog post alludes to the potential challenges with accessibility. After all, Gutenberg's lack of accessibility has been a big topic of debate, and a point of frustration in the WordPress community.

I understand why organizations might be tempted to de-prioritize accessibility. Making a complex web application accessible can be a lot of work, and the pressure to ship early can be high.

In the past, I've been tempted to skip accessibility features myself. I believed that because accessibility features benefited a small group of people only, they could come in a follow-up release.

Today, I've come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the product you use daily is accessible, it means that we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.

As you can see in Drupal's Values and Principles, we are committed to building software that everyone can use. Accessibility should always be a priority. Making capabilities like the Layout Builder accessible is core to Drupal's DNA.

Drupal's Values and Principles translate into our development process, as what we call an accessibility gate, where we set a clearly defined "must-have bar." Prioritizing accessibility also means that we commit to trying to iteratively improve accessibility beyond that minimum over time.

Together with the accessibility maintainers, we jointly agreed that:

  1. Our first priority is WCAG 2.0 AA conformance. This means that in order to be released as a stable system, the Layout Builder must reach Level AA conformance with WCAG. Without WCAG 2.0 AA conformance, we won't release a stable version of Layout Builder.
  2. Our next priority is WCAG 2.1 AA conformance. We're thrilled at the greater inclusion provided by these new guidelines, and will strive to achieve as much of it as we can before release. Because these guidelines are still new (formally approved in June 2018), we won't hold up releasing the stable version of Layout Builder on them, but are committed to implementing them as quickly as we're able to, even if some of the items are after initial release.
  3. While WCAG AAA conformance is not something currently being pursued, there are aspects of AAA that we are discussing adopting in the future. For example, the new 2.1 AAA "Animations from Interactions", which can be framed as an achievable design constraint: anywhere an animation is used, we must ensure designs are understandable/operable for those who cannot or choose not to use animations.

Drupal's commitment to accessibility is one of the things that makes Drupal's upcoming Layout Builder special: it will not only bring tremendous and new capabilities to Drupal, it will also do so without excluding a large portion of current and potential users. We all benefit from that!

Dec 05 2018
Dec 05

The Drupal Association seeks volunteer organizations from Agency and Drupal site owners running production Drupal 8 sites for the creation of an official minor-release beta-testers program.

Since Drupal 8.0's release in November 2015, the Drupal community has successfully transitioned to a scheduled release process whereby two minor releases are made every year.

The most recent of these releases was Drupal 8.6, released in September 2018.

In a significant change from Drupal 7, these minor releases may contain new features while maintaining backwards compatibility. This means that every six months there are new features in Drupal core, instead of waiting for the next major release.

This rapid acceleration in feature development has resulted in the need for greater real-world testing of upgrade paths and backwards compatibility. Drupal core has a vast automated test-suite comprising almost 25,000 tests—however, these can be greatly complemented by real-world testing of production sites. There are an infinite number of ways to put Drupal together that cannot always be handled in automated tests.

In order to improve the reliability of the minor-releases, the Drupal community—in conjunction with the Drupal Association—aims to develop a minor-release beta testers panel comprised of agencies and site-owners who maintain complex Drupal 8 production sites.

Many companies and Drupal users are looking to help with core development but aren't always sure where to start. Membership in this panel presents a new way to help the development of software that powers their website.

Who should apply?

Agencies and site owners who maintain large and complex Drupal 8 production sites. In particular, sites that use a wide range of contributed and custom modules or have large volumes of content.

What is involved?

When the beta release becomes available, the Drupal core committers will work in conjunction with the Drupal Association to contact the members of the beta-testing panel to advise that the next minor release is ready for testing.

Members of the panel will be asked to attempt updating to the beta using a staging version of their site (not straight-on production) and report back any issues found. New issues will be opened to track and resolve reported issues. The community will try to resolve the issue before the release deadline. If an issue cannot be resolved in time for the scheduled release, it will be documented in the release notes, or if it is severe enough, release managers may opt to revert the change that introduced the issue. Participants whose testing participation lapses may be removed from the program.

At the moment, testing of the new release occurs in a largely ad-hoc fashion, but once the program is established, this will become more structured and maintainers will have access to statistics regarding the breadth of testing. This will then inform release management decisions in regards to release preparedness.

What's in it for participants?

  • Updating early helps find issues beforehand, rather than after the release is out.

  • Reporting the issues you encounter lets you tap the wealth of experience of the Drupal core contributors, a level of access you wouldn't have if you updated on your own after the release.

  • All organizations and individuals taking part in the testing will receive issue credits for both testing the update and fixing any issues that arise.

  • Satisfaction in the knowledge that you helped shape the next minor release of Drupal core.

  • Advanced preview of upcoming features in Drupal core.

Apply to participate in the program

Dec 05 2018
Dec 05

The Values & Principles Committee has formed and begun its work, starting with Principle 8.

Principle 8: Every person is welcome; every behavior is not

The Drupal community is a diverse group of people from a wide variety of backgrounds. Supporting diversity, equity, and inclusion is important not just because it is the right thing to do but because it is essential to the health and success of the project. The people who work on the Drupal project should reflect the diversity of people who use and work with the software.  In order to do this, our community needs to build and support local communities and Drupal events in all corners of the world. Prioritizing accessibility and internationalization is an important part of this commitment.

The expectation of the entire Drupal community is to be present and to promote dignity and respect for all community members. People in our community should take responsibility for their words and actions and the impact they have on others.

Our community is, by default, accepting with one exception: we will not accept intolerance. Every person is welcome, but every behavior is not. Our community promotes behaviors and decisions that support diversity, equity, and inclusion and reduce hatred, oppression, and violence. We believe that safety is an important component of dignity and respect, and we encourage behaviors that keep our community members safe. Our Code of Conduct is designed to help communicate these expectations and to help people understand where they can turn for support when needed.

Why are we doing this?

As Dries said, when announcing the first iteration of the Drupal Values & Principles, the Drupal project has had a set of Values & Principles for a very long time. Historically, they were mostly communicated by word of mouth and this meant that some in our community were more aware of them than others.

Writing down the Values & Principles was a great first step. What we need to do now is continually refine the common understanding of these Values & Principles across our whole community and ensure that they are built-in to everything we do.

How will we work?

The Values & Principles are held very closely to the heart of the members of our community and we absolutely recognise that any work on them must be inclusive, clear, structured and accountable.

We are, therefore, going to be open about the work we are doing. While there are members of a committee that will focus on this task, it is not the committee’s job to make decisions “behind closed doors”. Instead, the committee is responsible for enabling the whole community to refine and communicate our common Values & Principles.

We will record actions and progress in the Drupal Governance Project so that all in our community will be able to have the necessary input.

How will we communicate?

We will continue to post updates on the Drupal Community Blog and, as already mentioned, you will always be able to see and, most importantly, participate in issues in the Governance Project. We even have a board on ContribKanban!

Who is on the committee?

Hussain Abbas (hussainweb) works as an Engineering Manager at Axelerant. He started writing programs in 1997 for school competitions and never stopped. His work focus is helping people architect solutions using Drupal and enforcing best practices. He also participates in the local developer community meetup for PHP in general and Drupal in particular. He often speaks at these events and camps in other cities.

Alex Burrows (aburrows), from UK, is the Technical Director of Digidrop and has over 10 years working in Drupal, as well as an avid contributor and a member of the Drupal Community Working Group. As well as this he is a DrupalCamp London Director and Organizer and the author of Drupal 8 Blueprints book.

Jordana Fung (jordana) is a freelance, full-stack Drupal developer from Suriname, a culturally diverse country where the main language is Dutch. She has been steadily increasing her participation in the Drupal community over the past few years and currently has a role on the Drupal Community Working Group. She loves to spend her time learning new things, meeting new people and sharing knowledge and ideas.

Suchi Garg (gargsuchi), living in Melbourne, Australia, is a Tech Lead at Salsa Digital. She has been a part of the Drupal community for more than 12 years as a site builder, developer, contributor, mentor, speaker and trainer. She was part of the Indian community before moving to Australia and is now an active member of the Drupal community down under.

John Kennedy (johnkennedy), lives in Boston, works as a Product Manager for AWS. Over 10 years in Drupal as a site-builder, developer, speaker and on the business side. Co-organiser of Drupal Camp London 2012-2015. PM for Acquia Lightning and the Drupal 8 Module Acceleration Program.

Rachel Lawson (rachel_norfolk), from the UK and the Community Liaison at the Drupal Association, will, finally, be providing logistical support to the committee and helping wherever she can. Having been in the Drupal community for 11 years as a site builder, a contributor and a mentor, she has had the opportunity to experience how the community understands its collective Values & Principles.

In order to be as transparent and forthcoming as possible we wanted to address the fact that there are currently 2 CWG members on the committee. The initial call for people to join the Values & Principles committee happened at the same time as the Community Working Group was calling for new members and, as luck would have it, Alex Burrows applied for both.

In October 2018, Jordana Fung, a current member of the CWG, joined the Values & Principles committee; at the same time, while he was being vetted for potential membership of the CWG, Alex joined the Values & Principles committee as well. After the vetting process, Alex officially became a member of the CWG in November. So, as it stands now, there are 2 CWG members on the V&P committee.

There are a few possible options going forward, some are:

  • Both CWG members continue for now (whilst the V&P committee is in the very early formation stages) and then possibly:
    • One member drops off
    • They act as a team and only one member (whichever is available) participates in meetings
  • The CWG decides which member is on the V&P committee
    • We may need to add another member to the V&P committee to take the place of the member who will no longer attend.

So, what’s next?

We have started by compiling a summary of feedback from the community so far that relates to the project’s Values & Principles from such places as the Whitney Hess Interviews, community-led conversations around governance and some anonymized feedback from the Governance Taskforce. We will be adding this summary to an issue in the project.

Call to action

We recognize, though, that what we really want to understand is how you interpret what we already have written in Principle 8. This is how we intend to do that…

The members of the committee have each written stories from their own memories of the Drupal community that demonstrate Principle 8 in action.

We invite you all to write your own stories, from your memories of the Drupal community, other tech communities or indeed any other aspect of life, that demonstrate Principle 8 to you. You should add your story to this issue we have created:

Add my story about Principle 8

One thing we do ask, though, is that you only add your own stories (as many as you like!) and NOT comment or question others’ stories. All stories are valid.

By the end of the year, we hope to have a rich set of stories that show how we, as a global community, interpret Principle 8 and we can then look to see if any changes need to be made to the words or, maybe, it is more a case of linking the Principle to the stories or providing other statements supporting Principle 8.

Dec 05 2018
Dec 05

After months of reading, experimenting and a lot of coding, I'm happy that the first release candidate of the Drupal IndieWeb module is out. I guess this makes it the perfect time to try it out for yourself, no? There are a lot of concepts within the IndieWeb universe, and many are supported by the module. In fact, there are 8 submodules, so it might be daunting to figure out which ones to enable and what exactly they allow you to do. To kick-start anyone interested, I'll explain in a few steps how you can send a webmention to this page. Can you mention me?

Step 1: enabling modules

After you have downloaded the module and installed the required composer packages, enable the following modules: IndieWeb, Webmention and Microformats2. In case you are not authenticated as user 1, also toggle the following permissions: 'Administer IndieWeb configuration' and 'Send webmention'.

Step 2: expose author information

To discover the author of a website after receiving a webmention, your homepage, or the canonical URL of a post, needs author information. The module comes with an Author block so you can quickly expose a block where you can configure your name. Your real name or nickname is fine, as long as there's something. The minimal markup should look something like this:

<p class="h-card">Your <a class="u-url p-name" rel="me" href="https://example.com">name</a></p>


Note: this can be anywhere in your HTML, even hidden.
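If you would rather output this markup yourself instead of using the module's Author block, a minimal sketch using a render array could look like the following (the URL and name are placeholders, and the variable name is illustrative):

<?php

// Somewhere in a block plugin's build() method; values are illustrative.
$build['author'] = [
  '#type' => 'inline_template',
  '#template' => '<p class="h-card">Your <a class="u-url p-name" rel="me" href="{{ url }}">{{ name }}</a></p>',
  '#context' => [
    'url' => 'https://example.com',
    'name' => 'name',
  ],
];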

Step 3: configure webmention module

All configuration exposed by the modules lives under 'Web services' > 'IndieWeb' at /admin/config/services/indieweb. To configure sending webmentions, go to /admin/config/services/indieweb/webmention/send. Ignore the 'Syndication targets' fieldset, scroll down to 'Custom URL's for content' and toggle the 'Expose textfield' checkbox.

Scroll down a bit more and configure how you want to send webmentions, either by cron or drush (webmentions are stored in a queue first for performance reasons).

Step 4: configure Microformats module

When sending a webmention to me, it would be nice to be able to figure out what exactly your post is. To achieve this, we need to add markup to the HTML by using CSS classes. Let's configure the minimal markup at /admin/config/services/indieweb/microformats by toggling following checkboxes:

  • h-entry on node wrappers
  • e-content on standard body fields. In case your node type does not use the standard body field, enter the field name in the 'e-content on other textarea fields' textarea.
  • dt-published, p-name, u-author and u-url in a hidden span element on nodes.

Now create a post!

Create a post with a title and body. Your body needs to contain a link with a class so that when I receive your webmention, I know that this page is valid. As an example, we're going to write a reply:

Hi swentel! I just read your <a href="https://realize.be/blog/send-me-webmention-drupal" class="u-in-reply-to">article</a> and it's awesome!

Save the post and verify the markup more or less looks like the example underneath. Make sure you see the following classes: h-entry, u-url, p-name, dt-published, e-content, u-author.

<article role="article" class="h-entry node node--type-article node--promoted node--unpublished node--view-mode-full clearfix">
  <header>
     <div class="node__meta">
        <span>
          Published on <span class=" field--label-hidden">Tue, 04/12/2018 - 22:39</span>
        </span>
        <span class="hidden">
          <a href="https://example.com/canonical-url" class="u-url">
            <span class="p-name">Test send!</span>
            <span class="dt-published">2018-12-04T22:39:57+01:00</span>
          </a>
          <a href="https://realize.be/" class="u-author"></a>
        </span>
      </div>
      </header>
  <div class="node__content clearfix">
  <div class="e-content clearfix ">Hi swentel! I just read your <a href="https://realize.be/blog/send-me-webmention-drupal" class="u-in-reply-to">article</a> and it's awesome!</div>
  </div>
</article>

If everything looks fine, go to the node form again. Open the 'Publish to' fieldset, where you can enter 'https://realize.be/blog/send-me-webmention-drupal' in the custom URL textfield. Save again and check the send list at /admin/content/webmention/send-list. It should tell you that there is one item in the queue. As a final step, run cron or the 'indieweb-send-webmentions' drush command. After that, the queue should be empty, one entry will be in the send list, and I should have received your webmention!

Note: You can vary between the 'u-in-reply-to', 'u-like-of' or 'u-repost-of' class. Basically, the class determines your response type. The first class will create a comment on this post. The other two classes will be a mention in the sidebar.

What's next?

Well, a lot of course. But the next step should be receiving webmentions no? If you go to /admin/config/services/indieweb/webmention, you can enable receiving webmentions by using the built-in endpoint. Make sure you expose the link tag so I know where to mention you!

I tried it, and it didn't work!

Maybe I missed something in the tutorial. Or you have found a bug :) Feel free to ping me on irc.freenode.net on #indieweb-dev or #drupal-contribute. You may also open an issue at https://github.com/swentel/indieweb

Send me a webmention with #drupal! 101 setup with the first release candidate of the #indieweb module. Can you mention me? https://realize.be/blog/send-me-webmention-drupal

Dec 04 2018
Dec 04

We are thrilled to have had three of our sessions chosen for DrupalCon Seattle in April 2019. You’ll find us at the booth, in the hallway, and out and about in Seattle, but make sure to visit us in our three Builder Track sessions:

Keep Living the Dream! How to work remotely AND foster a happy, balanced life

Virtual. Remote. Distributed. Pick your label. This style of organization is becoming wildly popular and increasingly in demand among Drupal shops. While many folks have gone remote, some people find the experience quite isolating and disconnected.

In this session we will talk about how to be the best remote employee, as well as provide ideas if you are a leader of a remote team. We will talk about key tactics to keep you (and all other staff) inspired, creative, productive and most importantly, happy!

Presenter: Anne Stefanyk

Beyond the Screen Reader: Humanizing Accessibility

We talk a lot about the basics of accessibility. But what does it really mean to be accessible? How do we ensure we are including everyone and empowering every user in every scenario to use our sites, products, and devices? We think about deaf and blind users, we check contrast for colorblind users. We consider the elderly and sometimes those with dyslexia. Are we including trans folks? Parents? The chronically ill? People with limited literacy? The injured? People in a major emergency? Who are we designing for? What should we be considering?

If you’re wondering how these folks might be affected by accessibility and you want your website to be inclusive for everyone, this is the session for you.

Presenter: Alanna Burke

Deep Cleaning: Creating franchise model efficiencies with Drupal 8

COIT offers cleaning and 24/7 emergency restoration services. Their 100+ locations serve more than 12 million homes & businesses across the United States and Canada.

It had been years since the COIT site had been updated, and it posed a host of technical challenges. Franchise content optimizations resulted in redundant updates for the SEO team. The mobile experience wasn’t optimized for conversions. There was a mountain of custom technical debt. And despite the current content administrative challenges, the localized experience lacked the level of context-awareness that consumers have come to expect. It was time for COIT to clean up its own mess.

In this case study we will cover the more technical parts of this Drupal 8 implementation: how we kept a multinational but distinctly separate brand presence with geolocative features, maintained custom promotions tailored to each franchise location, and kept the existing hard-won SEO and SEM business drivers intact.

Presenters: Anne Stefanyk and Katherine White

Dec 04 2018
Dec 04

Let me take you on a journey. We'll pass by Drupal content renderer services, AJAX commands, javascript libraries and a popular front-end framework. If you've only heard of one or two of those things, come lean on the experience I took diving deep into Drupal. I'm pleased with where my adventure took me to, and maybe what I learned will be useful to you too.

Here's the end result: a contact form, launched from a button link in the site header, with the page beneath obscured by an overlay. The form allows site visitors to get in touch from any page, without leaving what they were looking at.

Contact form showing in a Foundation reveal popup, launched from a button in the header. An overlay obscures the page behind.

Drupal has its own API for making links launch dialogs (leveraging jQuery UI Dialogs). But the front end of our site was built with Foundation, the super-popular theming framework, which provides components of its own that are much better for styling. We often base our bespoke themes on Foundation, and manipulate Drupal to fit.

We had already done some styling of Foundation's Reveal component. In those places, the markup to show in the popup is already in the page, but I didn't really want the form to be in the page until it was needed. Instead, AJAX could fetch it in. So I wondered if I could combine Drupal's AJAX APIs with Foundation's Reveal markup and styling. Come with me down the rabbit hole...

There are quite a few components in making this possible. Here's a diagram:

Components involved in opening a Reveal popup containing the markup from a link's target.

So it comes down to the following parts, which we'll explore together. Wherever custom code is needed, I've posted it in full later in this article.

  • A link that uses AJAX, with a dialog type set in an attribute.
  • Drupal builds the content of the page that was linked to.
  • Drupal's content view subscriber picks up that response and looks for a content renderer service that matches the dialog type.
  • The content renderer returns an AJAX command PHP class in its response, and attaches a javascript library that will contain a javascript AJAX command (a method).
  • That command returns the content to show in the popup, and that javascript method name.
  • The javascript method launches the popup containing the HTML content.

Let's start at the beginning: the link. Drupal's AJAX API for links is pretty neat. We trigger it with two things:

  1. A use-ajax class, which tells it to open that link via an AJAX call, returning just the main page content (e.g. without headers & footers), to be presented in your existing page.
  2. A data-dialog-type attribute, to instruct how that content should be presented. This can be used for the jQuery UI dialogs (written up elsewhere) or the newer off-canvas sidebar, for example.

I wanted to have a go at creating my own 'dialog type', which would be a Foundation Reveal popup. The HTML fetched by the AJAX call would be shown in it. Let's start with the basic markup I wanted my link to have:

<a href="https://www.computerminds.co.uk/contact" class="use-ajax" data-dialog-type="reveal">Enquire</a>

This could either just be part of content, or I could get this into a template using a preprocess function that would build the link. Something like this:

<?php
// $url could have come from $node->toUrl(), Url::fromRoute() or similar.
// For this example, it's come from a contact form entity.
$url->setOption('attributes', [
  'class' => [
    'use-ajax',
  ],
  // This attribute tells it to use our kind of dialog
  'data-dialog-type' => 'reveal',
]);
// The variable 'popup_launcher' is to be used in the template.
$variables['popup_launcher'] = \Drupal\Core\Link::fromTextAndUrl(t('Enquire'), $url);

After much reading around and breakpoint debugging to figure it out, I discovered that dialog types are matched up to content rendering services. So I needed to define a new one of those, which I could base closely on Drupal's own DialogRenderer. Here's the definition from my module's mymodule.services.yml file:

services:
  main_content_renderer.foundation_reveal:
    class: Drupal\mymodule\Render\MainContent\FoundationReveal
    arguments: ['@title_resolver']
    tags:
      - { name: render.main_content_renderer, format: drupal_reveal }

Adding the tag named 'render.main_content_renderer' means my class will be picked up by core's MainContentRenderersPass when building the container. Drupal's MainContentViewSubscriber will then consider it as a service that can render responses.

The 'format' part of the tag needs to be the value that our data-dialog-type attribute has, with (somewhat arbitrarily?) 'drupal_' prepended. The arguments will just be whatever the constructor for the class needs. I often write my class first and then go back to adjust the service definition once I know what it needs. But I'll be a good tour guide and show you things in order, rather than shuttling you backwards and forwards!

Onto that FoundationReveal service class now. I started out with a copy of core's own ModalRenderer which is a simple extension to the DialogRenderer class. Ultimately, that renderer is geared around returning an AJAX command (see the AJAX API documentation), which comes down to specifying a command to invoke in the client-side javascript with some parameters.

I would need my own command, and my FoundationReveal renderer would need to specify that it be used. Only two functional differences were needed in comparison to core's DialogRenderer:

  1. Attach a custom library, which would contain the actual javascript command to be invoked:
$main_content['#attached']['library'][] = 'mymodule/dialog.ajax';
  2. Return an AJAX command class, that will specify that javascript command (rather than the OpenDialogCommand command that DialogRenderer uses) - i.e. adding this to the returned $response:
new OpenFoundationRevealCommand('#mymodule-reveal')

We'll learn about that command class later!

So the renderer file, mymodule/src/Render/MainContent/FoundationReveal.php (in that location in order to match the namespace in the service file definition), looks like this - look out for those two tweaks:

<?php

namespace Drupal\mymodule\Render\MainContent;

use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Render\MainContent\DialogRenderer;
use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\mymodule\Ajax\OpenFoundationRevealCommand;
use Symfony\Component\HttpFoundation\Request;

/**
 * Default main content renderer for foundation reveal requests.
 */
class FoundationReveal extends DialogRenderer {

  /**
   * {@inheritdoc}
   */
  public function renderResponse(array $main_content, Request $request, RouteMatchInterface $route_match) {
    $response = new AjaxResponse();

    // First render the main content, because it might provide a title.
    $content = drupal_render_root($main_content);

    // Attach the library necessary for using the OpenFoundationRevealCommand
    // and set the attachments for this Ajax response.
    $main_content['#attached']['library'][] = 'core/drupal.dialog.ajax';
    $main_content['#attached']['library'][] = 'mymodule/dialog.ajax';
    $response->setAttachments($main_content['#attached']);

    // Determine the title: use the title provided by the main content if any,
    // otherwise get it from the routing information.
    $title = isset($main_content['#title']) ? $main_content['#title'] : $this->titleResolver->getTitle($request, $route_match->getRouteObject());

    // Determine the dialog options and the target for the OpenDialogCommand.
    $options = $request->request->get('dialogOptions', []);

    $response->addCommand(new OpenFoundationRevealCommand('#mymodule-reveal', $title, $content, $options));
    return $response;
  }

}

That AJAX command class, OpenFoundationRevealCommand sits in mymodule/src/Ajax/OpenFoundationRevealCommand.php. Its render() method is the key, it returns the command which will map to a javascript function, and the actual HTML under 'data'. Here's the code:

<?php

namespace Drupal\mymodule\Ajax;

use Drupal\Core\Ajax\OpenDialogCommand;
use Drupal\Core\StringTranslation\StringTranslationTrait;

/**
 * Defines an AJAX command to open certain content in a foundation reveal popup.
 *
 * @ingroup ajax
 */
class OpenFoundationRevealCommand extends OpenDialogCommand {
  use StringTranslationTrait;

  /**
   * Implements \Drupal\Core\Ajax\CommandInterface:render().
   */
  public function render() {
    return [
      'command' => 'openFoundationReveal',
      'selector' => $this->selector,
      'settings' => $this->settings,
      'data' => $this->getRenderedContent(),
      'dialogOptions' => $this->dialogOptions,
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function getRenderedContent() {
    if (empty($this->dialogOptions['title'])) {
      $title = '';
    }
    else {
      $title = '<h2 id="reveal-header">' . $this->dialogOptions['title'] . '</h2>';
    }

    $button = '<button class="close-button" data-close aria-label="' . $this->t('Close') . '" type="button"><span aria-hidden="true">&times;</span></button>';
    return '<div class="reveal-inner clearfix">' . $title . parent::getRenderedContent() . '</div>' . $button;
  }
}

Now, I've mentioned that the command needs to match a javascript function. That means adding some new javascript to the page, which, in Drupal 8, we do by defining a library. My 'mymodule/dialog.ajax' library was attached in the middle of FoundationReveal above. My library file defines what actual javascript file to include - it is mymodule.libraries.yml and looks like this:

dialog.ajax:
  version: VERSION
  js:
    js/dialog.ajax.js: {}
  dependencies:
    - core/drupal.dialog.ajax

Then here's that actual mymodule/js/dialog.ajax.js file. It adds the 'openFoundationReveal' method to the prototype of the globally-accessible Drupal.AjaxCommands. That matches the command name returned by my OpenFoundationRevealCommand::render() method that we saw.

(function ($, Drupal) {
    Drupal.AjaxCommands.prototype.openFoundationReveal = function (ajax, response, status) {
    if (!response.selector) {
      return false;
    }

    // An element matching the selector will be added to the page if it does not exist yet.
    var $dialog = $(response.selector);
    if (!$dialog.length) {
      // Foundation expects certain things on a Reveal container.
      $dialog = $('<div id="' + response.selector.replace(/^#/, '') + '" class="reveal" aria-labelledby="reveal-header"></div>').appendTo('body');
    }

    if (!ajax.wrapper) {
      ajax.wrapper = $dialog.attr('id');
    }

    // Get the markup inserted into the page.
    response.command = 'insert';
    response.method = 'html';
    ajax.commands.insert(ajax, response, status);

    // The content is ready, now open the dialog!
    var popup = new Foundation.Reveal($dialog);
    popup.open();
  };
})(jQuery, Drupal);

There we have it - that last bit of the command opens the Foundation Reveal popup dialog!

I should also add that since I was showing a contact form in the popup, I installed the Contact ajax module. This meant that a site visitor would stay within the popup once they submitted the form, which made for a clean user experience.

Thanks for following along with me!

Dec 04 2018
Dec 04

By Jesus Manuel Olivas, Head of Products | December 04, 2018


At weKnow we are not only using Drupal, we also take contributing back very seriously and now is the time for improving the Drupal and Gatsby integration.

As mentioned in my personal blog post, Moving weKnow's personal blog sites from Drupal to GatsbyJS, we have lately been using Gatsby with Drupal as our decoupling strategy, and after building a few sites with Drupal and Gatsby we found some challenges, which we resolved by writing custom code. Now we’ve decided to share our knowledge as contributed modules.

Toast UI Editor

This module provides a markdown WYSIWYG editor integration for Toast UI Editor.

Solutions this module provides

This module allows content editors to enter content as markdown using a WYSIWYG tool, making it easy to implement JSON-API endpoints that return markdown. By providing markdown to Gatsby, you can take advantage of the gatsby-transformer-remark plugin's features.


Link to the module: https://www.drupal.org/project/tui_editor

Build Hooks

This module triggers a build hook on any service provider that supports build hooks. It can be configured to execute the trigger manually, by clicking a toolbar element, or automatically, via cron or whenever a node is updated.

Solutions this module provides:

  • Deploy site to a PaaS CDN as Netlify.
  • Execute build and deploy the site on demand and/or programmatically after updating data on Drupal.

Link to the module: https://www.drupal.org/project/build_hooks

Are you as excited as we are about GatsbyJS and this new API-driven approach?

We invite you to check back as this series continues, exploring more tools we are building and contributing back to the Drupal and Gatsby ecosystems, tools that will allow you to implement a Drupal and Gatsby integration without needing to DIY.

Want to learn how to take advantage of these modules?

We can show you how these modules can improve your Drupal and Gatsby integration.

Dec 04 2018
Dec 04

In this article I am going to show you a technique I used recently to mock a relatively simple external service for functional tests in Drupal 8.

Imagine the following scenario: you have an API with one or a few endpoints, and you write a service that handles the interaction with it. For example, one of the methods of this service takes an ID and calls the API in order to return the resource for that ID (using the Guzzle service available in Drupal 8). You then cast the Guzzle response stream to a string and return whatever your application needs from there. How can you test your application with these kinds of requirements?

The first thing you can do is unit test your service. In doing so, you can pass to it a mock client that can return whatever you set to it. Guzzle even provides a MockHandler that you can use with the client and specify what you want returned. Fair enough. But what about things like Kernel or Functional tests that need to use your client and make requests to this API? How can you handle this?
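For the unit-test case just mentioned, a minimal sketch of wiring up Guzzle's MockHandler might look like this (the canned JSON body is invented for the example):

<?php

use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;

// Queue a canned response; the client will return it instead of hitting the network.
$mock = new MockHandler([
  new Response(200, ['Content-Type' => 'application/json'], '{"Title":"The Godfather"}'),
]);
$client = new Client(['handler' => HandlerStack::create($mock)]);

// Pass $client to the service under test; its method will decode the canned body.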

It’s not a good idea to use the live API endpoint in your tests for a number of reasons. For example, your testing pipeline would depend on an external, unpredictable service which can go down at any moment. Sure, it’s good to catch when this happens but clearly this is not the way to do it. Or you may have a limited amount of requests you can make to the endpoint. All these test runs will burn through your budget. And let’s not forget you need a network connection to run the tests.

So let’s see an interesting way of doing this using the Guzzle middleware architecture. Before diving into that, however, let’s cover a few theoretical aspects of this process.

Guzzle middlewares

A middleware is a piece of functionality that can be added to the pipeline of a process. For example, the process of turning a request into a response. Check out the StackPHP middlewares for a nice intro to this concept.

In Guzzle, middlewares are used inside the Guzzle handler stack that is responsible for turning a Guzzle request into a response. In this pipeline, middlewares are organised as part of the HandlerStack object which wraps the base handler that does the job, and are used to augment this pipeline. For example, let’s say a Guzzle client uses the base Curl handler to make the request. We can add a middleware to the handler stack to make changes to the outgoing request or to the incoming response. I highly recommend you read the Guzzle documentation on handlers and middlewares for more information.

Guzzle in Drupal 8

Guzzle is the default HTTP client used in Drupal 8 and is exposed as a service (http_client). So whenever we need to make external requests, we just inject that service and we are good to go. This service is instantiated by a ClientFactory that uses the default Guzzle handler stack (with some specific options for Drupal). The handler stack that gets injected into the client is configured by Drupal’s own HandlerStackConfigurator which also registers all the middlewares it finds.
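In its simplest form, that looks something like this (a sketch using the static helper; in real code you would inject the http_client service instead):

<?php

// Fetch a URL with Drupal's default Guzzle client. The URL is illustrative.
$client = \Drupal::httpClient();
$response = $client->get('https://example.com/api/resource');
$body = (string) $response->getBody();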

Middlewares can be defined in Drupal as tagged services, with the tag http_client_middleware. There is currently only one available to look at as an example, used for the testing framework: TestHttpClientMiddleware.

Our OMDb (Open Movie Database) Mock

Now that we have an idea about how Guzzle processes a request, let’s see how we can use this to mock requests made to an example API: OMDb.

The client

Let’s assume a module called omdb which has this simple service that interacts with the OMDb API:

<?php

namespace Drupal\omdb;

use Drupal\Core\Site\Settings;
use Drupal\Core\Url;
use GuzzleHttp\ClientInterface;

/**
 * Client to interact with the OMDb API.
 */
class OmdbClient {

  /**
   * @var \GuzzleHttp\ClientInterface
   */
  protected $client;

  /**
   * Constructor.
   *
   * @param \GuzzleHttp\ClientInterface $client
   */
  public function __construct(ClientInterface $client) {
    $this->client = $client;
  }

  /**
   * Get a movie by ID.
   *
   * @param string $id
   *
   * @return \stdClass
   */
  public function getMovie(string $id) {
    $settings = $this->getSettings();
    $url = Url::fromUri($settings['url'], ['query' => ['apiKey' => $settings['key'], 'i' => $id]]);
    $response = $this->client->get($url->toString());
    return json_decode($response->getBody()->__toString());
  }

  /**
   * Returns the OMDb settings.
   *
   * @return array
   */
  protected function getSettings() {
    return Settings::get('omdb');
  }

}

We inject the http_client (Guzzle) service and have a single method that retrieves a single movie from the API by its ID. Please disregard the complete lack of validation and error handling; I tried to keep things simple and to the point. Note, however, that the API endpoint and key are stored in the settings.php file under the omdb key of $settings, in case you want to play around with this example.
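For reference, a minimal sketch of those settings in settings.php could look like this (the endpoint URL and key are placeholders):

<?php

// In settings.php; replace with your own endpoint and API key.
$settings['omdb'] = [
  'url' => 'http://www.omdbapi.com/',
  'key' => 'your-api-key',
];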

So assuming that we have defined this service inside omdb.services.yml as omdb.client and cleared the cache, we can now use this like so:

$client = \Drupal::service('omdb.client');
$movie = $client->getMovie('tt0068646');

Where $movie would become a stdClass representation of the movie The Godfather from the OMDb.

The mock

Now, let’s assume that we use this client to request movies all over the place in our application and we need to write some Kernel tests that verify that functionality, including the use of this movie data. One option is to switch out our OmdbClient service completely as part of the test, with another one that has the same interface but returns whatever we want. This is ok, but it’s tightly coupled to that test, meaning that we cannot reuse it elsewhere, such as in Behat tests for example.

So let’s explore an alternative way by which we use middlewares to take over any requests made towards the API endpoint and return our own custom responses.

The first thing we need to do is create a test module where our middleware will live. This module will, of course, only be enabled during test runs or any time we want to play around with the mocked data. So the module can be called omdb_tests and we can place it inside the tests/module directory of the omdb module.

Next, inside the namespace of the test module we can create our middleware which looks like this:

<?php

namespace Drupal\omdb_tests;

use Drupal\Core\Site\Settings;
use GuzzleHttp\Promise\FulfilledPromise;
use GuzzleHttp\Psr7\Response;
use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;

/**
 * Guzzle middleware for the OMDb API.
 */
class OmdbMiddleware {

  /**
   * Invoked method that returns a promise.
   */
  public function __invoke() {
    return function ($handler) {
      return function (RequestInterface $request, array $options) use ($handler) {
        $uri = $request->getUri();
        $settings = Settings::get('omdb');

        // API requests to OMDb.
        if ($uri->getScheme() . '://' . $uri->getHost() . $uri->getPath() === $settings['url']) {
          return $this->createPromise($request);
        }

        // Otherwise, no intervention. We defer to the handler stack.
        return $handler($request, $options);
      };
    };
  }


  /**
   * Creates a promise for the OMDb request.
   *
   * @param RequestInterface $request
   *
   * @return \GuzzleHttp\Promise\PromiseInterface
   */
  protected function createPromise(RequestInterface $request) {
    $uri = $request->getUri();
    $params = \GuzzleHttp\Psr7\parse_query($uri->getQuery());
    $id = $params['i'];
    $path = drupal_get_path('module', 'omdb_tests') . '/responses/movies';

    $json = FALSE;
    if (file_exists("$path/$id.json")) {
      $json = file_get_contents("$path/$id.json");
    }

    if ($json === FALSE) {
      $json = file_get_contents("$path/404.json");
    }

    $response = new Response(200, [], $json);
    return new FulfilledPromise($response);
  }

}

Before explaining what all this code does, we need to make sure we register this as a tagged service inside our test module:

services:
  omdb_tests.client_middleware:
    class: Drupal\omdb_tests\OmdbMiddleware
    tags:
      - { name: http_client_middleware }

Guzzle middleware services in Drupal have one single (magic) method called __invoke. This is because the service is treated as a callable. What the middleware needs to do is return a (callable) function which gets as a parameter the next handler from the stack that needs to be called. The returned function then has to return another function that takes the RequestInterface and some options as parameters. At this point, we can modify the request. Lastly, this function needs to make a call to that next handler by passing the RequestInterface and options, which in turn will return a PromiseInterface. Take a look at TestHttpClientMiddleware for an example in which Drupal core tampers with the request headers when Guzzle makes requests during test runs.

So what are we doing here?

We start by defining the first two (callable) functions I mentioned above. In the one which receives the current RequestInterface, we check the URI of the request to see if it matches that of our OMDb endpoint. If it doesn’t, we simply call the next handler in the stack (which should return a PromiseInterface). If we wanted to alter the response that came back from the next handler(s) in the stack, we could call then() on the PromiseInterface returned by the stack and pass it a callback function which receives the ResponseInterface as a parameter. In there we could make the alterations. But alas, we don’t need to do that in our case.
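
Purely as an illustration of that response-altering technique (our mock does not need it), the innermost function could defer to the stack and then modify the response on the way back; the header name here is made up:

return $handler($request, $options)->then(function (ResponseInterface $response) {
  // Tag every response that passes through this middleware untouched.
  return $response->withHeader('X-Omdb-Middleware', 'passthrough');
});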

A promise represents the eventual result of an asynchronous operation. The primary way of interacting with a promise is through its then method, which registers callbacks to receive either a promise's eventual value or the reason why the promise cannot be fulfilled.

Read this for more information on what promises are and how they work.

Now for the good stuff. If the request is made to the OMDb endpoint, we create our own PromiseInterface. And, very importantly, we do not call the next handler, meaning that we break out of the handler stack and skip the other middlewares and the base handler. This way we prevent Guzzle from going to the endpoint and instead have it return our own PromiseInterface.

In this example I decided to store a couple of JSON responses for OMDb movies in files located in the responses/movies folder of the test module. In these JSON files, I store actual JSON responses returned by the endpoint for given IDs, as well as a catch-all for whenever a missing ID is requested. The createPromise() method is responsible for determining which file to load. Depending on your application, you can decide exactly what to base the mocked responses on.

The loaded JSON is then added to a new Response object that can be directly added to the FulfilledPromise object we return. This tells Guzzle that the process is done, the promise has been fulfilled, and there is a response to return. And that is pretty much it.

Considerations

This is a very simple implementation. The API has many other ways of querying for data and you could extend this to also store lists of movies based on a title keyword search, for example. Anything that serves the needs of the application. Moreover, you can dispatch an event and allow other modules to provide their own resources in JSON format for various types of requests. There are quite a lot of possibilities.

Finally, this approach is useful for “simple” APIs such as this one. Once you need to implement things like OAuth, or need the service to call back to your application, more complex mocking will be needed, such as a dedicated library and/or a containerised application that mocks the production one. But for many such cases in which we read data from an endpoint, we can go far with this approach.

Dec 04 2018
Dec 04

The Drupal 8 Rabbit Hole Module allows you to control what happens when someone views an entity page. Before you ask why this might be important, let me clarify: it's not a layout tool. It's a simple module that allows you to easily redirect or prevent users from viewing specific types of entities on your site. If you have content types or other entities that you are using to build out a view or another area of your site, but don't want users to access the original page for that content type or entity, this is the module for you. If you read this entire description and are still confused, I promise it's very simple. Check out the video to find out how it might be useful for you on a future project.

Dec 04 2018
Dec 04

We’re featuring some of the people in the Drupalverse! This Q&A series highlights some of the individuals you could meet at DrupalCon. Every year, DrupalCon is the largest gathering of people who belong to this community. To celebrate and take note of what DrupalCon means to them, we’re featuring an array of perspectives and some fun facts to help you get to know your community.

Dec 03 2018
Dec 03

Matt Glaman, (mglaman) and Bojan Zivanovic, (bojanz) join Mike live from Disney World to talk about decoupling Drupal Commerce as well as the roadmap for Drupal Commerce as a project. We take a quick side trip into some blog posts Matt recently wrote about running all of Drupal core's automated tests in DDEV-Local.

Interview

DrupalEasy News

Upcoming events

Sponsors

  • Drupal Aid - Drupal support and maintenance services. Get unlimited support, monthly maintenance, and unlimited small jobs starting at $99/mo.
  • WebEnabled.com - devPanel.

Follow us on Twitter

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Dec 03 2018
Dec 03

Drupal and Composer - an In-Depth Look

 

As any developer working with Drupal 8 knows, working with Composer has become an integral part of working with Drupal. This can be daunting for those who don't have previous experience working with the command line, and can still be a confusing experience for those who do. This is the second post in an explorative series of blog posts I will be writing on Composer, hopefully clearing up some of the confusion around it. The four blog posts on this topic will be as follows:

  • Part 1: Understanding Composer
  • Part 2: Managing a Drupal 8 site with Composer
  • Part 3: Converting Management of an Existing Drupal 8 Site to Composer (Coming Soon)
  • Part 4: Composer for Drupal Developers (Coming Soon)

This article will be difficult to understand without first understanding the concepts explained in part 1, so if you have not read it, it would probably be worth your while to ensure you understand the concepts outlined in the summary of that article before proceeding with this one.

Managing Drupal sites with Composer

Beginning a New Project

Fortunately a lot of work has been put into creating a Composer base (called a template) for Drupal projects. This Drupal Composer template can be found on Github at: https://github.com/drupal-composer/drupal-project. Instructions on how to use the template can be found there, and the same instructions are found in the README.md file that comes with the project when installed.

Starting a new project with the Drupal Composer template can be done with the following line of code:

composer create-project drupal-composer/drupal-project:8.x-dev some-dir --stability dev --no-interaction

This command does a number of things, many of which will be addressed below.

1) A new Composer project is created

The first thing the installation does is to create the directory specified as some-dir in the command, and initializes a Composer project in that directory. Change this to the appropriate name for your project. This is now your new project. The project contains the composer.json and composer.lock files that will be used to manage the code base of the project.

Note: The command provided lists the --stability as dev. This can be confusing as developers may think this means they are getting the -dev version of Drupal core. Don't worry, the command given above will always install the current full release of Drupal core, whatever it is at the time when the command is run.

2) Library dependencies (as well as their dependencies) are installed

The Composer template has a number of dependencies included by default, some of which we will take a look at here. These libraries are set by default as requirements in composer.json, and therefore are included when running the install command given earlier.

  • Drush: Drush is cool. If you don’t know what it is, it’s worth some Google-fu. Anything to be written about Drush has already been written somewhere else, so check it out - it will be worth your while!
  • Drupal Console: Drupal Console is also really cool. See comments on Drush.
  • Composer Patches: This one is very cool. In my eyes, this library by itself justifies using Composer to manage Drupal projects. Even if Composer had no other benefits, this one would be great. First, an explanation of a patch is necessary. A patch is kind of like a band-aid that can be applied to code. Patches allow developers to submit changes, be they bug fixes or new functionality, to the library maintainer. The library maintainer may or may not add the patch to the source code, but in the meantime, other developers can apply the patch to their own systems, both to test whether it works and to use it if it does. However, when the library the patch has been applied to is updated to a newer version, the patches have to be re-applied. What Composer Patches does is allow developers to track patches applied to the project and have them applied automatically during the update process. This ensures that bugs don't arise from forgetting to re-apply patches after the update. Patches are tracked by adding them to composer.json. Here is an example:
     
  • "extra": {
      "patches": {
        "drupal/core”: {
          “Patch description”: "https://www.drupal.org/files/issues/someissue-1543858-30.patch"
        }
      }
    }
    

    With the above code, the next time composer update drupal/core is run, Composer will attempt to apply the patch found at https://www.drupal.org/files/issues/someissue-1543858-30.patch to Drupal core. Note that the description of the patch, given above as "Patch description", is arbitrary, and should be something descriptive. If there is an issue for the patch, a link to the issue is good to add to the description, so developers can quickly look into the status of the patch, see if any updated patches have been released, and check if the patch has been incorporated into the library, rendering the patch unnecessary.

  • And Others: There are many other libraries that are included as part of the Drupal Composer template, but the truth is that I haven’t looked into them. Also note that Drupal core alone has multiple dependencies which have their own dependencies. At the time of writing, 123 libraries are installed into the project with the install command.

But wait - I don’t use [fill in library here]

The Composer template is just that - a template. Some people don’t use Drush, some don’t use Drupal console. If these are not needed, they can be removed from a project in the same manner as any Composer library. Example:

composer remove drush/drush

The above command will remove Drush from the code managed by the Composer template.

3) The system folder architecture is created

The file system created by the Composer template deserves a close look.

  • /web: The first thing to notice is that Drupal is installed into the /web directory in the root of the project. This means that the root of the project created with the Composer template is one level above the webroot. When configuring the server, the domain for the server will need to point at the /web directory.
  • /config/sync: The Composer template sets up the project to store Drupal 8 configuration .yml files in this folder. When running drush cex sync to export configuration, the entire site configuration will be exported to this folder. This folder is best kept out of the webroot for security purposes.
  • /drush: This folder holds a few Drush specific items in it. If multiple environments exist for your project, Drush aliases can be set in /drush/sites/self.site.yml, allowing for interaction with your various environments from anywhere within the project.
  • /scripts: At the time of writing, this folder contains only a single file, /scripts/composer/ScriptHandler.php. This is a really cool file that contains code run by Composer during various processes.

    The composer.json file in the Drupal Composer template contains the following:
     

    "scripts": {
      "drupal-scaffold": "DrupalComposer\\DrupalScaffold\\Plugin::scaffold",
      "pre-install-cmd": [
        "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
      ],
      "pre-update-cmd": [
        "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
      ],
      "post-install-cmd": [
        "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
      ],
      "post-update-cmd": [
        "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
      ]
    },
    

    The code above executes the stated code on pre-install, pre-update, post-install and post-update. Any time either composer install or composer update is executed on the system, the pre and post hooks for that command are executed, calling the relevant functions above. Developers can create their own pre/post install and update hooks following the examples shown above; a short sketch of a custom hook follows this list.

  • /vendor: The first thing to note is that the location of this folder differs from a vanilla Drupal 8 installation. When installing Drupal manually, the vendor folder is part of the webroot by default. This however could lead to security issues, which is why the Composer template installs it above the webroot. The vendor folder contains most of the libraries that Composer manages for your project. Drupal core, modules, themes and profiles however are saved to other locations (to be discussed in the next section). Everything else is saved to the /vendor folder.
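
As promised in the /scripts item above, here is a minimal sketch of registering a custom hook in composer.json; the MyProject\composer\CustomHandler class and its notifyTeam() method are hypothetical and would need to be autoloadable in your project:

"scripts": {
  "post-update-cmd": [
    "DrupalProject\\composer\\ScriptHandler::createRequiredFiles",
    "MyProject\\composer\\CustomHandler::notifyTeam"
  ]
}

Composer simply calls each listed static method in order every time composer update completes.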

4) Drupal File/Folder Installation Locations are set

As mentioned above, Drupal core is installed into the /web folder. The Composer template also sets up installation locations (directories) for Drupal libraries, modules, themes and profiles so that when Composer installs these, they are put in the appropriate Drupal folder locations. This is the code in composer.json that handles the installation locations:

"extra": {
  "installer-paths": {
    "web/core": ["type:drupal-core"],
    "web/libraries/{$name}": ["type:drupal-library"],
    "web/modules/contrib/{$name}": ["type:drupal-module"],
    "web/profiles/contrib/{$name}": ["type:drupal-profile"],
    "web/themes/contrib/{$name}": ["type:drupal-theme"],
    "drush/contrib/{$name}": ["type:drupal-drush"]
  }
}

The most common library type that a Drupal developer will install will be Drupal modules. So let’s look at the line of code specific to the module installation location:

"web/modules/contrib/{$name}": ["type:drupal-module"],

This line of code says that if the type of Composer library is drupal-module, then install it to the /web/modules/contrib/[MODULE MACHINE NAME] folder.

But how does Composer know that the library being downloaded is of type drupal-module? Well, the key to this is in how Composer libraries are managed in the first place. Throughout this article and the one that precedes it, we have repeatedly looked at the composer.json file that defines this project. Well, every Composer library contains a composer.json file, and every composer.json file that comes packaged with a Drupal module contains a type declaration as follows:

"type": "drupal-module",

When Composer is installing libraries, it looks for a type declaration, and when it finds one, if there is a custom install location set in composer.json for that type, the library is installed to the declared folder. Drupal themes are of type drupal-theme, Drupal profiles are of type drupal-profile, and so on, and they are installed to the folders declared in composer.json.
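
For illustration, a stripped-down composer.json for a hypothetical module might look like the following; the name is made up, and the type declaration is what drives the install location:

{
  "name": "drupal/example_module",
  "type": "drupal-module",
  "description": "A hypothetical module illustrating the type declaration.",
  "license": "GPL-2.0-or-later"
}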

Managing Custom Drupal Code with Composer

Custom code for a project can be managed with Composer as mentioned in Part 1 of this series. Drupal convention generally separates contributed (3rd party) modules and custom modules into separate folders. To install custom code in the locations according to this convention, the following lines should be added to the installer-paths declaration in composer.json:

"web/modules/custom/{$name}": ["type:drupal-custom-module"],
"web/profiles/custom/{$name}": ["type:drupal-custom-profile"],
"web/themes/custom/{$name}": ["type:drupal-custom-theme"],

This code adds three additional library types for custom modules, custom profiles, and custom themes.

Next, you’ll need to add a composer.json file to the custom module/theme/profile. Directions for this can be seen here: https://www.drupal.org/docs/8/creating-custom-modules/add-a-composerjson-file. Note that for the step named Define your module as a PHP package, you should set the type as drupal-custom-[TYPE], where [TYPE] is one of: module, theme, or profile.

Continuing on, make sure the composer.json file containing the type declaration has been pushed to the remote private repository.

The last step is to add your private repository to your project’s composer.json, so that when running composer require my/privatelibrary, Composer knows which repository to look in for the library. Declaring private repositories in composer.json is explained here: https://getcomposer.org/doc/05-repositories.md#using-private-repositories.
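
As a sketch, with a placeholder URL, such a declaration in your project's composer.json might look like this:

"repositories": [
  {
    "type": "vcs",
    "url": "https://git.example.com/my/privatelibrary.git"
  }
]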

With the above steps, when running composer require my/library, Composer will find the private repository declared in composer.json, search that repository for my/library, and download it. The composer.json file in my/library tells Composer that it’s of type drupal-custom-[TYPE], so Composer will install it into the directory specified for Drupal custom [TYPE].

If using Git for version control on your system, you'll probably want to alter the .gitignore file in the Composer project root to ignore the custom folder locations. If you have created a custom module, and will be managing all custom modules with Composer and private repositories, you should probably add the /web/modules/custom folder to .gitignore. If you will be managing some custom modules with Git and not Composer, then you should probably add the custom module you have created to .gitignore as /web/modules/custom/[MODULE NAME].

Managing site settings: settings.php and settings.local.php

This section isn’t actually directly related to Composer and Drupal, but it’s a good step for setting up a project, and we can use the methodology when working with the Composer template and Git.

Each Drupal installation depends on the file settings.php. This file is loaded as part of Drupal’s bootstrap process on every page load. Site-specific settings are added into this file, such as database connection information.

Towards the bottom of settings.php, the following lines can be found:

# if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
#   include $app_root . '/' . $site_path . '/settings.local.php';
# }

These lines are commented out by the # symbol at the start of each line. This means that the code is not executed. If these lines are uncommented, by removing the hash symbol at the start of each line, this code is executed. The code looks for a file named settings.local.php, and if it exists, it includes that file. This means any settings in settings.local.php become part of the bootstrap process and are available to Drupal. After uncommenting the code, it will look like this:

if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}

Why do this? This allows settings to be split into two types: settings that will be the same across all environments (e.g. production, staging, local installations etc.), and local settings that are only relevant to the current Drupal environment in which this file exists. This is done by committing settings.php to Git so it is shared amongst every environment (note: settings.local.php is NOT committed to Git, as it should not be shared). For example, the majority of Drupal installations have a separate database for each environment, meaning that database connection details will be specific to each environment. Therefore, database settings would be put into settings.local.php, as they are not shared between environments. The connection details for a remote API, however, may be the same regardless of environment, and would therefore be put into settings.php. This ensures any developer working on the system has access to the API details.
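
As a minimal sketch of that split (all values below are placeholders, and the API setting name is made up):

// settings.local.php - NOT committed to Git; unique to each environment.
$databases['default']['default'] = [
  'database' => 'drupal',
  'username' => 'drupal',
  'password' => 'secret',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
];

// settings.php - committed to Git; shared by every environment.
$settings['example_api_endpoint'] = 'https://api.example.com/v1';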

After splitting up the settings this way, settings.php is committed to Git so it is tracked and shared between environments. 

The full process is as follows:

  1. Install the Drupal Composer template as described earlier in this article
  2. Uncomment the code in settings.php as explained above
  3. Install Drupal as normal
  4. Run git diff settings.php to see what has been added to settings.php as part of the installation process. Anything that shows up should be added to settings.local.php and removed from settings.php. This will definitely include the database connection, but could include other settings as well.
  5. Edit .gitignore in the Composer project root, and remove this line:
    /web/sites/*/settings.php
    
    You can now add settings for all environments to settings.php and settings specific to the local environment to settings.local.php.

Setting Up the Private Files Directory

After setting up settings.php and settings.local.php as described above, you can now add the following to settings.php:

$settings['file_private_path'] = '../private';

Next, create the folder /private in the root of your Composer project. Then clear the Drupal cache, which will create the file /private/.htaccess. At this point you can add settings.php and the /private folder to Git. Finally, edit .gitignore and add the following:

# Ignore private files

/private/

This sets up the private files directory across all installations, saving developers from having to set it up for each installation. Note that the .gitignore setting will ensure the contents of this folder are ignored by Git, as they should be.

What should I add to Git?

The Drupal Composer template project page states:

You should create a new git repository, and commit all files not excluded by the .gitignore file.

In particular, you will need to ensure you commit composer.json and composer.lock any time composer changes are made.

Is It Safe to Use Composer on a Production Server?

The actual answer to this question is not one I have. I am not a server guy overall, and for that matter, I’m not even that much of a Composer expert. It may be that Composer is entirely safe on a production server, but personally my feeling is that having a program that can write files to the server, pulled from remote servers, opens up a potential security risk, and therefore it’s likely better NOT to have Composer on a production server. This comment may lead you to question why I would have spent so much time writing about Composer if it’s better not to use it in the first place. But not so fast, partner! It really depends on the system architecture and the server setup. Some servers have been set up so that, as part of the deployment process, the codebase is built using Composer, but then served as a read-only file system or a Docker container or through some other process ensuring security. This, however, is a particularly complex server setup. Fortunately there is an alternative for developers who are not working with servers configured in this fashion, which we'll look at next.

Using Composer, without having Composer installed on the production server

There is an in-between solution that allows us to use Composer to manage our projects, even with multiple developers, while not having Composer installed on the production server. In this case, we can use a hybrid of a Composer managed project and the old style of using pure Git for deployment.

First, we need to edit the .gitignore file in the root of our Composer installation. In particular, we need to remove the following code:

# Ignore directories generated by Composer

/drush/contrib/
/vendor/
/web/core/
/web/modules/contrib/
/web/themes/contrib/
/web/profiles/contrib/
/web/libraries/

The above directories are all created by Composer and managed by Composer, which is why they were originally ignored by Git. However, we will not be managing our production server using Composer, and therefore we want to include these folders into the Git repository rather than ignoring them.

After setting up your project, commit the folders listed above to Git. This ensures all the files that are managed by Composer will be part of Git. That way composer install never needs to be run on the production server, since any code that command would download will already be part of Git.

What this means is that any time a developer on the project adds or updates code by running either composer require or composer update, they will need to commit not just the composer.json and composer.lock files, but also the added/updated source files that Composer manages, so that all code will be available on the production server when checking out the code from Git.

Updating Drupal using Composer

In part one of this series, I discussed library versions. I am not going to go deep into how the versioning works internally, but I’ll explain how updating works specific to Drupal core. At the time of writing, the current version of Drupal is 8.6.3. The dependency set in composer.json is for Drupal 8.6.*. The * at the end means that your project depends on any version of Drupal 8.6, so when a patch release comes out for 8.6, for example 8.6.4, Drupal core will be updated to 8.6.4 when composer update drupal/core is run.

However, when Drupal 8.7 is released, it will not be automatically installed, since it does not fit the pattern 8.6.*. To upgrade to Drupal 8.7, the following command is run:

composer require drupal/core:~8.7

The ~8.7 part of the above command tells Composer to use any version of Drupal 8.7. (Note that changing the version constraint is done with composer require rather than composer update, since the constraint stored in composer.json itself needs to change.) This means that in the future, when you run composer update drupal/core, patch releases of the 8.7 branch of Drupal core will be installed.
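
In other words, after the upgrade the require section of your composer.json would contain a constraint along these lines (shown purely for illustration):

"require": {
  "drupal/core": "~8.7"
}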

Summary

In this article, we have gone over how Composer is used with Drupal to make project deployment a little smoother, more stable, and consistent between environments. You should have an understanding of:

  • How to set up a new Drupal project using Composer
  • How the following folders relate to the project:
    • /config
    • /drush
    • /scripts
    • /web
    • /vendor
  • How Composer adds Drupal modules and/or themes
  • Drupal library dependencies
  • Managing custom Drupal code with Composer
  • Which items should be committed to Git
  • How to use Composer to manage Drupal projects where the production server does not have Composer
  • How to update Drupal using Composer

In the next post, coming soon, we'll look at how to convert an existing Drupal project to being managed by Composer.

Dec 03 2018
Dec 03

I used to draw a lot. I never thought of myself as a good artist, but I felt like I had a knack for replicating images, so I turned it into a game. I’d say, “let’s see if I can take this small cell from my favorite comic and blow it up into a larger version of itself.” Take this for example:

Alita as drawn by Amy


It came as no surprise to me when I became obsessed with creating responsively pixel perfect websites that represented our talented design team’s work precisely. There was a problem, however. Up until recently front end developers didn’t have a great way to create responsive grids, so we used the tools that we had readily available to us. For example, if you intend to bake chocolate chip cookies with carob, the result is going to be wildly different than if you used chocolate chips! (Carob is never a replacement for chocolate, I don’t care what you say…)

Let’s review our options in the evolutionary order we received them:

  1. Tables: Tables were originally intended for tabular data created through HTML and nothing more.
  2. Floats: Since the web was inspired by print, floats were created to successfully wrap text around images. Floats used with inline elements, and specified widths and gutters, have been used for years to create grids. In addition, all browsers support floats. The problem, however, is they can be complex, prone to break, and often contain tricky clearing and nth item margin removal between responsive breakpoints. In other words, it’s a lot of work.
  3. Flexbox: As browsers became more supportive of flexbox, many of us got tremendously excited. Flexbox was designed with rows and columns in mind, including creating elements within rows and columns that have equal heights and widths. Vertical centering has never been easier, but that’s a whole other blog post. Flex-wrap made it possible for front end devs to create grids using flexbox, so we kissed floated grids goodbye.
  4. Grid Layout: Lo and behold, a CSS solution made with grid styling in mind - finally! As soon as this became widely browser compatible, I jumped at the opportunity to add it to a brand new project implementation.

First things first… after being part of a handful of new site builds from start to finish, it quickly became apparent how important establishing global site containers was at the foundational level. That’s when I created this visual guide:

Rainbow Guide

  • The Full Screen width is as far as the screen allows. This assumes the site could stretch to infinity, if we had a screen that would allow for that. But we don’t. So, the site boundaries need to be established to something more reasonable.
  • The Site Max Width limits how wide the website itself is willing to stretch. In the instance I’ll be talking about, a width of 3000px was set on the <body> tag and centered using margin-left: auto; and margin-right: auto;. Full-site width elements would live here including but not limited to background images that stretch the full width of the site.
  • The Content Side Padding always exists, regardless of device. It creates left/right spacing in between elements we don’t want touching the very edge of the screen, and the edge of screen itself. An example of this would be text elements.
  • The Grid area is where design-determined columns start and end. Designers typically imagine 12 columns when crafting websites, though the total can vary. Planning the full grid area pixel width with a designer right at the beginning is crucial for creating precisely planned grid walls and perfectly centered elements. Grid Layout also makes sidebars super easy to create and maintain.

Containers

In order to make Grid Layout compatible with IE, it was necessary to add a polyfill to the project. We’re using npm, so a dependency was added like so to the package.json file, and npm update was run to update the package-lock.json file.

Polyfill

Next up, create the grid:

Grid Layout

You’re going to see a lot of Sass variables in these code examples, so this might provide a bit of context. (Note: This is a simplified version. This project in particular has 4 grid columns on mobile and 16 on extra large screens. There were a number of media queries that needed to be taken into account too):

Variables

After creating the global grid, it dawned on me that I could create a visual grid fairly easily so my team and I could check our work against the exact line assignments. The idea came to me after I saw my teammate, Marlene, apply a Neat (of Bourbon and Neat CSS libraries) visual-grid to a previous project. So, I added a bit of markup to the html.html.twig template:

Checker Markup

My tech lead, Maria, loved the idea so she came up with a quick way for us to see the visual grid using a preprocess hook in the .theme file. All the team needed to do was apply ?show-grid=1 to the end of any of the site’s urls to check our work and see if it lined up:

Checker Hook

Of course, it had to inherit the overall .grid properties (code shown above), in addition to containing its own unique styling, in order to superimpose the visual grid over the top of the entire page.

Checker scss

The visual grid looked like this when we were working on our project:

Visual Grid 1

Given that applying CSS Grid Layout to a project was a new experience for me and my team, a couple things needed to happen:

  • Create documentation
  • Meet with the team to explain how I’ve set up the project
  • Encourage innovation and communication
  • Go over the documentation
  • Encourage updates to the documentation when new tools have been created, and report back to the rest of the team

It was important for the team to know my intentions for applying the horizontal containers and the use of CSS Grid Layout, and to admit that this is new territory and I was relying on all of us to be on the same page, applying the tools in the same way, and finding ways to make our jobs easier and more efficient.

Initially, we started using CSS Grid Layout specifically for the layout of an element’s parent wrapper. We are all well versed in Flexbox now, so we applied Flexbox on the child element rows like so:

Visual Grid 2

Later, I discovered a way to generically place child elements into a grid parent without needing to assign each child element a place inside the grid. Check out my Codepen example to see how this might work.

Many of the elements on the site needed grid to be applied first. However, these elements needed to be centered as well, like the elements in the screenshot above. CSS Grid Layout comes with a property, grid-column, that takes two arguments: 1. a start column and 2. an end column. It needed to be IE compatible, so I whipped up this little mixin:

Grid Browser Compliance

It took my team some acclimating to enter start and end column values for every element in every breakpoint necessary. Admittedly, it was a bit of a mind-bender at times. This especially took some getting used to since applying grid to a parent element will render all its children into a single column out of the box. It wasn’t long before this grid centering code appeared in the project, thanks to innovative thinking from Maria, Marlene, and Jules:

Center Columns

With that, a themer simply enters how many columns the centered element is wide and how many columns exist on the grid at that breakpoint. Voila, centered grid elements in no time flat!

I could go on and on, there’s so much to share. Though designs may have adjusted slightly over the course of the project, the layout and overall elements largely stayed the same. My ultimate goal was to create an exact replication of our designer, Vicki’s, beautiful work. But, I couldn’t do this all by myself. I needed my teammates to be on the same page. And, that they were! Here’s an example of a page I did very little work on. This is also an example of using CSS Grid Layout with a sidebar element with a 12-column across element in the middle of listing items that needed to be placed in fewer columns. One of these screenshots is the design and one is the live site. Can you tell which is which?

Screen comparison

This whole experience truly was a team effort. The code snippets I’ve shared with you did not look like that at the beginning. They evolved over time and have grown well beyond the foundation I originally envisioned. To my surprise, my team got so excited about these tools of precision that they helped produce robust mixins, setting global spacing variables. Say our designer wants to change the gutter from 16px to 20px. No problem: change the $grid-gutter variable and all of the elements across the site fall in line. No more changing every spacing instance of every element across the site. Goodbye frustration, hello efficiency and perfection!

So, if you find you’re as obsessed as I am with creating a pixel perfect responsive experience for your websites, or if you simply want to start using CSS Grid Layout with purpose in your next project, please feel free to adapt these tools into your practice.

Thanks for taking this journey with me, and I hope you look forward to the next blog posts of this series where my teammates discuss other problems we solved in this implementation. Isn’t nerding out with other passionate people the best?!

Get In Touch

Questions? Comments? We want to know! Drop us a line and let’s start talking.

Learn More Get In Touch
Dec 01 2018
Dec 01

Having a routine of reading the newspaper early in the morning is an intricate tapestry. We need something more to continue our engagement with a newspaper day after day, as we aren’t mere vessels of information. And that elusive thing is not just the pleasure of words but an engrossing design. So, while the editors make the prose readable, graphic designers work on creating templates that help sustain our interest in the act of reading.

Three task sheets are drawn on a white surface with blue, indigo and green colours


Similarly, site builders and content authors require intuitive tools for developing pages, altering layouts and adding or arranging blocks with live preview. Hence, when it comes to new and refreshing, easy-to-use page builders, an ambitious visual design tool called Layout Builder is on its way to changing things completely.

Layout Initiative: The inception

Layout Builder was born out of the Layout Initiative, which was started with the objectives of:

  • Underlying APIs for supporting layout management to be utilised by core and contributed projects alike.
  • Offering a drag and drop interface for building layouts that can apply both to overall site sections and overridden on separate landing pages.
  • Allowing layouts to be applied to data entry forms and content

Layouts have been a significant tool in component-based theming as they can be utilised to map data to component templates. In Drupal 8.3, the Layout API and the Layout Discovery module were included as an experimental subsystem. They were stabilised in Drupal 8.4.

Layout Builder is planned to be stabilised in Drupal 8.7

An experimental Layout Builder module was then included in Drupal 8.5 and is planned to be stabilised in Drupal 8.7.

Layout Builder: An agile approach to site building

Layout Builder offers the ability to customise the layout of your content where you start by choosing predefined layouts for different sections of the page and then populate those layouts with one or more blocks. That is, it gives you a UI with a visual preview and supports using multiple layouts together. It lets you place blocks in layout regions and create layouts for separate pieces of content.

Layout Builder gives you a UI with a visual preview and supports using multiple layouts together.

[embedded content]


At present, there are some Drupal modules that can provide functionality similar to Layout Builder’s. The combination of Panels and Panelizer can help in developing templated layouts and landing pages. Also, the Paragraphs module can be used for creating custom landing pages.

The greatness of the Layout Builder

Following are some of the reasons that show how great the Layout Builder is:

Drupal’s Layout Builder module in action, with images of a bunch of flowers on top and two puppies in a basket at the bottom. Source: Dries Buytaert’s blog
  • To lay out all the instances of a specific content type, Layout Builder helps in creating layout templates.
  • Layout Builder allows you to customise these layout templates and override them on a case-by-case basis.
  • It can be utilised to create custom pages. You can create custom, one-off landing pages that are not linked to a content type or structured content.
  • The key to stabilise Layout Builder is to make sure that it passes Drupal’s accessibility gate (Level AA conformance with WCAG and ATAG). So, it will ensure web accessibility standards once stabilised in Drupal 8.7.
  • The Menu Item Extras module is used in Drupal 8 for implementing mega menus through additional fields. In Drupal 8.6, all you need to do for creating a mega menu is to enable Layout Builder for the default menu item view display mode.

How to work around with Layout Builder?

Follow these steps to use Layout Builder to configure content types and nodes:

1. In the first step, you start by enabling the Layout Builder module.

Layout Builder Drupal module interface with a checkbox. Source: OSTraining

2. This is followed by content creation. This requires installing the Devel module and enabling its Devel Generate component, which can then be used to generate sample articles.

List of options with checkboxes and scrollable list boxes for creating content while enabling the Layout Builder Drupal module. Source: OSTraining

3. Then, the layout of the article content type is configured.

Edit layout option while enabling the Layout Builder Drupal module, with a box containing instructions and save and cancel options above it. Source: OSTraining

4. Finally, the layout of a single node is configured.

Checkbox for the layout option while configuring the Layout Builder Drupal module. Source: OSTraining

Conclusion

There is no doubt that Drupal’s current site building and content authoring capabilities are great. But Layout Builder will transform the whole experience of creating pages with its simple and intuitive tools.

Opensense Labs has been offering a suite of services to help organisations lead their ambitious plans of digital transformation.

Contact us at [email protected] to dive into the world of Layout Builder and get the best out of it.

Nov 30 2018
Nov 30

NBC Sports Digital partners with Acquia to cover more than 30,000 sporting events every year, including some of the highest trafficked events in the history of the web.

Dries interviews Eric Black from NBC

Many of Acquia's customers have hundreds or even thousands of sites, which vary in terms of scale, functionality, longevity and complexity.

One thing that is unique about Acquia is that we can help organizations scale from small to extremely large, from one site to many, and from coupled to decoupled. This scalability and flexibility allows organizations to standardize on a single web platform. Standardizing on a single web platform not only removes the complexity of having to manage dozens of different technology stacks and teams, but also enables organizations to innovate faster.

Acquia Engage 2018 - NBC Sports

A great example is NBC Sports Digital. Not only does NBC Sports Digital have to manage dozens of sites across 30,000 sporting events each year, but it also has some of the most trafficked sites in the world.

In 2018, Acquia supported NBC Sports Digital as it provided fans with unparalleled coverage of Super Bowl LII, the Pyeongchang Winter Games and the 2018 World Cup. As quoted in NBC Sports' press release, NBC Sports Digital streamed more than 4.37 billion live minutes of video, served 93 million unique users, and delivered 721 million minutes of desktop video streaming. These are some of the highest trafficked events in the history of the web, and I'm very proud that they are powered by Drupal and Acquia.

To learn more about how Acquia helps NBC Sports Digital deliver more than 30,000 sporting events every year, watch my conversation with Eric Black, CTO of NBC Sports Digital, in the video below:

[embedded content]

Not every organization gets to entertain 100 million viewers around the world, but every business has its own World Cup. Whether it's Black Friday, Mother's Day, a new product launch or breaking news, we offer our customers the tools and services necessary to optimize efficiency and provide flexibility at any scale.

Nov 30 2018
Nov 30

Perhaps it is not very surprising that, in an age of austerity and a climate of fear about child abuse, social workers are seeking help from new technology. The Guardian revealed that local authorities in the UK have been using machine learning and predictive modelling to intervene before children are referred to social services. For instance, local councils are building ‘predictive analytics’ systems that leverage a cornucopia of data on hundreds of people to construct computer models in an effort to predict child abuse and intervene before it can happen.

Sticky notes on a wall with one sticky note in focus reading “Run a usability test”


The power of predictive analytics can be harnessed not only for social issues like child abuse but for a superabundance of areas across industries. One such area is web user experience, where the implementation of predictive analytics can be essential. Predictive user experience (UX) can help usher in a plenitude of improvements. But how did predictive analytics come into being?

Destination 1689

Contemporary Analysis states that the history of predictive analytics takes us back to 1689. While the rise of predictive analytics has been attributed to technological advances like Hadoop and MapReduce, it has been in use for centuries.

One of the first applications of predictive analytics can be witnessed in the times when shipping and trade were prominent.

One of the first applications of predictive analytics can be witnessed in the times when shipping and trade were prominent. Lloyd’s of London, one of the first insurance and reinsurance markets, was a catalyst for the distribution of important information required for underwriting. The term underwriting itself was born in the London insurance market: in exchange for a premium, bankers would accept the risk on a given sea voyage and write their names underneath the risk information written on a Lloyd’s slip developed for this purpose.

Lloyd’s coffee house was established in 1689 by Edward Lloyd. He was well-known among the sailors, merchants and ship owners as he shared reliable shipping news which helped in discussing deals including insurance.

Technological advancements in the 20th century and 21st century have given impetus to predictive analytics as can be seen through the following compilation by FICO.

Infographic showing a table with each column showing discoveries in the area of predictive analytics. Source: FICO

Predictive Analytics and User Experience: A Detailed Look

IBM states that predictive analytics brings together advanced analytics capabilities comprising ad-hoc analysis, predictive modelling, data mining, text analytics, optimisation, real-time scoring and machine learning. Enterprises can utilise these tools in order to discover patterns in data and forecast events.

Predictive Analytics is a form of advanced analytics which examines data or content to answer the question “What is going to happen?” or more precisely, “What is likely to happen?”, and is characterized by techniques such as regression analysis, forecasting, multivariate statistics, pattern matching, predictive modelling, and forecasting. - Gartner

A statistical representation of data compiled by Statista delineates that predictive analytics is only going to grow and its market share will keep expanding in the coming years.

A bar graph with blue-coloured vertical bars depicting statistics on the market share of predictive analytics. Predictive analytics revenues/market size worldwide, from 2016 to 2022 (in billion U.S. dollars) | Statista

A Personalisation Pulse Check report from Accenture found that 65% of customers were more likely to shop at a store or online business that sent relevant and personalised promotions. So, instead of merely altering the user interface, applying a predictive analytics algorithm to UX design presents users with relevant information. For instance, a user who has recently bought a costly mobile phone from an e-commerce site might be willing to buy a cover to protect it from dust and scratches. Hence, that user would receive a recommendation to purchase a cover. The e-commerce site might also recommend other accessories like headphones, memory cards or antivirus software.

How does Predictive Analytics Work?

Following are the capabilities of predictive analytics according to a compilation by IBM:

  • Statistical analysis and visualisation: It addresses the overall analytical process including planning, data collection, analysis, reporting and deployment.
  • Predictive modelling: It leverages the power of model-building, evaluation and automation capabilities.
  • Linear regression: Linear regression analysis helps in predicting the value of a variable on the basis of the value of another variable.
  • Logistic regression: Also known as the logit model, it is used for predictive analytics and modelling and is also applied in machine learning. (Both techniques are sketched just after this list.)
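
To make those two regression techniques concrete, here is a minimal sketch of the underlying formulas in LaTeX notation, where y is the predicted value, x the predictor and \beta_0, \beta_1 the fitted coefficients. Linear regression fits a straight line to predict a continuous value:

y = \beta_0 + \beta_1 x

Logistic regression squashes the same linear combination into a probability between 0 and 1:

p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}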

Leveraging Predictive Models in UX Design

Data will drive UX in the future. Patterns derived from data make for a terrific predictive engine, which helps in forecasting a user’s intent by compiling numerous predictors that together influence conversions.

Data will drive the UX in the future

With the help of predictive analytics in UX design, conversion rates can be improved. For instance, recommendation systems leverage data such as consumer interest and purchasing behaviour, which is then run through a predictive model to generate a list of recommended items.

Amazon, the e-commerce giant, utilises an item-item collaborative filtering algorithm for suggesting products. This helps in displaying books to a bookworm and baby toys to a new mother. Quantum Interface, a startup in Austin, Texas, has built a predictive user interface with the help of natural user interface (NUI) principles. This utilises directional vectors - speed, time and angle change - for forecasting a user’s intent.

Implementing Predictive UX with Drupal

Predictive UX adapts content based on a user’s previous choices just like web personalisation does. But predictive UX extracts the power of machine learning and statistical techniques for making informed decisions on the user’s behalf.

While modern technology is oscillating from mobile-first to AI-first, predictive UX is the next huge thing which is going to be a trend-setter. It is meritorious as it helps users reduce the cognitive load because coercing users to make too many decisions will propel them to take the easy way out.

Drupal provides different ways of implementing predictive UX:

Acquia Lift

A drop shaped icon and a rocket passing through it representing Acquia lift logo

Acquia Lift Connector, a Drupal module, offers integration with the Acquia Lift service and an improved user experience for web personalisation, testing and targeting directly on the front end of your website.

It leverages machine learning to automatically recommend content based on what a user is currently looking at or has looked in the past. It has a drag-and-drop feature for developing, previewing and launching personalisations and has a customer data repository for providing a single view of customers.

It has a real-time adaptive targeting feature that refines segments, while A/B testing helps keep users engrossed with the content that resonates.

Apache PredictionIO

Bay Area Drupal Camp 2018 had a session demonstrating how predictive UX helps users decide. It was geared towards both UX designers and developers, and talked about how machine learning powers predictive UX and the ways of implementing it using Drupal.
 

[embedded content]


It exhibited a Drupal 8 site which had a list of restaurants that could be sorted by proximity. That means you can check out the nearest restaurant and order food. When users log in to this site, they see top recommendations customised to them.

There are some interesting things happening behind the scenes to show the recommendations. An API query is sent to the machine learning server which, in return, shows a ranked list of recommendations. When users go to a restaurant and order food, all of that data is sent to the event server through the API, which is how data is collected. Here, the Apache PredictionIO server, an open source machine learning stack, offers simple commands to train and deploy the engine.

Gazing into the future of UX

UX Collective says that the future of UX is effulgent in the coming years. Demand for pixel perfect, usable and delightful UX is sky-high, especially with digital transformation endeavours underway globally. The following graphical representation shows the top design-driven organisations against all of the Standard and Poor’s (S&P) index.

Graphical representation showing a yellow line and a red line to depict the Standard and Poor’s index. Source: Job Trends Report: The Job Market for UX/UI Designers

It further states that UX design will come to involve more formal studies:

  • Study of cognitive neuroscience and human behaviour
  • Study of ethics
  • Artificial Intelligence advances, generated and unsupervised machine learning-based system interactions, predictive UX, personalised robotic services and so on

Conclusion

User experience will always be an integral component of any sector in any industry. While web personalisation is a sure-shot way of improving the digital web experience, disruptive technologies like machine learning take it to another level. Leveraging machine learning algorithms, predictive UX can forecast user choices and help users decide. Implementing predictive UX is a praiseworthy way to offer users an unprecedented digital experience.

When it comes to Drupal development, OpenSense Labs has been steadfast in its objective of embracing innovative technologies that can be implemented with Drupal’s robust framework.

Contact us at [email protected] to implement predictive UX with Drupal.

Nov 28 2018
Nov 28

We're excited to announce that 14 Lullabots will be speaking at DrupalCon Seattle! From presentations to panel discussions, we're looking forward to sharing insights and good conversation with our fellow Drupalers. Get ready for mass Starbucks consumption and the following Lullabot sessions. And yes, we will be hosting a party in case you're wondering. Stay tuned for more details!

Karen Stevenson, Director of Technology

Karen will talk about the challenges of the original Drupal AMP architecture, changes in the new branch, and some big goals for the future of the project.

Zequi Vázquez, Developer

Zequi will explore Drupal Core vulnerabilities, SA-CORE-2014-005 and SA-CORE-2018-7600, by discussing the logic behind them, why they present a big risk to a Drupal site, and how the patches work to prevent a successful exploitation.

Sally Young, Senior Technical Architect (with Matthew Grill, Senior JavaScript Engineer at Acquia & Daniel Wehner, Senior Drupal Developer at Chapter Three)

The conversation around decoupled Drupal has moved past whether or not to decouple and on to common problems and best practices. Sally, Matthew, and Daniel will talk about why the Drupal Admin UI team went with a fully decoupled approach, as well as common approaches to routing, fetching data, managing state with autosave, and some level of extensibility.

Sally Young, Senior Technical Architect (with Lauri Eskola, Software Engineer in OCTO at Acquia; Matthew Grill, Senior JavaScript Engineer at Acquia; & Daniel Wehner, Senior Drupal Developer at Chapter Three)

The Admin UI & JavaScript Modernisation initiative is planning a re-imagined content authoring and site administration experience in Drupal built on modern JavaScript foundations. This session will provide the latest updates and a discussion on what is currently in the works in hopes of getting your valuable feedback.

Greg Dunlap, Senior Digital Strategist

Greg will take you on a tour of the set of tools we use at Lullabot to create predictable and repeatable content inventories and audits for large-scale enterprise websites. You will leave with a powerful toolkit and a deeper understanding of how and why to use it.

Mike Herchel, Senior Front-end Developer

If you're annoyed by slow websites, Mike will take you on a deep dive into modern web performance. During this 90-minute session, you will get hands-on experience identifying and fixing performance bottlenecks in your websites and web apps.

Matt Westgate, CEO & Co-founder

Your DevOps practice is not sustainable if you haven't implemented its culture first. Matt will take you through research conducted on highly effective teams to better understand the importance of culture and give you three steps you can take to create a cultural shift in your DevOps practice. 

April Sides, Developer

Life is too short to work for an employer with whom you do not share common values or who does not fit your needs. April will give you tips and insights on how to evaluate your employer and know when it's time to fire them. She'll also talk about how to evaluate a potential employer and prepare for an interview in a way that helps you find the right match.

Karen Stevenson, Director of Technology; Mike Herchel, Senior Front-end Developer; Wes Ruvalcaba, Senior Front-end Developer; & Ellie Fanning, Head of Marketing

Karen, Mike, Wes, and team built a soon-to-be-launched Drupal 8 version of Lullabot.com as Layout Builder was rolling out in core. With the goal of giving our non-technical Head of Marketing total control of the site, lessons were learned and successes achieved. Find out what those were and also learn about the new contrib module Views Layout they created.

Matthew Tift, Senior Developer

The words "rockstar" and "rock star" show up around 500 times on Drupal.org. Matthew explores how the language we use in the Drupal community affects behavior and how to negotiate these concepts in a skillful and friendly manner.

Helena McCabe, Senior Front-end Developer (with Carie Fisher, Sr. Accessibility Instructor and Dev at Deque)

Helena and Carie will examine how web accessibility affects different personas within the disability community and how you can make your digital efforts more inclusive with these valuable insights.

Marc Drummond, Senior Front-end Developer, & Greg Dunlap, Senior Digital Strategist (with Fatima Sarah Khalid, Mentor at Drupal Diversity & Inclusion Contribution Team; Tara King, Project lead at Drupal Diversity & Inclusion Contribution Team; & Alanna Burke, Drupal Engineer at Kanopi Studios)

Open source has the potential to transform society, but Drupal does not currently represent the diversity of the world around us. These members of the Drupal Diversity & Inclusion (DDI) group will discuss the state of Drupal diversity, why it's important, and updates on their efforts.

Mateu Aguiló Bosch, Senior Developer (with Wim Leers, Principal Software Engineer in OCTO at Acquia & Gabe Sullice, Sr. Software Engineer, Acquia Drupal Acceleration Team at Acquia)

Mateu and his fellow API-first Initiative maintainers will share updates and goals, lessons and challenges, and discuss why they're pushing for inclusion into Drupal core. They give candy to those who participate in the conversation as an added bonus!

Jeff Eaton, Senior Digital Strategist

Personalization has become quite the buzzword, but the reality in the trenches rarely lives up to the promise of well-polished vendor demos. Eaton will help preserve your sanity by guiding you through the steps you should take before launching a personalization initiative or purchasing a shiny new product. 

Also, from our sister company, Drupalize.Me, don't miss this session presented by Joe Shindelar:

Joe will discuss how Gatsby and Drupal work together to build decoupled applications, why Gatsby is great for static sites, and how to handle private content and other personalization within a decoupled application. Find out what possibilities exist and how you can get started.

Photo by Timothy Eberly on Unsplash

Nov 28 2018
Nov 28
Dries interviews Erica Bizzari from Paychex

One trend I've noticed time and time again is that simplicity wins. Today, customers expect every technology they interact with to be both functionally powerful and easy to use.

A great example is Acquia's customer Paychex, whose digital marketing team recently replatformed Paychex.com using Drupal and Acquia. The driving factor was the need for more simplicity.

They completed the replatforming work in under four months and beat the original launch goal by a long shot. By leveraging Drupal's improved content authoring capabilities, Paychex also made it a lot simpler for the marketing team to publish content, which resulted in doubled year-over-year growth in site traffic and leads.

Acquia Engage 2018 - Paychex

If you want to learn more about how Paychex accomplished its ambitious digital and marketing goals, you can watch my Q&A with Erica Bizzari, digital marketing manager at Paychex.

[embedded content]

Nov 28 2018
Nov 28

At Lullabot, we’ve been using GitHub, as well as other project management systems for many years now. We first wrote about managing projects with GitHub back in 2012 when it was still a bit fresh. Many of those guidelines we set forth still apply, but GitHub itself has changed quite a bit since then. One of our favorite additions has been the Projects tab, which gives any repository the ability to organize issues onto boards with columns and provides some basic workflow transitions for tickets. This article will go over one of the ways we’ve been using GitHub Projects for our clients, and set forth some more guidelines that might be useful for your next project.

First, let’s go over a few key components that we’re using for our project organization. Each of these will be explained in more detail below.

  1. Project boards
  2. Epics
  3. Issues
  4. Labels

Project boards

A project board is a collection of issues being worked on during a given time. This time period is typically what is being worked on currently, or coming up in the future. Boards have columns which represent the state of a given issue, such as “To Do”, “Doing”, “Done”, etc.

For our purposes, we’ve created two main project boards:

  1. Epics Board
  2. Development Board

Epics Board

ex: https://github.com/Lullabot/PM-template/projects/1

The purpose of this Project board is to track the Epics, which can be seen as the "parent" issues of a set of related issues (more on Epics below). This gives team members a bird's-eye view of high-level features or bodies of work. For example, you might see something like “Menu System” or “Homepage” on this board and can quickly see that “Menu System” is currently in “Development”, while the “Homepage” is currently in “Discovery”.

The “Epics” board has four main columns, each sorted with the highest priority issues at the top and lower priority issues at the bottom:

  • Upcoming - tracks work that is coming up, and not yet defined.
  • Discovery - tracks work that is in the discovery phase being defined.
  • Development - tracks work that is currently in development.
  • Done - tracks work that is complete. An Epic is considered complete when all of its issues are closed.

Development Board

ex: https://github.com/Lullabot/PM-template/projects/2

The purpose of the Development board is to track the issues which are actionable by developers. This is the day-to-day work of the team and the columns here are typically associated with some state of progression through the board. Issues on this board are things like “Install module X”, “Build Recent Posts View”, and “Theme Social Sharing Component”.

This board has six main columns:

  • To do - issues that are ready to be worked on - developers can assign themselves as needed.
  • In progress - indicates that an issue is being worked on.
  • Peer Review - issue has a pull request and is ready for, or under review by a peer.
  • QA - indicates that peer review is passed and is ready for the PM or QA lead for testing.
  • Stakeholder review - stakeholder should review this issue for final approval before closing.
  • Done - work that is complete.

Epics

An Epic is an issue that can be considered the "parent" issue of a body of work. It will have the "Epic" label on it for identification as an Epic, and a label that corresponds to the name of the Epic (such as "Menu"). Epics list the various issues that comprise the tasks needed to accomplish a body of work. This provides a quick overview of the work in one spot. It's proven very useful when gardening the issue queue or providing stakeholders with an overall status of the body of work.

For instance:

Homepage [Epic]

  • Tasks

    • #4 Build Recent Posts View
    • #5 Theme Social Sharing Component

The Epic should also have any other relevant links. Some typical links you may find in an Epic:

  • Designs
  • Wiki entry
  • Dependencies
  • Architecture documentation
  • Phases

Phases

Depending on timelines and the amount of work, some Epics may require multiple Phases. These Phases are split up into their own Epics and labeled with the particular Phase of the project (like “Phase 1” and “Phase 2”). A Phase typically encompasses a releasable state of work, or generally something that is not going to be broken but may not have all of the desired functionality built. You might build out a menu in Phase 1, and translate that menu in Phase 2.

For instance:

  • Menu Phase 1

    • Labels: [Menu] [Epic] [Phase 1]
    • Tasks

      • Labels: [Menu] [Phase 1]

  • Menu Phase 2

    • Labels: [Menu] [Epic] [Phase 2]
    • Tasks

      • Labels: [Menu] [Phase 2]

  • Menu Phase 3

    • Labels: [Menu] [Epic] [Phase 3]
    • Tasks

      • Labels: [Menu] [Phase 3]

Issues within Phase 3 (for example) will carry the main Epic label "Menu" as well as the phase label "Phase 3" for sorting and identification purposes.

Issues

Issues are the main objects within GitHub that provide the means of describing work and communicating around it. At the lowest level, they provide a description, comments, assignees, labels, projects (a means of placing an issue on a project board) and milestones (a means of grouping issues by release target date).

Many times these issues are linked directly from a pull request that addresses them. When the issue is mentioned with a pound (#) sign, GitHub automatically turns the text into a link and adds a metadata item on the issue deep linking to the pull request. This makes it easy to trace which changes were made for the original request, and to get back to the source of the request from the code.

For our purposes, we have two "types" of issues: Epics or Tasks. As described above, Epics have the "Epic" label, while all others have a label for the Epic to which it belongs. If an issue does not have a value in the "Project" field, then it does not show up on a project board and is considered to be in the Backlog and not ready for work.

Labels

Labels are a means of having a taxonomy for issues.

We currently have seven main uses for labels:

  1. Epic - this indicates the issue is an Epic and will house information related to the body of work.
  2. [name of epic] (ex: Menu) - indicates that this is a task that is related to the Menu epic. If combined with the Epic label, it is the Menu Epic.
  3. [phase] (ex: Phase 1) - indicates this is part of a particular phase of work. If there is no phase label, it's considered to be part of Phase 1.
  4. bug - indicates that this task is a defect that was found and separated from the issue in which it was identified.
  5. Blocked - Indicates this issue is blocked by something. The Blocker should be called out in the issue description.
  6. Blocker - indicates that this issue is blocking something.
  7. front-end - indicates that an issue has the underlying back-end work completed and is ready for a front-end developer to begin working on it.

There are other labels that are sometimes used to indicate various metadata, such as "enhancement", "design", or "Parking Lot". There are no set rules about how to use these sorts of labels, and you can create them as you see fit if you think they are useful to the team. Be warned, though: if you include too many labels they will become useless. Teams will generally only use labels if they are frictionless and helpful. The moment they become overwhelming, duplicative, or unclear, the team will generally abandon good label hygiene.
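If you want to reproduce a label set like this across repositories, it is scriptable through GitHub's REST API; a sketch where the token, repository, and colour are placeholders:

```bash
# Create a standard label set on a repository via the GitHub REST API
for LABEL in "Epic" "Phase 1" "bug" "Blocked" "Blocker" "front-end"; do
  curl -s -X POST \
    -H "Authorization: token $GITHUB_TOKEN" \
    -H "Accept: application/vnd.github.v3+json" \
    https://api.github.com/repos/your-org/your-repo/labels \
    -d "{\"name\": \"$LABEL\", \"color\": \"5319e7\"}"
done
```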

These are just some guidelines we consider when organizing a project with GitHub. The tools themselves are flexible and can take whatever form you choose. This is just one recommendation which is working pretty well for us on one of our projects, but the biggest takeaway is that the setup is versatile and can be adapted to whatever your situation may require.

How have you been organizing projects in GitHub? We’d love to hear about your experiences in the comments below!

Nov 28 2018
Nov 28

I recently worked with the Mass.gov team to transition its development environment from Vagrant to Docker. We went with “vanilla Docker,” as opposed to one of the fine tools like DDev, Drupal VM, Docker4Drupal, etc. We are thankful to those teams for educating and showing us how to do Docker right. A big benefit of vanilla Docker is that skills learned there are generally applicable to any stack, not just LAMP+Drupal. We are super happy with how this environment turned out. We are especially proud of our MySQL Content Sync image — read on for details!

Pretty docks at Boston Harbor. Photo credit.

Docker compose

The heart of our environment is the docker-compose.yml. Read on for a discussion of it.
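The full file is long, so here is a trimmed sketch of its shape; service names, image names, and flags are illustrative, not the actual Mass.gov configuration:

```yaml
version: '3.4'

services:
  mysql:
    # hypothetical name for the nightly content-sync image described below
    image: massgov/mysql-sanitized:latest

  drupal:
    build: .
    depends_on:
      - mysql
    volumes:
      # VOLUME_FLAGS comes from each developer's .env file
      - .:/var/www/html:${VOLUME_FLAGS}
    secrets:
      - mass_id_rsa

  traefik:
    image: traefik:1.7
    command: --docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

secrets:
  mass_id_rsa:
    # path supplied via .env (see .env.example below)
    file: ${PRIVATE_KEY}
```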

Developers use .env files to customize aspects of their containers (e.g. VOLUME_FLAGS, PRIVATE_KEY, etc.). This built-in feature of Docker is very convenient. See our .env.example file:
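Its contents are roughly along these lines (values illustrative; XDEBUG_ENABLE is discussed further down):

```
# Copy to .env and adjust per developer
VOLUME_FLAGS=cached        # volume mount flags, e.g. cached/delegated on macOS
PRIVATE_KEY=~/.ssh/id_rsa  # SSH key shared into the Drupal container
XDEBUG_ENABLE=false        # set to true to turn on Xdebug
```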

MySQL content sync image

The most innovative part of our stack is the mysql container. The Mass.gov Drupal database is gigantic. We have tens of thousands of nodes and 500,000 revisions, each with an unholy number of paragraphs, reference fields, etc. Developers used to drush sql:sync the database from Prod as needed. The transfer and import took many minutes, and had some security risk in the event that sanitization failed on the developer’s machine. The question soon became, “how can we distribute a mysql database that’s already imported and sanitized?” It turns out that Docker is a great way to do just this.

Today, our mysql container builds on CircleCI every night. The build fetches, imports, and sanitizes our Prod database. Next, the build does:
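In rough terms, the remaining step is a snapshot and a push; container and image names here are hypothetical:

```bash
# Snapshot the container whose datadir now holds the sanitized Prod database,
# then ship the result to the private registry.
docker commit mysql_primed massgov/mysql-sanitized:latest
docker push massgov/mysql-sanitized:latest
```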

That is, we commit and push the refreshed image to a private repository on Docker Cloud. Our mysql image is 9GB uncompressed but thanks to Docker, it compresses to 1GB. This image is really convenient to use. Developers fetch a newer image with docker-compose pull mysql. Developers can work on a PR and then when switching to a new PR, do a simple ahoy down && ahoy up. This quickly restores the local Drupal database to a pristine state.

In order for this to work, you have to store MySQL data *inside* the container, instead of using a Docker Volume. Here is the Dockerfile for the mysql image.
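In spirit it does something like the following; the base image and paths are illustrative. The key is that the datadir must sit on a path that is not declared as a VOLUME in the base image, so the imported data lands in committable image layers:

```dockerfile
FROM percona:5.6
# The stock image declares VOLUME /var/lib/mysql, and volume contents are never
# committed into image layers; so point mysqld at a non-volume path instead.
RUN mkdir -p /var/lib/mysql-baked && chown -R mysql:mysql /var/lib/mysql-baked
CMD ["mysqld", "--datadir=/var/lib/mysql-baked"]
```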

Drupal image

Our Drupal container is open source — you can see exactly how it’s built. We start from the official PHP image, then add PHP extensions, Apache config, etc.

An interesting innovation in this container is the use of Docker Secrets in order to safely share an SSH key from host to the container. See this answer and mass_id_rsa in the docker-compose.yml above. Also note the two files below which are mounted into the container:

  • Configure SSH to use the secrets file as private key
  • Automatically run ssh-add when logging into the container

Traefik

Traefik is a “cloud edge router” that integrates really well with docker-compose. Just add one or two labels to a service and its web site is served through Traefik. We use Traefik to provide nice local URLs for each of our services (www.mass.local, portainer.mass.local, mailhog.mass.local, …). Without Traefik, all these services would usually live at the same URL with differing ports.
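For example, exposing MailHog through Traefik 1.x takes just a couple of labels on the service (a sketch, reusing the local domain above):

```yaml
  mailhog:
    image: mailhog/mailhog
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:mailhog.mass.local"
      - "traefik.port=8025"  # MailHog's web UI port
```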

In the future, we hope to upgrade our local sites to SSL. Traefik makes this easy as it can terminate SSL. No web server fiddling required.

Ahoy aliases

Our repository features a .ahoy.yml file that defines helpful aliases (see below). In order to use these aliases, developers download Ahoy to their host machine. This helps us match one of the main attractions of tools like DDev/Lando — their brief and useful CLI commands. Ahoy is a convenience feature and developers who prefer to use docker-compose (or their own bash aliases) are free to do so.
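A few representative entries from such a file, sketched from the commands mentioned in this post (exact aliases may differ):

```yaml
ahoyapi: v2
commands:
  up:
    usage: Build and start the containers.
    cmd: docker-compose up -d
  down:
    usage: Stop and remove the containers.
    cmd: docker-compose down
  drush:
    usage: Run Drush inside the Drupal container.
    cmd: docker-compose exec drupal drush "$@"
```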

Bells and whistles

Our development environment comes with 3 fine extras:

  • Blackfire is ready to go — just run ahoy blackfire [URL|DrushCommand] and you’ll get back a URL for the profiling report
  • Xdebug is easily enabled by setting the XDEBUG_ENABLE environment variable in a developer’s .env file. Once that’s in place, the PHP in the container will automatically connect to the host’s PHPStorm or other Xdebug client
  • A chrome-headless container is used by our suite which incorporates Drupal Test Traits — a new open source project we published. We will blog about DTT soon

Wish list

Of course, we are never satisfied. Here are a couple issues to tackle:

Nov 28 2018
Nov 28

Helping content creators make data-driven decisions with custom data dashboards

Our analytics dashboards help Mass.gov content authors make data-driven decisions to improve their content. All content has a purpose, and these tools help make sure each page on Mass.gov fulfills its purpose.

Before the dashboards were developed, performance data was scattered among multiple tools and databases, including Google Analytics, Siteimprove, and Superset. These required additional logins, permissions, and advanced understanding of how to interpret what you were seeing. Our dashboards take all of this data and compile it into something that’s focused and easy to understand.

We made the decision to embed dashboards directly into our content management system (CMS), so authors can simply click a tab when they’re editing content.

GIF showing how a content author navigates to the analytics dashboard in the Mass.gov CMS.

How we got here

The content performance team spent more than 8 months diving into web data and analytics to develop and test data-driven indicators. Over the testing period, we looked at a dozen different indicators, from pageviews and exit rates to scroll-depth and reading grade levels. We tested as many potential indicators as we could to see what was most useful. Fortunately, our data team helped us content folks through the process and provided valuable insight.

Love data? Check out our 2017 data and machine learning recap.

We chose a sample set of more than 100 of the most visited pages on Mass.gov. We made predictions about what certain indicators said about performance, and then made content changes to see how it impacted data related to each indicator.

We reached out to 5 partner agencies to help us validate the indicators we thought would be effective. These partners worked to implement our suggestions and we monitored how these changes affected the indicators. This led us to discover the nuances of creating a custom, yet scalable, scoring system.

Line chart showing test results validating user feedback data as a performance indicator.

For example, we learned that a number of indicators we were testing behaved differently depending on the type of page we were analyzing. It’s easy to tell if somebody completed the desired action on a transactional page by tracking their click to an off-site application. It’s much more difficult to know if a user got the information they were looking for when there’s no action to take. This is why we’re planning to continually explore, iterate on, and test indicators until we find the right recipe.

How the dashboards work

Using the strategies developed with our partners, we watched and, over time, saw the metrics move. At that point, we knew we had a formula that would work.

We rolled indicators up into 4 simple categories:

  • Findability — Is it easy for users to find a page?
  • Outcomes — If the page is transactional, are users taking the intended action? If the page is focused on directing users to other pages, are they following the right links?
  • Content quality — Does the page have any broken links? Is the content written at an appropriate reading level?
  • User satisfaction — How many people didn’t find what they were looking for?

Screenshot of dashboard results as they appear in the Mass.gov CMS.

Each category receives a score on a scale of 0–4. These scores are then averaged to produce an overall score. Scoring a 4 means a page is checking all the boxes and performing as expected, while a 0 means there are some improvements to be made to increase the page’s overall performance.
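In other words, with hypothetical category scores of 4, 3, 2, and 3:

$$\text{overall} = \frac{S_{\text{findability}} + S_{\text{outcomes}} + S_{\text{quality}} + S_{\text{satisfaction}}}{4} = \frac{4 + 3 + 2 + 3}{4} = 3$$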

All dashboards include general recommendations on how authors can improve pages by category. If these suggestions aren’t enough to produce the boost they were looking for, authors can meet with a content strategist from Digital Services to dive deeper into their content and create a more nuanced strategy.

GIF showing how a user navigates to the “Improve Your Content” tab in a Mass.gov analytics dashboard.

Looking ahead

We realize we can’t totally measure everything through quantitative data, so these scores aren’t the be-all, end-all when it comes to measuring content performance. We’re a long way off from automating the work a good editor or content strategist can do.

Also, it’s important to note these dashboards are still in the beta phase. We’re fortunate to work with partner organizations who understand the bumps in the proverbial development road. There are bugs to work out and usability enhancements to make. As we learn more, we’ll continue to refine them. We plan to add dashboards to more content types each quarter, eventually offering a dashboard and specific recommendations for the 20+ content types in our CMS.

Nov 28 2018
Nov 28

Variety of content and the need for empathy drive our effort to simplify language across Mass.gov

Nearly 7 million people live in Massachusetts, and millions more visit the state each year. These people come from different backgrounds and interact with the Commonwealth for various reasons.

Graphic showing more than 3 million visitors go to Mass.gov each month.

We need to write for everyone while empathizing with each individual. That’s why we write at a 6th grade reading level. Let’s dig into the reasons why.

Education isn’t the main factor

The Commonwealth has a high literacy rate and a world-renowned education network. From elementary school to college and beyond, you can get a great education here.

We’re proud of our education environment, but it doesn’t affect our readability standards. Navigating the Women, Infants, and Children (WIC) Nutrition Program can be challenging for anyone.

Complexity demands simplicity

People searching for nutrition services are doing so out of necessity. They’re probably frustrated, worried, and scared. That affects how people read and retain information.

Learn about our content strategy. Read the 2017 content team review.

This is the case for many other scenarios. Government services can be complicated to navigate. Our job is to simplify language. We get rid of the white noise and focus on essential details.

Time is not on our side

You don’t browse Mass.gov in your free time. It’s a resource you use when you have to. Think of it as a speedboat, not a cruise ship. They’ll both get you across the water, just at different speeds.

Graphic showing desktop visitors to Mass.gov look at more pages and have longer sessions than mobile and tablet visitors.

Mass.gov visitors on mobile devices spend less time on the site and read fewer pages. The 44% share of mobile and tablet traffic will only increase over time. These visitors need information boiled down to essential details. Simplifying language is key here.

Exception to the rule

A 6th-grade reading level doesn’t work all the time. We noticed this when we conducted power-user testing. Lawyers, accountants, and other groups who frequently use Mass.gov were involved in the tests.

These groups want jargon and industry language. It taught us that readability is relative.

Where we are today

We use the Flesch-Kincaid model to determine reading level in our dashboards. It accounts for factors like sentence length and the number of syllables in words.
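For reference, the standard Flesch-Kincaid grade-level formula combines exactly those two factors:

$$\text{grade} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59$$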

This is a good foundation to ensure we consistently hit the mark. However, time is the most important tool we have. The more content we write, the better we’ll get.

Writing is a skill refined over time, and adjusting writing styles isn’t simple. Even so, we’re making progress. In fact, this post is written at a 6th grade reading level.

Nov 28 2018
Nov 28

Authors are eager to learn, and a content-focused community is forming. But there’s still work to do.

Video showing highlights of speakers, presenters, and attendees interacting at ConCon 2018.

When you spend most of your time focused on how to serve constituents on digital channels, it can be good to simply get some face time with peers. It’s an interesting paradox of the work we do alongside our partners at organizations across the state. Getting in a room and discussing content strategy is always productive.

That was one of the main reasons behind organizing the first ever Massachusetts Content Conference (ConCon). More than 100 attendees from 35 organizations came together for a day of learning and networking at District Hall in Boston. There were 15 sessions on everything from how to use Mayflower — the Commonwealth’s design system — to what it takes to create an awesome service.

Graphic showing more than 100 attendees from 50 organizations attended 15 sessions from 14 presenters at ConCon 2018.

ConCon is and will always be about our authors, and we’re encouraged by the feedback we’ve received from them so far. Of the attendees who responded to a survey, 93% said they learned about new tools or techniques to help them create better content. More so, 96% said they would return to the next ConCon. The average grade attendees gave to the first ever ConCon on a scale of 1 to 10 — with 1 being the worst and 10 the best — was 8.3.

Our authors were engaged and ready to share their experiences, which made for an educational environment for their peers as well as our own team at Digital Services. In fact, it was an eye-opening experience, and we took a lot away from the event. Here are some of our team’s reflections on what they learned about our authors and our content needs moving forward.

We’re starting to embrace data and feedback

“The way we show feedback and scores per page is great but it doesn’t help authors prioritize their efforts to get the biggest gain for their constituents. We’re working hard to increase visibility of this data in Drupal.”

— Joe Galluccio

Katie Rahhal, Content Strategist
“I learned we’re moving in the right direction with our analysis and Mass.gov feedback tools. In the breakout sessions, I heard over and over that our content authors really like the ones we have and they want more. More ways to review their feedback, more tools to improve their content quality, and they’re open to learning new ways to improve their content.”

Christine Bath, Designer
“It was so interesting and helpful to see how our authors use and respond to user feedback on Mass.gov. It gives us a lot of ideas for how we can make it easier to get user feedback to our authors in more actionable ways. We want to make it easy to share constituent feedback within agencies to power changes on Mass.gov.”

Embedded tweet from @MassGovDigital highlighting a lesson on good design practices from ConCon 2018.

Joe Galluccio, Product Manager
“I learned how important it is for our authors to get performance data integrated into the Drupal authoring experience. The way we show feedback and scores per page is great but it doesn’t help authors prioritize their efforts to get the biggest gain for their constituents. We’re working hard to increase visibility of this data in Drupal.”

Our keynote speaker gave a great use case for improving user journeys

Bryan Hirsch, Deputy Chief Digital Officer
“Having Dana Chisnell, co-founder of the Center for Civic Design, present her work on mapping and improving the journey of American voters was the perfect lesson at the perfect time. The page-level analytics dashboards are a good foundation we want to build on. In the next year, we’re going to research, test, and build Mass.gov journey analytics dashboards. We’re also spending this year working with partner organizations on mapping end-to-end user journeys for different services. Dana’s experience on how to map a journey, identify challenges, and then improve the process was relevant to everyone in the room. It was eye-opening, enlightening, and exciting. There are a lot of opportunities to improve the lives of our constituents.”

Want to know how we created our page-level data dashboards? Read Custom dashboards: Surfacing data where Mass.gov authors need it

Embedded tweet from @epubpupil highlighting her positive thoughts on Dana Chisnell’s keynote presentation on mapping and improving the journey of American voters.

The Mayflower Design System is a work in progress

“It’s great to see there’s a Mayflower community forming among stakeholders in different roles across state government. ”

— Minghua Sun

Sienna Svob, Developer and Data Analyst
“We need to work harder to build a Mayflower community that will support the diversity of print, web, and applications across the Commonwealth. Agencies are willing and excited to use Mayflower and we need to harness this and involve them more to make it a better product.”

Minghua Sun, Mayflower Product Owner
“I’m super excited to see that so many of the content authors came to the Mayflower breakout session. They were not only interested in using the Mayflower Design System to create a single face of government but also raised constructive questions and were willing to collaborate on making it better! After the conference we followed up with more information and invited them to the Mayflower public Slack channel. It’s great to see there’s a Mayflower community forming among stakeholders in different roles across state government. ”

All digital channels support content strategy

Sam Mathius, Digital Communications Strategist
“It was great to see how many of our authors rely on digital newsletters to connect with constituents, which came up during a breakout session on the topic. Most of them feel like they need some help integrating them into their overall content strategy, and they were particularly excited about using tools and software to help them collect better data. In fact, attendees from some organizations mentioned how they’ve used newsletter data to uncover seasonal trends that help them inform the rest of their content strategy. I think that use case got the analytics gears turning for a lot of folks, which is exciting.”

Authors are eager and excited to learn and share

“I’d like to see us create more opportunities for authors to get together in informal sessions. They’re such a diverse group, but they share a desire to get it right.”

— Fiona Molloy

Shannon Desmond, Content Strategist
“I learned that the Mass.gov authors are energetic about the new content types that have been implemented over the past 8 months and are even more eager to learn about the new enhancements to the content management system (CMS) that continue to roll out. Furthermore, as a lifelong Massachusetts resident and a dedicated member of the Mass.gov team, it was enlightening to see how passionate the authors are about translating government language and regulations in a way that constituents can quickly and easily understand.”

Fiona Molloy, Content Strategist
“Talking to people who came to ConCon and sitting in on various sessions, it really struck me how eager our content authors are to learn — whether from us here at Digital Services or from each other. I’d like to see us create more opportunities for authors to get together in informal sessions. They’re such a diverse group, but they share a desire to get it right and that’s really encouraging as we work together to build a better Mass.gov.”

Embedded tweet from @MassGovDigital highlighting a session from ConCon 2018 in which content authors offered tips for using authoring tools on Mass.gov.

Improving content and author support is a continual process

Adam Cogbill, Content Strategist
“I was reminded that one of the biggest challenges that government content authors face is communicating lots of complex information. We need to make sure we understand our audience’s relationships to our content, both through data about their online behavior and through user testing.”

Greg Derosiers, Content Strategist
“I learned we need to do a better job of offering help and support. There were a number of authors in attendance that didn’t know about readily-available resources that we had assumed people just weren’t interested in. We need to re-evaluate how we’re marketing these services and make sure everyone knows what’s available.”

Embedded tweet from @MassGovDigital highlighting the start of ConCon 2018.

Thinking about hosting your own content conference? Reach out to us! We’d love to share lessons and collaborate with others in the civic tech community.

Nov 28 2018
Nov 28

Today, more than 80% of people’s interactions with government take place online. Whether it’s starting a business or filing for unemployment, too many of these experiences are slow, confusing, or frustrating. That’s why, one year ago, the Commonwealth of Massachusetts created Digital Services in the Executive Office of Technology and Security Services. Digital Services is at the forefront of the state’s digital transformation. Its mission is to leverage the best technology and information available to make people’s interactions with state government fast, easy, and wicked awesome. There’s a lot of work to do, but we’re making quick progress.

In 2017, Digital Services launched the new Mass.gov. In 2018, the team rolled out the first-ever statewide web analytics platform to use data and verbatim user feedback to guide ongoing product development. Now our researchers and designers are hard at work creating a modern design system that can be reused across the state’s websites and conducting end-to-end research projects to create user journey maps that improve service design.

If you want to work in a fast-paced agile environment with a good work-life balance, solving hard problems, working with cutting-edge technology, and making a difference in people’s lives, you should join Massachusetts Digital Services.

Check out some of our current postings here:

Digital Strategist

Digital Project Manager

Web Analytics Business Analyst

Didn’t see a good fit for you? Check out more about hiring at the Executive Office of Technology and Security Services and submit your resume in order to be informed on roles as they become available.

Coming soon…

Senior Drupal Developer

Director of Technology

Creative Director

Senior UI/UX Designer

Nov 28 2018
Nov 28
No two Drupalists are exactly alike. But just how much do Drupalists differ from one another? What trends are emerging in the adoption of tools and user behaviors throughout the Drupal community? Take the Drupal user tools and trends survey to help Drupal better serve its community.
Nov 28 2018
Nov 28

More than five centuries ago, Johannes Gutenberg introduced mechanical movable type printing and set the stage for the Renaissance and the Age of Enlightenment. Centuries later, digitisation has brought a volte-face in thinking and carved out new ways of sharing and governing content. The Gutenberg editor, named after Johannes Gutenberg, is one of a kind and is destined to streamline website creation and editing even for average non-technical users with its cutting-edge features.

Black and white image of Johannes Gutenberg and written text in the background


Platforms like Medium, Squarespace and Ghost provide a really unique and refreshing experience for writers, and this inspired the development of the Gutenberg editor. It was introduced to the world by Matt Mullenweg, founder of WordPress, at WordCamp Europe in 2017. The idea behind it is to make the process of adding rich content to a site simple and enjoyable. So, how can Drupal and Gutenberg be combined?

What is Gutenberg?

The Gutenberg editor allows you to govern website content in customisable chunks or blocks, without having to be adept with HTML or write shortcodes. The complete layout of the website, both front end and back end, can be controlled from a single console.

Black and white image with the word ‘Gutenberg’ written in bold


By looking at the editor as more than a content field, Gutenberg revisits a layout that has not been touched for over a decade, enabling you to design a modern editing experience. Now the question arises: why does Gutenberg look at the whole editing experience and not just the content field?

Because blocks unify several interfaces, adding them on top of the existing interface would add intricacy rather than remove it. Revisiting the interface allows for a rich and modern experience while writing, editing and publishing, all the while keeping usability and simplicity in mind. A singular block interface offers a clear path for developers to create blocks. Considering the whole interface puts the emphasis on full site customisation. A full editor screen not only modernises the foundation entirely but paves the way for a more fluid, JavaScript-powered future.

Gutenberg for Drupal

What’s the situation like in Drupal? Like WordPress, Drupal is an open source content management system (CMS), and there is a never-ending debate on which one is better (we have done our part as well). But providing a modern UI for rich content creation is a priceless feature, which is what WordPress has done by introducing the Gutenberg editor. This decoupled, React-based editing experience can work wonders for Drupal as well.

Difference between CKEditor and Gutenberg

Merging Drupal and Gutenberg is a killer combination as it allows us to empower content authors to develop rich landing pages inside a rock solid CMS framework. Before we jump into that, let’s see what the current mode of editing looks like in Drupal.

Admin interface of Drupal’s CKEditor module in action, with an illustration showing a dumbbell, gym rope, pair of shoes, water bottle and a smartwatch kept on a smartphone. Admin interface of the CKEditor module

The picture shown above is an example of the current text editor in Drupal, CKEditor, a WYSIWYG HTML editor. It is part of Drupal 8 core and is magnificent to work with. It brings the stupendous WYSIWYG editing functions of known desktop editors like Microsoft Word to the web. It is super fast and does not need any sort of installation on the client computer.

On the contrary, Gutenberg editor can make your words, pictures, and layout look as good on screen as they do in your visualisation. By leveraging blocks for the creation of all types of content, it removes inconsistent ways of customisation of Drupal and adheres to modern coding standards thereby aligning with open web initiatives. You can try it out yourself!

Table showing three columns and five rows for comparing Gutenberg editor and CKEditor


How does Gutenberg work?

In a session held at Drupal Europe 2018, a demonstration showed how the Gutenberg content editor works with Drupal. The Gutenberg module needs to be installed and enabled.
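Getting it onto a Drupal 8 site follows the usual contrib routine; a sketch using Composer and Drush:

```bash
composer require drupal/gutenberg
drush en gutenberg -y
# then enable the Gutenberg experience for the relevant content types
```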

[embedded content]


In line with the Drupal paradigm, all elements on a page are Gutenberg blocks. Blocks are basically the unifying evolution of what is now encompassed by shortcodes, embeds, meta-boxes, theme options, custom post types, post formats, and other formatting elements.

While Gutenberg comes with its own set of blocks, Drupal core has its own as well. All the existing Drupal blocks are available in the Gutenberg UI and can be inserted into a page alongside the core blocks. In addition, you can of course extend them or build your own, and you can also access the Gutenberg Cloud library for more contributed blocks.
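Custom blocks use the same JavaScript block API as upstream Gutenberg, which the module exposes through the familiar wp.* globals; a minimal sketch with an invented block name:

```js
// Registers a trivial custom block (names and markup are illustrative).
const { registerBlockType } = wp.blocks;
const el = wp.element.createElement;

registerBlockType('custom/notice', {
  title: 'Notice',
  icon: 'info',
  category: 'common',
  edit: () => el('p', { className: 'notice' }, 'Notice block (editor view)'),
  save: () => el('p', { className: 'notice' }, 'Notice block'),
});
```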

Gif showing the demo of Drupal Gutenberg with text on the left-hand side and the settings on the right-hand side. A demo of Gutenberg

The block types that are working in the first release are:

  • Content positioning: Content can be positioned flexibly, as there is no separation between what’s inside the editor and what comes before or after.
  • Font: It has an awesome font colour and size adjustment mechanism, and the UI for altering fonts and colours is eye-catching.
  • Searchable blocks: In addition to the search box at the top left, page blocks are accessible inline with the help of “/”.
  • Embedding: Whether you need to embed social posts or videos, just paste the URL and you will see it expand by itself.
  • Layout: As blocks can have child blocks, handling layout is simple. You can split your blocks into columns on a grid.

Conclusion

It is wonderful to think that Drupal is the best way to get your ideas on the web (of course it is!). But until now, you had to know how to write code to unlock the world of beautiful features that Drupal can offer. Not everyone is adept with code, and with the Gutenberg editor, you don’t need to be.

Gutenberg’s content blocks will transform how users, developers, and hosts interact with Drupal, making the development of rich web content simpler and more intuitive. It democratises publishing and can work for everyone, no matter their technical proficiency.

With our expertise in Drupal development, we can help make your digital transformation dreams come true.

Ping us at [email protected] to know more on how Gutenberg can change the editing process on your Drupal site forever.

Nov 28 2018
Nov 28

Meet John Picozzi, co-host of the weekly podcast Talking Drupal and co-organizer of the Drupal Providence Meetup and the New England Drupal Camp. John met Drupal about 10 years ago, and he is looking forward to what the next 10 years will bring.

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

I’m an Acquia-certified Site Builder, Drupal podcaster, and co-organizer of a camp and meetup. I’m the Senior Drupal Architect at Oomph, and my claim to fame is being our resident Drupal enthusiast. It's important to me that Oomph be part of the Drupal community, here and at large, in every way we can. Sharing knowledge, staying curious, and trying new things is definitely the name of the game. My Drupal work includes projects for CVS Caremark, Leica Geosystems, Blue Cross Blue Shield, and Marriott International. I am a co-organizer of the Drupal Providence Meetup and the New England Drupal Camp. I also co-host the weekly Talking Drupal podcast with Stephen Cross and Nic Laflin.

2. When did you first come across Drupal? What convinced you to stay, the software or the community, and why?

Neither, it was my job. Shortly after graduating college I worked for a company that was “building websites”. I put this in quotes because I knew nothing of Drupal or its capabilities at the time; I thought I was going to be coding HTML and CSS. After getting hired, I was told they used this content management system called Drupal. At that point it was “sink or swim”. Luckily, Mark Ferree (our senior developer at the time) was an amazing mentor and Drupal coach. After getting a few sites under my belt I was hooked. Looking back at that experience, I would say it was both the software (not having to build it from the ground up) and the community. Fun fact: I worked with Oomph’s current Director of Engineering, Rob Aubin, at that job, and I'm pretty sure it was his first interaction with Drupal too!

3. What impact has Drupal made on you? Is there a particular moment you remember?

I would have to say my first real exposure to the Drupal community made a huge impact on me. I remember it like it was yesterday. It was DrupalCon Austin and, unlike more recent DrupalCons where I travelled with members of the Oomph team, I was flying solo. Lucky for me, I was in a city full of fellow Drupalers and a friend (and former boss), Jason Pamental. I reached out to Jason and he quickly filled me in on all of the social events happening after sessions. He introduced me to people I had only heard about or talked with in the issue queue. We went to the various social events and he even brought me to a few invite-only events. It was a whirlwind trip and the experience has stuck with me all these years. So much so that I have been to DrupalCon every year since I made that first trek to Austin, and every year it’s amazing to reunite with Drupal friends and learn from the community.

4. How do you explain what Drupal is to other, non-Drupal people?

Well, most of the time when people ask me what I do, I tell them I’m a Web Developer. Sometimes when I wear one of my many Drupal shirts, someone will ask “What is Drupal?” to which I will tell them it’s a Content management system for their website. Usually, I will get some confused faces, but once in a while someone’s eyes will light up and they will want to know more. I’ll never forget once being in a mall in Massachusetts, going up an escalator, and someone on the opposite escalator said “Nice Shirt.” I nodded and didn’t think much of it at the time. A few minutes later I realized what the person said and what shirt I was wearing (a Drupal shirt of course). It brought a smile to my face. Drupal Rocks!

5. How did you see Drupal evolving over the years? What do you think the future will bring?

Drupal has evolved greatly in my 10+ years working with it. I started on Drupal 4.7 with Ubercart and saw the release of Drupal 6. More memorable was Oomph's Drupal 7 release party. I remember being excited for the release of Drupal 7 and the improvements it brought. However, all of that excitement and improvement pales in comparison to now working on Drupal 8.6 with Commerce 2.x, core translations, and the newly added Configuration Management system in core. With the release of Drupal 8 and a firm and frequent release schedule, Drupal keeps improving year after year. Like a fine wine, Drupal gets better with age. I look forward to the coming releases, a finished media system, Layout Builder, and improvements to core. Drupal's future is bright and I look forward to the next 10+ years!

6. What are some of the contribution to open source code or community that you are most proud of?

I found a core bug last week, that was pretty cool! I spoke at DrupalCon for the first time in Nashville. I co-founded and co-organize the New England Drupal Camp, which is a camp that aims to bring the New England Drupal community together in one place each year. I also co-organize the Providence Drupal Meetup each month at the Oomph, Inc. offices. Oomph is working on a helper module for Paragraphs called Oomph Paragraphs. It’s all pretty exciting and keeps me coming back for more!

7. Is there an initiative or a project in the Drupal space that you would like to promote or highlight?

Well, I work with commerce and translation quite a bit. The advances to those two systems in Drupal 8 have been amazing. I'm looking forward to kicking off a project with the Commerce Guys in the next few weeks. I have a feeling that will lead to some enhancements to Commerce 2.x. On Talking Drupal we have talked to many maintainers and co-maintainers in the Drupal community. Recently we talked with Adam, one of the co-maintainers of the media initiative. In the last few weeks, I have been excited about the improvements to that system over Drupal 7. I also think that the work Jacob Rockowitz is doing maintaining the Webform module is inspiring. He is providing great documentation and training, as well as frequent updates to the module. There are so many cool projects and initiatives in the Drupal space, I couldn't possibly name them all.

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor.

Watching my kids grow and learn is very exciting. My oldest just turned 6 and is turning into a very skilled Lego architect, as well as a soccer star. His brother (my one-and-a-half-year-old) is just learning to walk and has some choice words he likes to use: “More! More!”. In the tech space, I think we are moving ahead with some very interesting ideas and technology. The Internet of Things is always amazing to me. The new Apple Watch has got me thinking it's time to replace my analog version. The idea of wearables is great, and I am really waiting for Apple to come out with glasses. As a lifelong wearer of glasses, it would be amazing to have Siri, headphones, and a phone built into the glasses I wear every day.

In closing, when thinking about the Drupal community and answering these questions, this quote from American scientist Margaret Mead kept coming to mind. I leave you with it: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has.”

Nov 28 2018
Nov 28

Once I got a task to fix a bug. While the bug itself was easy to fix, I had to find the commit where it was introduced. To explain why I had to do that, I first have to describe our development process a bit.

Our branching model

The exact branching and deployment workflow may differ from project to project, but we have two mainstream versions. One is for legacy amazee.io hosting and one is for Lagoon.

Here is the common part. The production instance always uses the latest prod branch. When we start to work on a new task, we create a new branch from prod. When the task is tested and demoed, we deploy it. Separately from other tasks.

We do this to speed up the delivery process, and to make our clients happy.

If a project lives on the legacy hosting system, it usually has PROD and DEV environments. For a task to be tested and demoed we have to deploy it to DEV first.

With Lagoon, we have a separate environment for each task, and this is awesome!

The bug I had to fix was on a project hosted on the legacy system. The bug was found on the DEV environment, and it was not present on PROD. So one of the active tasks had introduced it (and at that time we had lots of active tasks). I had to find which one.

The bug

An element was appearing on a page that it should not have appeared on.

The project

The backend is built with Drupal. The frontend is also Drupal, but we used progressive decoupling to embed dynamic Vue.js elements. In between: our beloved GraphQL. No test coverage yet (nooooooooooooooo.com), but we have a plan to add it with some end-to-end testing framework. Most probably it will be Cypress.

Cypress

It's a modern e2e testing framework. It has lots of cool features, and some of them, like time traveling, help you not only to write tests but to develop in general. Just watch the 1-minute video on the Cypress website and you'll love it.

Git bisect

This is a very easy and very powerful Git tool. To make it work, you just need to give it three things:

  • a commit where things are good
  • a commit where things are bad
  • a command to test if things are good or bad

The result would be the first bad commit.

Docs: https://git-scm.com/docs/git-bisect

The search

Finally, I can share my experience in combining these two tools.

Since we don't yet use Cypress on the project, I installed it globally on my machine with npm i -g cypress and created cypress.json in project root with {} contents. That's all Cypress needed.

To run Git bisect, I used the following commands:
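Roughly the following; the branch names stand in for the actual good and bad revisions:

```bash
git bisect start
git bisect bad dev           # the DEV branch exhibits the bug
git bisect good prod         # PROD is known to be clean
git bisect run ./my_test.sh  # let Git drive the binary search
git bisect reset             # return to the original checkout when done
```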

The my_test.sh script looked like this:
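In outline (the real script was project-specific):

```bash
#!/bin/bash
# git bisect's convention: exit 0 = good commit, non-zero = bad commit,
# and 125 = this commit cannot be tested, skip it.
drush cr || exit 125   # rebuild Drupal caches after each checkout jump
cd path/to/vue
cypress run --spec cypress/integration/test.js
```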

(I was actually lucky that for Drupal I only had to run a cache clear after each Git jump. If, for example, there had been Drupal core updates between the bad and good commits, running drush cr would not have worked. In that case I could have installed Drupal every time from existing configuration, which would have been a bit slower.)

And here is the Cypress test which I put into the path/to/vue/cypress/integration/test.js file:
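Something along these lines; the URL and selector are invented for illustration:

```js
// Fails on commits where the element wrongly appears on the page.
describe('regression: unexpected element', () => {
  it('does not render the element', () => {
    cy.visit('http://project.docker.amazee.io/some-page');
    cy.get('.unexpected-element').should('not.exist');
  });
});
```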

It took a little time to set this all up, but the result was good: I was able to identify the commit in which the bug was introduced.

Sum up

Modern e2e testing frameworks are easy to set up and use. They can do more than just automated testing. All it takes is some imagination.

For example, a colleague of mine once had a task to do a content update on a project using an Excel file as a source. One way to do it was to do everything by hand, copy-pasting the data. Another way would be to write a one-time importer. But instead, he turned the Excel file into JSON data and used TestCafe to do the click-and-paste job. This was faster than the first two options. And it was quite cool to see the visualization of the automated task; it's so nice when you can see the result of your work.

Nov 27 2018
Nov 27

Attending Drupal Events grows our community

In my last blog post, Talking about Advanced Webforms and Showing Code, I mentioned my presentation needed work and I needed practice. At DrupalCamp Atlanta, I was able to do just that. I delivered my 'Advanced Webform' presentation and received some very valuable constructive criticism from Michael Anello (ultimike) on how to simplify and improve my slide deck.

At DrupalCamps I enjoy presenting and sharing my passion for the Webform module with the Drupal community. I also value hearing and seeing what other people are doing with Drupal. In Atlanta, listening to Jesus Manuel Olivas (jmolivas) talk passionately about using GatsbyJS with headless Drupal was inspiring; this approach represents a huge paradigm shift for all enterprise content management systems. Besides hearing about the latest and greatest technologies at DrupalCamp Atlanta, I also learned how much work it is to organize a DrupalCamp.

The organizers of DrupalCamp Atlanta did a fantastic job. Kaleem Clarkson (kclarkson), one of the camp's organizers, posted a call to action for organizations using Drupal to sponsor an event.

Sponsoring an event

Kaleem's post, titled Sponsoring a DrupalCamp is Not About the Return on Investment (ROI), reminds everyone of the importance and value of sponsoring a camp. He also acknowledges that some companies' focus and roles are shifting within the Drupal community. I think the most important question Kaleem addresses is…

If Drupal is the "enterprise solution", what does this mean to our community and camps?

I am 100% okay with Drupal being an enterprise solution. Large organizations are using Open Source and they need an enterprise content management solution. Enterprise companies using Drupal need to train and support their employees; DrupalCamps are a great place to start.

Companies should sponsor Drupal events and the community

Kaleem makes solid arguments as to why a company should sponsor an event, including visibility, credibility, and collegiality. Companies have to keep track of their ROI, so it is important to ask what a company gets and wants from a Drupal event. The best way to get companies to open their wallets is to show them some results. Drupal is the result of companies and people sharing time and money to build something outstanding. We need to entice new companies to sponsor events and sprints.

Sponsor a sprint

Code sprints happen at every Drupal event. Sometimes they are organized collaborations with a dedicated time and place; other times people are committing last-minute patches and tagging a new release before a presentation. Events and sprints tend to go hand-in-hand, but it’s worth emphasizing that code sprints produce tangible results.

I always get nervous before I get up in front of a large audience, so instead of biting my nails, I work on fixing a bug or writing some cool feature enhancement to the Webform module. I have recognized this pattern, so now when I go to camps I have a personal Webform code sprint agenda.

At Design4Drupal, I started caring about and fixing accessibility issues. At DrupalCamp Atlanta, I finally fixed all the lingering issues with conditional logic and even created a ticket for a stable release of Webform. My point here is that things happen at Drupal events and sprints, and sponsoring companies help make these things happen.

Companies should help sponsor and provide content and speakers for Drupal events.

Sponsor a speaker

Drupal events can’t happen if there is no one willing or able to get up at the podium and talk about Drupal. Good presentations and keynotes can inspire and even shift the direction of the entire Drupal community.

Who is speaking at a Drupal event?

There are generally two types of speakers at a Drupal event: someone representing their organization/employer, and someone representing their personal and professional interests. Both types are valuable and add to the collective discussion and evolution of Drupal and its community. Some speakers, including myself, represent an organization as well as their personal and professional interests.

Organizations need to encourage their employees to speak about and contribute to Drupal. Everyone benefits when an employee participates in a Drupal event. Personally, learning how to speak publicly is my biggest personal and professional accomplishment from working on the Webform module. Having my recorded sessions available online lets potential employers and clients know that I am not afraid to get up in front of a large group of people and talk about Drupal.

Organizations also benefit from having employees promote services, hosting, products, and ideas being built around Drupal. Of course, we don't want DrupalCamps to become paid infomercials but these events provide opportunities for organizations and developers to collectively help steer the direction of Drupal.

Supporting individual speakers

There are a bunch of people in the Drupal community who are the maintainers of specific modules or APIs. For example, I am the maintainer of the Webform module for Drupal 8. I travel on my own dime to events and talk about the Webform module and related topics. The only limitation I run into is the cost per event, which for anything outside the New York tri-state area is around $750 to $1,000. I am not exactly sure whether people should be paid to speak at events; however, I know first hand that travel expenses limit where and how often I speak at Drupal events.

Should organizations sponsor speakers?

There are three immediate benefits to sponsoring a speaker.

First, you get the subject matter expert to come to your community and share their ideas. For example, as a community, we need people passionately talking about using GatsbyJS and Headless Drupal at as many Drupal events as possible.

Second, you get to pick the brains of subject matter experts for ideas. I know that if an organization paid for my travel expenses, I would have no hesitation spending a few hours helping them review and understand how they can best leverage the Webform module and Drupal 8 for their client's projects and needs.

Finally, helping to offset some of the costs for someone who is sharing their ideas and time with the Drupal community is a great way to say thanks.

Proposing that a speaker should be sponsored/paid to talk at a Drupal event is going to require some more thought and work. There is one aspect to all our Drupal events which should be paid for… the red button.

Sponsor the red button

Kevin Thull has been recognized for his ongoing contribution to the Drupal community: making sure our presentations are recorded and shared online. I will let everyone in on a little-known secret to getting your session proposal accepted at a DrupalCon - you need prior speaking experience. Kevin's session recordings from DrupalCamps are vital to the DrupalCon speaker selection process.

Let's not forget that Kevin's 600-plus session recordings help share information with people who could not attend a DrupalCamp. We have all faced the quagmire of trying to understand how a module or Drupalism is supposed to work and have watched a recorded presentation to get over this challenging hump.

Kevin started a GoFundMe campaign for "paying for food and commuter costs out of my own pocket, which average around $350 per camp." He is really close to his goal of $5000. We should all contribute to making sure he reaches this goal and then exceeds it.

Click the damned button!

Final thoughts

The Drupal community is massive, and we collaborate to solve some amazingly challenging problems. The challenge of sustainability is only going to be addressed when we figure out our economy around Open Source software. Realistically, the recent high-profile acquisitions of Red Hat and GitHub are tied to these companies' reliance on software-as-a-service (SaaS) and cloud computing. Both companies contribute heavily to Open Source. These acquisitions are significant to the ongoing growth and success of Open Source, but they are large transactions.

The Drupal and Open Source communities still need to figure out what smaller transactions are required to help grow and sustain our communities. For now, giving $10, $20, or $50 to help Kevin Thull continue to record our presentations and share our ideas and passions is a good start.

Support the Red Button

What are your thoughts?

I am sure Kevin and I are not alone in trying to figure out how to offset the costs of presenting at and recording DrupalCamps. Likewise, there are organizations figuring out how to start or continue supporting DrupalCamps and meetups. Please post your thoughts below and on Drupal.org.

via Issue #3012321: Governance Taskforce Recommendation "Provide Greater Support For In-Person.


Nov 27 2018
Nov 27

If you’re evaluating CMS platforms for an upcoming project, Drupal should be one platform that you consider. It was built for generating content and also has robust ecommerce abilities through the Drupal Commerce module. If you only need to publish content, it’s great for that. If you only need ecommerce, it’s great for that, too. The fact that it does both very well is a winning combination that will always be available to you, now or down the road. This post takes a look under the hood of Drupal to show you why you might want to take a first, or second, look at the Drupal CMS.

Drupal for Content

As mentioned in the introduction, Drupal was built for content creation and it is very good at that. But, if you’re unfamiliar with Drupal, you probably wouldn’t understand WHY it works so well for this. Here are some of the things that really separate Drupal from other platforms.

Content Types

At the core of Drupal content creation is something called a Content Type. A content type is a collection of fields that are used to generate a certain type of content, such as a general page, landing page, blog post, press release, etc. It’s one of the first pieces of a new Drupal site to be configured.

Configuring content types is mostly done through Drupal’s admin user interface (UI). Through the interface, you add fields. Any website form you’ve seen in the past is made up of fields for you to enter your information; it’s the same in Drupal, except that you’re creating the fields that are used to generate content. For example, a blog post typically contains a title (text field), body (textarea field), header image (image field), publish date (date field), and author and category (reference fields). For the blog content type, all of these fields would be added, as well as any others that you need. The field options available are many. If you don’t see a field that you need, chances are someone has already created it and you just need to install a module that adds it in.

After all of the fields have been added, you then configure how the fields are displayed to your content creators and to the end user viewing the content. I won’t get into details here, but many fields have options for how that content gets rendered on the page. Using an image field as an example, you can choose to render the image as the original image, or as a processed image (like a thumbnail), or as the url path to the image on the server. Each option has its uses once you start theming the site.

Regions and Blocks

Keeping with the blog post example, when viewing a blog post you typically see other elements on the page such as a subscribe form, a list of recent posts, and calls to action. It doesn’t make sense to manually add these things to every single blog post, so instead we place this content in something called a Block and assign the block to a Region.

Regions are added to your page templates and are there for you to place blocks into. When adding a block to a region, each block can be configured independently, so you can assign blocks to specific pages, content types, access levels (i.e. anonymous vs. logged-in users), etc. A block can be many different things, but one type of block is similar to a content type in that you can add fields that are used to make up the block.

Views

A View is a powerful tool within Drupal for creating dynamic content based on other content. Views allow you to take existing content, manipulate it, and display it in another way. They can be used to create both pages and blocks.

Again, using the blog as an example, if you look at a page that is listing all of your blog posts at one time, this is most likely a view. The view is taking content generated using the blog content type, manipulating each post so that you’re only seeing specific information such as a date, title and introduction, and then adding a ‘Read More’ link after the introduction. Not only is the view manipulating each post like this, it’s also displaying the 10 most recent posts and showing you a ‘Load More’ button afterwards to load the next 10 posts.

This is a pretty simple example, but as you can see it’s quite powerful. You can use as much or as little of the content information as you need and it gives you fine-grained control to use and re-use your content in new ways. 

Metatags

Any serious content platform needs to include a robust set of metatag options. The Metatag module for Drupal is excellent in this regard. You can set default options for every content type and override those defaults for individual pieces of content if needed. You can choose whether your content should be crawled by search bots, how your post would appear on social media if shared, and more.

Workflows

This might not apply to you if you’re the only one creating content for your website, but, if you have a team of content creators, workflows let you assign specific permissions to your teammates. For example, you can allow your writers to draft content, your editors to approve the content, and finally a publisher can publish the content. Instead of explaining it all here, here’s a separate article and video that shows you how it works.

Modules

Anything that adds new functionality to the base Drupal platform is called a module. A module can be small (such as adding a new field type) or big (such as adding ecommerce functionality). You can separately Google “best modules for Drupal” and see a whole bunch of popular modules, but one of our favorites that I want to mention for content creation is the “Paragraphs” module. This module lets you create reusable sections of content that can be used within your content types and product pages. So, instead of just a body of text you can add CTA straps, rich media, image galleries, forms, etc., all within your content. We use it on our own site to quickly make unique page layouts for our content.

Theming

Drupal’s theming engine enables your designers and front end developers to implement anything they can dream up. You have broad control over the look and feel of your site so that everything is consistent, but you can also create totally unique pieces of content or individual pages that may break away from your normal styleguide.

Say you have a new product lineup that you’re launching. Your store branding is one thing, but this product has its own unique branding and personality that you want to convey. You can give your designers full control over how the product should appear on your website, and your front end developers can make it happen using the granular template override system.

Drupal for Commerce

The Commerce module for Drupal turns your Drupal site into a fully fledged ecommerce platform that is 100% capable of running any size of ecommerce site you throw at it. And remember, this is adding functionality to Drupal, so you still maintain the ability to do all of the content side of things mentioned above. 

In fact, not only can you still generate other content, but all of the things that make content creation great on Drupal also apply to the ecommerce side of your site. Your product pages are totally fieldable and themable, just like the content. You can assign blocks to your product pages. You can use views to set up your catalog and create blocks that surface featured or related products. Everything is fully customizable.

There are also many modules available specifically for Commerce that give you even more functionality and integrations, and this is actually where ecommerce on Drupal becomes a “big deal”. Drupal Commerce is API first, which means that it was made to be able to connect to other services. So while you might run your ecommerce store on Drupal Commerce, you will most likely also use other software for your business accounting, marketing and customer relations, to name a few. Drupal Commerce can integrate with these services and share information in order to automate tasks.

We have a whole article that drills down on this topic and explains why ecommerce platforms like Drupal Commerce can be a great fit for your business. I would recommend reading it here.

Content and Commerce

We’ve really only scratched the surface on what Drupal can do from both a content and commerce perspective. I hope you’re beginning to see the whole picture. 

The truth is that most ecommerce platforms don’t do both content and commerce well. You can definitely find many great content creation platforms out there, but can they also do ecommerce? Likewise, there are a ton of ecommerce platforms that will sell your products, but how well can you create other content with them? Do you have the flexibility to customize one or all product pages in the way that works best for your products? And can you integrate that platform with other services?

These are all important questions to ask even if you don’t think you need a robust content platform or an ecommerce component now. If you think you might need it in the future, planning ahead could save you a headache later. While there are a lot of options out there and I encourage you to explore them, Drupal should be high on your list of possible options.

Try A Demo

It’s one thing to say Drupal is great at all of these things, but why not give it a try? We’ve created a complete Drupal demo that showcases both content and commerce together. Click the link below to check it out and see what you think. If you’re interested in exploring how Drupal can fit with your business, feel free to Contact Us. We’d be happy to have that discussion with you.

Demo Drupal Commerce today! View our demo site.

Nov 27 2018
Nov 27

Our team loves exploring and using hot trends in development, one of which is decoupled Drupal architecture. Our previous post was devoted to using decoupled Drupal with JSON.API, and the hero of today’s story is “the Great Gatsby”. It sounds like the famous literary hero, but Gatsby.JS is a hot new JavaScript tool that promises to become equally famous and deserve a hundred books! In this post, we will discuss how it works and the benefits of using decoupled Drupal 8 with Gatsby.JS. And, of course, you can always rely on our Drupal experts to implement it all.

Gatsby.JS: what it is and how it works

Gatsby.JS is defined as a static site generator, though its capabilities approach those of a front-end framework. Gatsby is built on some very hot front-end tools, including:

  • React.JS — the amazingly popular JavaScript library for building complex interfaces
  • GraphQL — the super efficient query language
  • Webpack — the great JavaScript module bundler 

Gatsby.JS is meant for building blazing-fast static sites. It fetches data from virtually any source and generates static content using GraphQL. Right now, there are 500+ source plugins to establish the connection between particular data sources and Gatsby. The sources include YouTube, Twitter, Hubspot, Shopify, Trello, Vimeo, Google Sheets, content management systems like Drupal and WordPress, and so on.

Gatsby uses source plugins and GraphQL

 

Decoupled Drupal 8 and Gatsby.JS: the great duet and its benefits

One of the hottest and most beneficial combinations today is Gatsby and Drupal 8. In the decoupled, or headless, Drupal architecture, Drupal serves as the backend only, while Gatsby.JS handles the presentation layer.

Drupal 8 and Gatsby.JS are both open-source, have a large and active community and a huge ecosystem of add-on modules or plugins. And Drupal 8 has built-in web services to make integration a breeze. 

What makes this combination so beneficial? The simplicity and speed of a static site combines perfectly with the power and flexibility of the backend provided by the Drupal 8 CMS. Here are at least some of the features that we get in the end:

  • Unmatched speed. Gatsby.JS pre-fetches all pages of the website instead of querying the database every time on demand, which makes navigation enjoyable and amazingly fast. Gatsby is a static PWA (progressive web app) generator. It efficiently fetches only the critical HTML, CSS, and JS files. 
  • Easy setup. No cumbersome deploy and setup processes will be needed with Gatsby. It builds your site as static files that can be quickly deployed anywhere.
  • Great personalization features. Drupal-and-Gatsby combinations can feature awesome user personalization and authentication capabilities.
  • Awesome content editing. Usually, static site generators need writing content in Markdown, which could be cumbersome for content editors. But the problem is solved with Drupal 8 as a backend! Drupal 8 content creation features are a joy for any content editor. 

One example of using decoupled Drupal 8 and Gatsby.JS is the demo site Umami Food Magazine. The site is built on the headless Drupal distribution Contenta CMS with Gatsby.JS.

Umami Food Magazine uses Gatsby

If this looks appetizing enough, contact our Drupal team right now to combine decoupled Drupal 8 with Gatsby.JS for you! Or continue reading about some implementation details. 

Some specifics of using Drupal 8 and Gatsby.JS

In the decoupled setup, both the Drupal 8 and Gatsby sites need to be prepared to work together. They are connected by means of Gatsby’s source plugin for Drupal, which fetches data, including images, from Drupal 8 websites with JSON API installed.

So it is necessary to install and enable the JSON API and JSON API extras contributed Drupal modules, as well as enable the core Serialization module on our Drupal website.

Enable JSON API module
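
With Drush available, enabling these modules can also be done in a single command (module machine names as published on drupal.org):

drush en jsonapi jsonapi_extras serialization -y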

Our next destination is Configuration — Web Services — JSON API Overwrites.

Configure JSON API

In Settings, we need to make sure the path prefix for JSON API is /jsonapi. This is what the Gatsby site will need to know.

Configure JSON API

In People — Roles — Permissions we give access to the JSON API list of resources to users with all roles, including anonymous.

Permissions for JSON API

Our Drupal site is ready for Gatsby integration, and we now need to prepare our Gatsby site. It begins with installing Gatsby’s CLI:

npm install --global gatsby-cli

Then we follow all the site creation steps in the “Get started” documentation. Gatsby also offers pre-configured starters for site creation.

Gatsby starters
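
For example, creating a new site from the default starter is one command (the site name here is arbitrary):

gatsby new my-drupal-gatsby-site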

Then we run Gatsby with the command, after which the Gatsby site should become available at localhost:8000:

gatsby develop

The above-mentioned source plugin for Drupal then needs to be installed on the Gatsby site.
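
Installing it is typically a one-line npm command (the package name below is the plugin’s name as published on npm):

npm install --save gatsby-source-drupal

Next, we add the piece of code from the plugin’s documentation to the gatsby-config.js file, changing the URL to the one of our Drupal site: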

plugins: [
 {
 resolve: `gatsby-source-drupal`,
 options: {
 baseUrl: `https://our-site-name.com/`,
 apiBase: `api`, // optional, defaults to `jsonapi`
 },
 },
]

We then configure our Gatsby site to fetch exactly the content we need from Drupal. We create the appropriate pages in /src/pages on the Gatsby site, importing React in each page's JS file and exporting a component along with a GraphQL query.

And we can explore the Drupal data and build queries exactly how we want in the GraphiQL explorer at http://localhost:8000/___graphql.
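
As an illustration, a minimal page at src/pages/articles.js that lists Drupal article titles could look like this (the allNodeArticle type follows gatsby-source-drupal's naming convention for the article content type):

import React from 'react';
import { graphql } from 'gatsby';

// Render the list of article titles fetched from Drupal at build time.
const ArticlesPage = ({ data }) => (
  <ul>
    {data.allNodeArticle.edges.map(({ node }) => (
      <li key={node.id}>{node.title}</li>
    ))}
  </ul>
);

export default ArticlesPage;

// The result of this page query is passed to the component as the `data` prop.
export const query = graphql`
  {
    allNodeArticle {
      edges {
        node {
          id
          title
        }
      }
    }
  }
`;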

It all wraps up with one last command, which builds our Gatsby site with the Drupal data, ready to deploy:

gatsby build

This is just a very brief description of getting Drupal 8 to work with Gatsby. Our experts are ready to do the setup exactly in accordance with your wishes.

Enjoy the combination of decoupled Drupal 8 and Gatsby.JS!

If you are interested in using decoupled Drupal 8 and Gatsby.JS, either on an existing project or on a new one, contact our Drupal developers. Our Drupal 8 team has great experience in third-party integration. We will advise you on the best decoupled setup and, of course, implement it smoothly. Let’s enjoy the latest and greatest technologies!

Nov 27 2018
Nov 27

Batch processing is an important aspect of many Drupal projects, even more so when we need to process huge amounts of data.

The main advantage of using batch is that it allows large amounts of data to be processed in small chunks, or page requests, that run without any manual intervention. Rather than using a single page load to process lots of data, the batch API allows the data to be processed across a number of small page requests.

Using batch processing, we divide a large process into small pieces, each one of them executed as a separate page request, thereby avoiding stress on the server. This means that we can easily process 10,000 items without using up all of the server resources in a single page load.

This helps to ensure that the processing is not interrupted due to PHP timeouts, while users are still able to receive feedback on the progress of the ongoing operations.

Some uses of the batch API in Drupal:

  • Import or migrate data from an external source
  • Clean-up internal data
  • Run an action on several nodes
  • Communicate with an external API

Normally, batch jobs are launched by a form. However, what can we do if we want a *nix crontab to launch them on a regular basis? One of the best solutions is to use an external command like a custom Drush command and launch it from this crontab.

In this post we are going to create a custom Drush 9 command that loads all the nodes of a content type passed in as an argument (page, article ...). Then a batch process will simulate a long operation on each node. Next, we'll see how to run this drush command from crontab.

You can find the code for this module here: https://github.com/KarimBoudjema/Drupal8-ex-batch-with-drush9-command

This is the tree of the module:

tree web/modules/custom/ex_batch_drush9/
web/modules/custom/ex_batch_drush9/
|-- composer.json
|-- drush.services.yml
|-- ex_batch_drush9.info.yml
|-- README.txt
`-- src
    |-- BatchService.php
    `-- Commands
        `-- ExBatchDrush9Commands.php

We are going to proceed in three steps:

  1. Create a class to host our two main callback methods for batch processing (BatchService.php).
  2. Create our custom Drush 9 command to retrieve nodes, to create and process the batch sets (ExBatchDrush9Commands.php).
  3. Create a crontab task to run automatically the Drush command at scheduled times.

1. Create a BatchService class for the batch operations

A batch process is made of two main callback functions: one for processing each batch and another for post-processing operations.

So our class will have two methods: processMyNode() for processing each batch, and processMyNodeFinished() to be launched when the batch processing is finished.

It's best practice to store the callback functions in their own file. This keeps them separate from anything else that your module might be doing. In this case I prefer to store them in a class that we can reuse later as a service.

Here is the code of BatchService.php

<?php
namespace Drupal\ex_batch_drush9;
/**
 * Class BatchService.
 */
class BatchService {
  /**
   * Batch process callback.
   *
   * @param int $id
   *   Id of the batch.
   * @param string $operation_details
   *   Details of the operation.
   * @param object $context
   *   Context for operations.
   */
  public static function processMyNode($id, $operation_details, &$context) {
    // Simulate long process by waiting 100 microseconds.
    usleep(100);
    // Store some results for post-processing in the 'finished' callback.
    // The contents of 'results' will be available as $results in the
    // 'finished' function (in this example, processMyNodeFinished()).
    $context['results'][] = $id;
    // Optional message displayed under the progressbar.
    $context['message'] = t('Running Batch "@id" @details',
      ['@id' => $id, '@details' => $operation_details]
    );
  }
  /**
   * Batch Finished callback.
   *
   * @param bool $success
   *   Success of the operation.
   * @param array $results
   *   Array of results for post processing.
   * @param array $operations
   *   Array of operations.
   */
  public static function processMyNodeFinished($success, array $results, array $operations) {
    $messenger = \Drupal::messenger();
    if ($success) {
      // Here we could do something meaningful with the results.
      // We just display the number of nodes we processed...
      $messenger->addMessage(t('@count results processed.', ['@count' => count($results)]));
    }
    else {
      // An error occurred.
      // $operations contains the operations that remained unprocessed.
      $error_operation = reset($operations);
      $messenger->addMessage(
        t('An error occurred while processing @operation with arguments : @args',
          [
            '@operation' => $error_operation[0],
            '@args' => print_r($error_operation[1], TRUE),
          ]
        )
      );
    }
  }
}

In the processMyNode() method we process each element of our batch. As you can see, in this method we just simulate a long operation with the usleep() PHP function; here we could instead load each node or connect to an external API. We also grab some information for post-processing. Note that both callback methods are declared static, since the batch system invokes them from a 'Class::method' string.

In the processMyNodeFinished() method, we display relevant information to the user and we can even save the unprocessed operations for a later process.

2. Create the custom Drush 9 command to launch the batch

This is the most important part of our module. With this Drush command, we'll retrieve the data and fire the batch processing on those data.

Drush commands are now based on classes and the Annotated Command format. This will change the fundamental structure of custom Drush commands. This is great because we can now inject services in our command class and take advantage of all the OO power of Drupal 8.

A Drush command is composed of three files:

drush.services.yml - This is the file where our Drush command definition goes into. This is a Symfony service definition. Do not use your module's regular services.yml as you may have done in Drush 8 or else you will confuse the legacy Drush, which will lead to a PHP error.

You can see that in our example we inject two core services into our command class: entity_type.manager to access the nodes to process, and logger.factory to log some pre-process and post-process information.

services:
  ex_batch_drush9.commands:
    class: \Drupal\ex_batch_drush9\Commands\ExBatchDrush9Commands
    tags:
      - { name: drush.command }
    arguments: ['@entity_type.manager', '@logger.factory']

composer.json - This is where we declare the location of the Drush command file for each version of Drush by adding the extra.drush.services section to the composer.json file of the implementing module. This is now optional, but will be required for Drush 10.

{
    "name": "org/ex_batch_drush9",
    "description": "This extension provides new commands for Drush.",
    "type": "drupal-drush",
    "authors": [
        {
            "name": "Author name",
            "email": "[email protected]"
        }
    ],
    "require": {
        "php": ">=5.6.0"
    },
    "extra": {
        "drush": {
            "services": {
                "drush.services.yml": "^9"
            }
        }
    }
}

MyModuleCommands.php - (src/Commands/ExBatchDrush9Commands.php in our case) It's in this class that we are going to define the custom Drush commands of our module. This class uses the Annotated method for commands, which means that each command is now a separate function with annotations that define its name, alias, arguments, etc. This class can also be used to define hooks with @hook annotation.

Some of the annotations available for use are:

@command: This annotation is used to define the Drush command. Make sure that you follow Symfony’s module:command structure for all your commands.
@aliases: An alias for your command.
@param: Defines the input parameters. For example, @param: integer $number
@option: Defines the options available for the commands. This should be an associative array where the name of the option is the key and the value could be - false, true, string, InputOption::VALUE_REQUIRED, InputOption::VALUE_OPTIONAL or an empty array.
@default: Defines the default value for options.
@usage: Demonstrates how the command should be used. For example, @usage: mymodule:command --option
@hook: Defines a hook to be fired. The default format is @hook type target, where type determines when the hook is called and target determines where the hook is called.

For a complete list of all the hooks available and their usage, refer to: https://github.com/consolidation/annotated-command

Here is the code for our Drush command.

<?php
namespace Drupal\ex_batch_drush9\Commands;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Drush\Commands\DrushCommands;
/**
 * A Drush commandfile.
 *
 * In addition to this file, you need a drush.services.yml
 * in root of your module, and a composer.json file that provides the name
 * of the services file to use.
 *
 * See these files for an example of injecting Drupal services:
 *   - http://cgit.drupalcode.org/devel/tree/src/Commands/DevelCommands.php
 *   - http://cgit.drupalcode.org/devel/tree/drush.services.yml
 */
class ExBatchDrush9Commands extends DrushCommands {
  /**
   * Entity type service.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  private $entityTypeManager;
  /**
   * Logger service.
   *
   * @var \Drupal\Core\Logger\LoggerChannelFactoryInterface
   */
  private $loggerChannelFactory;
  /**
   * Constructs a new ExBatchDrush9Commands object.
   *
   * @param \Drupal\Core\Entity\EntityTypeManagerInterface $entityTypeManager
   *   Entity type service.
   * @param \Drupal\Core\Logger\LoggerChannelFactoryInterface $loggerChannelFactory
   *   Logger service.
   */
  public function __construct(EntityTypeManagerInterface $entityTypeManager, LoggerChannelFactoryInterface $loggerChannelFactory) {
    $this->entityTypeManager = $entityTypeManager;
    $this->loggerChannelFactory = $loggerChannelFactory;
  }
  /**
   * Update Node.
   *
   * @param string $type
   *   Type of node to update
   *   Argument provided to the drush command.
   *
   * @command update:node
   * @aliases update-node
   *
   * @usage update:node foo
   *   foo is the type of node to update
   */
  public function updateNode($type = '') {
    // 1. Log the start of the script.
    $this->loggerChannelFactory->get('ex_batch_drush9')->info('Update nodes batch operations start');
    // Check the type of node given as argument, if not, set article as default.
    if (strlen($type) == 0) {
      $type = 'article';
    }
    // 2. Retrieve all nodes of this type.
    try {
      $storage = $this->entityTypeManager->getStorage('node');
      $query = $storage->getQuery()
        ->condition('type', $type)
        ->condition('status', '1');
      $nids = $query->execute();
    }
    catch (\Exception $e) {
      $this->output()->writeln($e);
      $this->loggerChannelFactory->get('ex_batch_drush9')->warning('Error found @e', ['@e' => $e]);
    }
    // 3. Create the operations array for the batch.
    $operations = [];
    $numOperations = 0;
    $batchId = 1;
    if (!empty($nids)) {
      foreach ($nids as $nid) {
        // Prepare the operation. Here we could do other operations on nodes.
        $this->output()->writeln("Preparing batch: " . $batchId);
        $operations[] = [
          '\Drupal\ex_batch_drush9\BatchService::processMyNode',
          [
            $batchId,
            t('Updating node @nid', ['@nid' => $nid]),
          ],
        ];
        $batchId++;
        $numOperations++;
      }
    }
    else {
      $this->logger()->warning('No nodes of this type @type', ['@type' => $type]);
    }
    // 4. Create the batch.
    $batch = [
      'title' => t('Updating @num node(s)', ['@num' => $numOperations]),
      'operations' => $operations,
      'finished' => '\Drupal\ex_batch_drush9\BatchService::processMyNodeFinished',
    ];
    // 5. Add batch operations as new batch sets.
    batch_set($batch);
    // 6. Process the batch sets.
    drush_backend_batch_process();
    // 7. Show some information.
    $this->logger()->notice("Batch operations end.");
    // 8. Log some information.
    $this->loggerChannelFactory->get('ex_batch_drush9')->info('Update batch operations end.');
  }
}

In this class, we first inject our two core services in the __construct() method: entity_type.manager and logger.factory.

Next, in the updateNode() annotated method we define our command with three annotations:

@param string $type - Defines the input parameter, the content type in our case.
@command update:node - Defines the name of the Drush command. In this case it's "update:node" so we would launch the command with: drush update:node
@aliases update-node - Defines an alias for the command

The main part of this command is the creation of the operations array for our batch processing (See points 3,4,5 and 6). Nothing strange here, we just define our operations array (3 and 4) pointing to the two callback functions that are located in our BatchService.php class.

Once the batch operations are added as new batch sets (5), we process the batch sets with the function drush_backend_batch_process() (6). This is a drop-in replacement for Drupal's existing batch_process() function; it processes a Drupal batch by spawning multiple Drush processes.

Finally, we show information to the user and log some information for a later use.

That's it! We can now test our brand new Drush 9 custom command!

To do so, just clear the cache with drush cr and launch the command with drush update:node.
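
For example, to process all published nodes of the page content type, we pass the type as the argument:

drush update:node page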

3. Bonus: Run the Drush command from crontab

As noted before, we want to run this custom command from crontab to perform the update automatically at the scheduled times.

The steps taken may vary based on your server's operating system. With Mac, Linux and Unix servers, we manage scheduled tasks by creating and editing crontabs that execute cron jobs at specified intervals.

1. Open a terminal window on your computer and enter the following command to edit your crontab - this will invoke your default editor (usually a flavor of vi):

crontab -e

2. Enter the following scheduled task with our Drush command (where [docroot_path] is your server's docroot):

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * drush --root=[docroot_path] update:node

We need to set the PATH environment variable on the first line of the crontab, because cron jobs run with a minimal environment and would not otherwise find the drush executable.
This command will run our custom Drush command every five minutes.

3. Restart the cron service with the following command:

systemctl restart cron

4. Check if the command is running
We can check the log of our Drupal application every five minutes, since we log some information within the command itself, but we can also check cron's output with the following command:

sudo tail -f /var/mail/root

Recap.

- We created a class to host our two main callback methods for batch processing (BatchService.php).
- We created a custom Drush 9 command to retrieve nodes and to create and process the batch sets, injecting core services such as entity_type.manager to access the nodes to process and logger.factory to log some pre-process and post-process information.
- We created a crontab task to run the Drush command automatically at scheduled times.

This way, we can now run heavy processes on a regular basis without putting undue stress on the server.
