Jun 05 2019

Guzzle makes HTTP requests easy. When they work, it's like magic. However, as with all coding, getting something to work requires debugging, and this is where the Drupal implementation of Guzzle has a major usability problem - any returned messages are truncated, meaning that with the default settings, error messages that can help debug an issue are not accessible to the developer. This article will show developers how they can re-structure their Guzzle queries to log the full error to the Drupal log, instead of a truncated error that does not help fix the issue.

Standard Methodology

Generally, a Guzzle request is made using a try/catch paradigm, so that the site does not crash in the case of an error. When not using try/catch, a Guzzle error will result in a White Screen of Death (WSOD), which is as bad as it gets for usability. So let's take a look at an example of how Guzzle would request a page using a standard try/catch:

try {
  $client = \Drupal::httpClient();
  $result = $client->request('GET', 'https://www.google.com');
}
catch (\Exception $error) {
  $logger = \Drupal::logger('HTTP Client error');
  $logger->error($error->getMessage());
}

This code will request the results of www.google.com, and place them in the $result variable. In the case that the request failed for some reason, the system logs the result of $error->getMessage() to the Drupal log.

The problem, as mentioned in the intro, is that the value returned from $error->getMessage() contains only a truncated version of the response returned from the remote website. If the developer is lucky, the text shown will contain enough information to debug the problem, but that is rarely the case. Often the error message will look something like this:

Client error: `POST https://example.com/3.0/users` resulted in a `400 Bad Request` response: {"type":"http://developer.example.com/documentation/guides/error-glossary/","title":"Invalid Resource","stat (truncated...)

As can be seen, the full response is not shown. The actual details of the problem, and any suggestions as to a solution, cannot be seen. What we want is for the full response details to be logged, so that we have accurate information as to what happened with the request.

Debugging Guzzle Errors

In the code shown above, we used the catch statement to catch \Exception. Generally, developers will create exception classes that extend \Exception, allowing specific errors to be caught first, with \Exception caught last as a generic fallback.

When a Guzzle request fails, Guzzle throws an exception implementing GuzzleHttp\Exception\GuzzleException. Failed HTTP responses specifically throw GuzzleHttp\Exception\RequestException (or one of its subclasses), and catching that exception first allows us to create our own log entry containing the full response from the remote server.

We can do this because RequestException provides the response object from the original request, which we can use to get the actual response body the remote server sent with the error. We then log that response body to the Drupal log.

use Drupal\Component\Render\FormattableMarkup;
use GuzzleHttp\Exception\RequestException;

try {
  $response = $client->request($method, $endpoint, $options);
}

// First try to catch a RequestException. This indicates a failed response from the remote API.
catch (RequestException $error) {

  // Get the original response, if one was received. Some request failures
  // (such as network errors) happen before any response exists.
  $response = $error->getResponse();

  // Get the info returned from the remote server, falling back to the
  // exception message when no response was received.
  $response_info = $response ? $response->getBody()->getContents() : $error->getMessage();

  // Using FormattableMarkup allows for the use of <pre/> tags, giving a more readable log item.
  // If the response body is not JSON, log the raw body instead.
  $message = new FormattableMarkup('API connection error. Error details are as follows:<pre>@response</pre>', ['@response' => print_r(json_decode($response_info) ?? $response_info, TRUE)]);

  // Log the error.
  watchdog_exception('Remote API Connection', $error, $message);
}

// A non-Guzzle error occurred. The type of exception is unknown, so a generic log item is created.
catch (\Exception $error) {
  // Log the error.
  watchdog_exception('Remote API Connection', $error, t('An unknown error occurred while trying to connect to the remote API. This is not a Guzzle error, nor an error in the remote API, rather a generic local error occurred. The reported error was @error', ['@error' => $error->getMessage()]));
}

With this code, we have caught the Guzzle exception, and logged the actual content of the response from the remote server to the Drupal log. If the exception thrown was anything other than a RequestException, we catch the generic \Exception class, and log the given error message.

By logging the response details, our log entry will now look something like this:

Remote API connection error. Error details are as follows:

stdClass Object (
  [title] => Invalid Resource
  [status] => 400
  [detail] => The resource submitted could not be validated. For field-specific details, see the 'errors' array.
  [errors] => Array (
    [0] => stdClass Object (
      [field] => some_field
      [message] => Data presented is not one of the accepted values: 'Something', 'something else', or 'another thing'
    )
  )
)

* Note that this is just an example, and that each API will give its own response structure.

This is a much more valuable debug message than the original truncated message, which left us understanding that there had been an error, but without the information required to fix it.

Summary

Drupal 8 ships with Guzzle, an excellent HTTP client for making requests to other servers. However, the standard debugging method doesn't provide a helpful log message from Guzzle. This article shows how to catch Guzzle errors, so that the full response can be logged, making debugging of connection to remote servers and APIs much easier.

Happy Drupaling!

May 16 2019

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code and configuration.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

  • Part 1: The Configuration API
  • Part 2: How the API works
  • Part 3: Using the API
  • Part 4: Extending the API with contributed modules
  • Part 5: Module developers and the API

This article will focus specifically on how developers can manage, declare, and debug configuration in their custom modules.

Configuration Schema

Configuration schema describes the type of configuration a module introduces into the system. Schema definitions are used for things like translating configuration and its values, for typecasting configuration values into their correct data types, and for migrating configuration between systems. Having configuration in the system is not as helpful without metadata that describes what the configuration is. Configuration schemas define the configuration items.

Any module that introduces any configuration into the system MUST define the schema for the configuration the module introduces.

Configuration schema definitions are declared in [MODULE ROOT]/config/schema/[MODULE NAME].schema.yml, where [MODULE NAME] is the machine name of the module. Schema definitions may define one or multiple configuration objects. Let's look at the configuration schema for the Restrict IP module for an example. This module defines a single configuration object, restrict_ip.settings:

restrict_ip.settings:
  type: config_object
  label: 'Restrict IP settings'
  mapping:
    enable:
      type: boolean
      label: 'Enable module'
    mail_address:
      type: string
      label: 'Contact mail address to show to blocked users'
    dblog:
      type: boolean
      label: 'Log blocked access attempts'
    allow_role_bypass:
      type: boolean
      label: 'Allow IP blocking to be bypassed by roles'
    bypass_action:
      type: string
      label: 'Action to perform for blocked users when bypassing by role is enabled'
    white_black_list:
      type: integer
      label: 'Whether to use a path whitelist, blacklist, or check all pages'
    country_white_black_list:
      type: integer
      label: 'Whether to use a whitelist, blacklist, or neither for countries'
    country_list:
      type: string
      label: 'A colon separated list of countries that should be white/black listed'

The above schema defines the config object restrict_ip.settings which is of type config_object (defined in core.data_types.schema.yml).

When this module is enabled, and the configuration is exported, the filename of the configuration will be restrict_ip.settings.yml. This object has the keys enable, mail_address, dblog etc. The schema tells what type of value is to be stored for each of these keys, as well as the label of each key. Note that this label is automatically provided to Drupal for translation.

The values can be retrieved from the restrict_ip.settings object as follows:

$enable_module = \Drupal::config('restrict_ip.settings')->get('enable');
$mail_address = \Drupal::config('restrict_ip.settings')->get('mail_address');
$log = \Drupal::config('restrict_ip.settings')->get('dblog');

Note that modules defining custom fields, widgets, and/or formatters must define the schema for those plugins. See this page to understand how the schema definitions for these various plugins should be defined.

Default configuration values

If configuration needs to have default values, the default values can be defined in [MODULE ROOT]/config/install/[CONFIG KEY].yml where [CONFIG KEY] is the configuration object name. Each item of configuration defined in the module schema requires its own YML file to set defaults. In the case of the Restrict IP module, there is only one config key, restrict_ip.settings, so there can only be one file to define the default configuration, restrict_ip/config/install/restrict_ip.settings.yml. This file will then list the keys of the configuration object, and the default values. In the case of the Restrict IP module, the default values look like this:

enable: false
mail_address: ''
dblog: false
allow_role_bypass: false
bypass_action: 'provide_link_login_page'
white_black_list: 0
country_white_black_list: 0
country_list: ''

As can be seen, each of the mapped keys of the restrict_ip.settings config_object in the schema definition is added to this file, with the default value provided for each key. If a key does not have a default value, it can be left out of this file. When the module is enabled, these are the values that will be imported into active configuration as defaults.

Debugging Configuration

When developing a module, it is important to ensure that the configuration schema accurately describes the configuration used in the module. Configuration can be inspected using the Configuration Inspector module. After enabling your custom module, visit the reports page for the Configuration Inspector at /admin/reports/config-inspector, and it will list any errors in configuration.

[Screenshot: the Configuration Inspector report listing errors in configuration schema definitions]

Clicking on 'List' for items with errors will give more details as to the error.

[Screenshot: the 'enable' key has an error in its schema - the stored value is a boolean, but the configuration schema defines a string]

Using the Configuration Inspector module, you can find where you have errors in your configuration schema definitions. Cleaning up these errors will correctly integrate your module with the Configuration API. In the above screenshot, the type of data in the active schema is a boolean, yet the configuration schema defines it as a string. The solution is to change the schema definition to boolean.

Summary

In this final article of this series on the Drupal 8 Configuration API, we looked at configuration schema, how developers can define this schema in their modules and provide defaults, as well as how to debug configuration schema errors. Hopefully this series will give you a fuller understanding of what the Configuration API is, how it can be managed, and how you can use it effectively in your Drupal projects. Happy Drupaling!

May 15 2019

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code and configuration.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

  • Part 1: The Configuration API
  • Part 2: How the API works
  • Part 3: Using the API
  • Part 4: Extending the API with contributed modules
  • Part 5: Module developers and the API

Part 1 gives the background of the Configuration API and discusses some terminology used within this article, Part 2 describes how the API works, and Part 3 explains how to use the functionality provided by core, so they are worth a read before beginning this article.

Read-only configuration

In some situations, site builders may want to prevent any configuration changes from being made on the production environment, preventing changes that may cause unexpected issues. For example, clients with admin access could log into the production server, and make what they think is an innocent configuration change, that results in unexpected and drastic consequences. Some site builders consider it to be a best practice to prevent configuration changes on the production server altogether, under the idea that only content should be editable on the production server, and configuration changes should only be made in development and/or staging environments before being tested and pushed to production.

The Config Readonly module allows configuration changes through the UI to be disabled on a given environment. It does this by disabling the submit buttons on configuration pages. The module also disables configuration changes using Drush and Drupal Console.

[Screenshot: a configuration form that has been disabled with the Configuration Readonly module]

Note: some configuration forms may still be enabled when using this module. Module developers must build their forms by extending ConfigFormBase for the Configuration Readonly module to do its magic. If the developer has built the form using other means, the form will not be disabled, and the configuration for that form can be changed through the admin UI.

To set up an environment as read-only, add the following line to settings.php, then enable the module:

$settings['config_readonly'] = TRUE;

After an environment is set as read-only, changes to configuration can only be made on other environments, then migrated and imported into the active configuration on the read-only environment.

Complete split (blacklist) configuration

Sometimes configuration needs to exist in some environments, but not in others. For example, development modules like the Devel module, or UI modules like Views UI and Menu UI (both Drupal core), should not be enabled on production environments, as they add overhead to the server while being unnecessary, since the production server should not be used for development.

A problem arises when configuration is exported from one environment, and imported into the production environment. All the configuration from the source environment is now the active configuration on the production environment. So any development modules that were enabled on the source environment are now enabled on the production environment. In the case of development modules like Devel, this may only add some overhead to the server, but imagine a module like the Shield module, which sets up environments to require a username and password before even accessing the site. If this module is accidentally enabled upon import on production, it will block the site from public access - a disaster!

The solution to this situation is to blacklist configuration. Blacklisted configuration is removed from the configuration export. This functionality is provided by the Configuration Split module, which allows for blacklisting configuration by module, by individual configuration key(s), and/or by wildcard.

Note that more detailed directions for creating blacklists can be found on the documentation page. The following is meant to give an overview of how blacklists work.

Blacklists are created as part of a configuration profile. Configuration profiles allow for 'splitting' (a divergence in) configuration between environments. Profiles may be created for environment types such as development, staging and production, allowing for configuration specific to those types of environments. Or profiles could be set up for public non-production environments, which would have the Shield module enabled and configured. While a development profile may apply to all development environments, not all development environments are on publicly accessible URLs, and therefore may not need the Shield module enabled.

When setting up a configuration profile, note that the folder name must be the same as the machine_name of the profile.

[Screenshot: Configuration Split profile settings]

Note that you must manually create the folder specified above, and that folder can and should be tracked using Git, so it can be used on any environment that enables the profile.

Configuration can then be set up to be blacklisted either by module, by configuration key, or by wildcard:

[Screenshot: complete split (blacklist) can be set by module, by configuration key, or by wildcard]

Finally, environments need to be set up to use a given profile. This is handled by adding the following line to settings.php on the environment:

$config['config_split.config_split.PROFILEMACHINENAME']['status'] = TRUE;

Where PROFILEMACHINENAME is the machine_name from the profile you created.

Although blacklisted configuration does not become part of the exported archive, it is not ignored altogether. When an environment has the profile enabled, upon export, blacklisted configuration is extracted, then written to the folder specified in the profile. The remaining configuration is written to the default configuration directory. When importing configuration, environments with the profile enabled will first retrieve the configuration from the default configuration directory, then apply any configuration from the folder specified in the profile. Environments not set up to use the profile ignore the configuration in the blacklisted directory altogether on both import and export.

This means that a developer can enable the Devel module on their local environment, blacklist it, then export their configuration. The blacklisted configuration never becomes part of the default configuration, and therefore the module will not accidentally be installed on environments with the configuration profile enabled.
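If you manage configuration with Drush, the Configuration Split module provides its own export and import commands that apply any enabled split profiles (command names per the module's Drush integration; csex and csim are the short aliases):

drush config-split:export
drush config-split:import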

Conditional split (grey list) configuration

Grey lists, also provided by the Configuration Split module, allow for configuration to differ by environment. With a blacklist (previous section), the configuration only exists in the active database configuration for environments that are set up to use the configuration profile containing the blacklisted configuration. With a grey list, the configuration exists in the active configuration in all environments, but the configuration profiles can be set up to allow environments to use differing values for the configuration.

Imagine an integration with a remote API requiring a username, password, and endpoint URL. The production server needs to integrate with the remote API's production instance, while other environments will integrate with the remote API's sandbox instance. As such, the values to be used will differ by environment:

Production Environment:

remoteapi.username: ProductionUsername
remoteapi.password: ProductionPassword
remoteapi.endpoint: https://example.com/api

Other Environments:

remoteapi.username: SandboxUsername
remoteapi.password: SandboxPassword
remoteapi.endpoint: https://sandbox.example.com/api

A grey list allows for the setup of these values by configuration profile.

You may remember that Part 3 of this series discussed overriding configuration in settings.php, and be thinking that a grey list sounds like the same thing. After all, the default values for the sandbox instance of the API could be set up as the configuration values, and the production values could be overridden in settings.php on the production environment, with the same end result.
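As a refresher, such a runtime override in settings.php on the production environment would look something like this (remoteapi.settings and its keys are hypothetical names used for this example):

// settings.php on the production environment (hypothetical config object and keys).
$config['remoteapi.settings']['username'] = 'ProductionUsername';
$config['remoteapi.settings']['password'] = 'ProductionPassword';
$config['remoteapi.settings']['endpoint'] = 'https://example.com/api';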

The difference is that with a grey list, the remote API values are saved to the configuration profile folder, which is tracked by Git, and therefore can be tracked and migrated between environments. When grey listed configuration is exported, the grey listed configuration is written to the configuration profile folder, in the same manner as blacklisted configuration. When configuration is imported, the default values are retrieved, and the grey list values are used to override the default values, after which the configuration is imported into active configuration.

With the configuration override method using settings.php, site builders need to store the various configuration values somewhere outside the project, communicating environment-specific configuration values to each other through some means, to be manually entered on the relevant environment(s). With a grey list, the configuration values are managed with Git, meaning site builders do not need to record them outside the project, nor communicate them to each other through some other means. Site builders simply need to enable the relevant configuration profile in settings.php, and the environment-specific values can then be imported into active configuration from the configuration profile directory. This means that the sandbox API values can be set up as the values used by default on all environments, and a production configuration profile can be enabled on the production environment using the values to connect to the production instance of the remote API.

Conditional split items can be selected either from a list, or by manually entering them into the configuration profile:

[Screenshot: conditional split (grey list) settings can be selected or manually entered]

Finally, note that grey lists can actually be used in conjunction with configuration overrides in settings.php. Grey lists are applied during import and export of configuration from the database. Values in settings.php are used at runtime, overriding any active configuration. So a developer could choose to set up their local instance of the system to connect to an entirely different instance of the remote API altogether by overriding the values in settings.php.

Ignoring configuration (overwrite protection)

Sometimes developers will want to protect certain configuration items in the database from ever being overwritten. For example, imagine a site named Awesome Site, with a module that supplies the core of the site, named awesome_core. Since this module provides the core functionality of the site, it should never be disabled under any circumstances. In this case, the configuration for this module can be set to be 'ignored'. Any attempts to import ignored configuration from the file system to the active configuration in the database will be skipped, and not imported.

Configuration can be ignored using the Config Ignore module. The functionality this module provides is similar to the functionality provided by the Config Readonly module discussed earlier, however the Config Readonly module covers the entire configuration of an environment, while the Config Ignore module allows for choosing configuration that should be protected. This configuration is protected by ignoring it altogether on import.

Configuration can be ignored as follows:

  1. Enable Config Ignore module on all environments.
  2. Navigate to the config ignore UI page, and set the configuration item to be ignored. In the case of preventing the awesome_core module from being disabled, the following would be added:
    core.extension:module.awesome_core
    Configuration to be ignored is entered one item per line. Wildcards can be used.

This setting will ensure that any attempts to change or remove core.extension:module.awesome_core upon configuration import will be ignored. So if the module is enabled on production, and a developer pushes configuration changes that would uninstall this module, those changes will be ignored, and the module will still be set as enabled after import.
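As an illustration, an ignore list for this scenario might contain entries like the following (all but the first entry are hypothetical examples; the last line uses a wildcard to ignore every configuration item matching the pattern):

core.extension:module.awesome_core
system.site:name
webform.webform.*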

Summary

In this article, we looked at various modules that extend the Configuration API, use cases behind these modules, and how the modules worked. We looked at the Config Readonly module, the Configuration Split module, and the Config Ignore module, and how to use these modules to manage configuration differences between environments. In the next, final part of this series, we will look at configuration management for module developers, and how developers can define the schema for the configuration their module provides.

May 13 2019

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code and configuration.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

  • Part 1: The Configuration API
  • Part 2: How the API works
  • Part 3: Using the API
  • Part 4: Extending the API with contributed modules
  • Part 5: Module developers and the API

Part 1 gives the background of the Configuration API, as well as discusses some terminology used within this article, so it's worth a read before beginning this article.

Active configuration is in the database

In Drupal 8, the configuration used at runtime is stored in the database, in the {config} table; these values are known as the active configuration. (In Drupal 7, configuration was known as settings, and was stored in the {variable} table.) The active configuration is used by Drupal at runtime when preparing responses.
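For example, the site name is part of the system.site configuration object shipped with core, and can be read from active configuration at runtime as follows:

$site_name = \Drupal::config('system.site')->get('name');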

Configuration is backed up to files

The Configuration API enables the ability to export the active database configuration into a series of YML files. These files can also be imported into the database. This means that a developer can create a new Field API field on their local development environment, export the configuration for the new field to files, push those files to the production environment, then import the configuration into the production environment's active configuration in the database.
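With Drush, that round trip looks something like the following (config:export and config:import are the Drush 9 command names, with the short aliases cex and cim):

# On the source environment, export active configuration to files:
drush config:export
# Commit and push the exported files, then on the target environment:
drush config:import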

The configuration values in the database are the live/active values, used by Drupal when responding to requests. The YML files that represent configuration are not required, and are not used at runtime. In fact, in a new system the configuration files don't even exist until/unless someone exports the active configuration from the database. The configuration files are a means to back up and/or migrate configuration between environments. Configuration files are never used at runtime on a site.

Configuration architecture

Let's look at the Configuration API on a more technical level, using a real-world example. The Restrict IP module allows users to set a list of rules that whitelist or blacklist users based on their IP address. Upon visiting the module settings page, users are presented with a checkbox that allows them to enable/disable the module functionality.

From a data standpoint, checkboxes are booleans; they represent either a true or false value. When exporting the configuration of a site with the Restrict IP module enabled, the relevant configuration key will be saved with a value of either true or false to a .yml file. Modules are required to define the schema for any configuration the module creates. Developers can look at the configuration schema declarations to understand what file(s) will be created, and what values are accepted.

Modules declare the schema for their configuration in the [MODULE ROOT]/config/schema directory. In the case of the Restrict IP module, the schema file is restrict_ip/config/schema/restrict_ip.schema.yml. This file contains the following declaration:

restrict_ip.settings:
  type: config_object
  label: 'Restrict IP settings'
  mapping:
    enable:
      type: boolean
      label: 'Enable module'

Schema declarations tell the system what the configuration looks like. In this case, the base configuration object is restrict_ip.settings, from the first line. When this configuration is exported to file, the file name will be restrict_ip.settings.yml. In that file will be a declaration of either:

enable: true

Or:

enable: false

When the file restrict_ip.settings.yml is imported into the active configuration in another environment's database, the value for the enable key will be imported as defined in the file.

On top of this, enabled modules are listed in core.extension.yml, which is the configuration that tracks which modules are enabled in a given environment. When the Restrict IP module is enabled in one environment, and configuration files exported from that environment are imported into a different Drupal environment, the Restrict IP module will be enabled due to its existence in core.extension.yml, and the setting enable will have a value of either true or false, depending on what the value was exported from the original environment.
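An abbreviated excerpt of core.extension.yml illustrates this (other enabled modules and themes are omitted; the number is the module's weight, and the standard profile is assumed):

module:
  restrict_ip: 0
profile: standard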

Note that if you were to try to import the configuration without having the Restrict IP module in the codebase, an error will be thrown and the configuration import will fail with an error about the Restrict IP module not existing.

Summary

In this article, we looked at how the Drupal 8 Configuration API works on a technical level. We looked at how active configuration lives in the database, and can be exported to files which can then be imported back into the database, or migrated and imported to other Drupal environments. In part 3 of the series, Using the API, we will look at how to actually use the Configuration API, as well as some contributed modules that extend the functionality of the Configuration API, allowing for more effective management of Drupal 8 projects.

May 10 2019

Background

We live in an age of Drupal complexity. In the early days of Drupal, many developers would have a single Drupal instance/environment (aka copy) that was their production site, where they would test out new modules and develop new functionality. Developing on the live website however sometimes met with disastrous consequences when things went wrong! Over time, technology on the web grew, and nowadays it's fairly standard to have a Drupal project running on multiple environments to allow site development to be run in parallel to a live website without causing disruptions. New functionality is developed first in isolated private copies of the website, put into a testing environment where it is approved by clients, and eventually merged into the live production site.

While multiple environments allow for site development without causing disruptions on the live production website, they introduce a new problem: how to ensure consistency between site copies so that they are all working with the correct code and configuration.

This series of articles will explore the Configuration API, how it enables functionality to be migrated between multiple environments (sites), and ways of using the Configuration API with contributed modules to effectively manage the configuration of a project. This series will consist of the following posts:

  • Part 1: The Configuration API
  • Part 2: How the API works (coming soon)
  • Part 3: Using the API (coming soon)
  • Part 4: Extending the API with contributed modules (coming soon)
  • Part 5: Module developers and the API (coming soon)

Terminology

Before we get started, let's review some terminology. A Drupal project is the overall combination of the core codebase and all of the environments that are used for development of the project. An environment is a copy/clone of the website that is accessible at a unique domain name (URL). When most users on the web think of a website, they think of the environment accessed at a single URL, like google.com or facebook.com. These URLs are the entry to the live production environments for these websites. However, in addition to the production environment, large projects will have additional environments, where code is developed and tested before being deployed to the production environment. The combination of these environments makes up the project. Drupal 8 is built to enable developers to migrate functionality for a project between environments, using the Configuration API.

Components of a Drupal environment

What makes up an environment of a Drupal project? In other words, when making a 'copy' of a Drupal site, what are the elements that need to be migrated/copied to create that copy? A Drupal environment consists of the following elements:

  1. Codebase: At the core of any Drupal system is codebase that makes up the 'engine' that runs Drupal. The codebase of any Drupal site consists of Drupal core, modules (both contributed and custom), and themes (again, both contributed and custom). The codebase is a series of files, and these files provide the functionality of a Drupal site.
  2. Configuration: Configuration is the collection of settings that define how the project will implement the functionality provided by the codebase. The codebase provides the abstract functionality to create things like content types, fields and taxonomies. Developers configure a Drupal site through the admin interface, to create the actual content types, fields, and taxonomies used in the project, that will allow end-users to do things like create pages or make comments. As developers build out the functionality of the project through the admin interface, the settings they choose are saved as the configuration for that environment. Configuration is environment-specific, and the Configuration API provides a means of migrating configuration between environments.
  3. Content: Content is the data on a site that is specific to the given environment. Content may be created by end-users, or by content admins. While content, like configuration, sometimes needs to be migrated between environments, content should be considered environment-specific. The Migrate API provides a means of migrating content between environments.
  4. Uploaded files: Most Drupal projects will have files uploaded as content. These are not part of the codebase that provides the functionality of the system, rather they are files that are to be provided to end users. Examples are images, audio/video files and PDFs.
  5. Third party technologies: Drupal 8 is a hub framework, built to integrate with 3rd party libraries, APIs, software, and other technologies. For example, a project may use a 3rd party CSS/JS framework, or swap out the default database caching backend for a Redis caching backend, which provides significant performance improvements. While these third party technologies are not part of Drupal, often the technologies will be required in each environment for the project to run.

The above five elements together make up a Drupal environment. The environment is a fully-functioning Drupal copy (instance), with its own configuration and content. Each environment will have its own domain (URL) at which the environment can be accessed.

Types of environments

Typical projects these days will have a minimum of three environments:

  1. Production environment: The main live environment that end users will use.
  2. Staging environments: One or more environments on which site owners can test new functionality or bug fixes before they are deployed to the production environment.
  3. Development environments: One or more environments where new functionality can be developed in isolation. Development environments may be publicly accessible, or may only exist on a developer's computer, not accessible to the outside internet.

Content and configuration - both stored in the database

A major issue that existed with all versions of Drupal core up to and including Drupal 7 was that content and configuration were both stored in the database, with no way to separate them. This made it difficult to manage configuration changes, such as adding a field to a content type, between multiple environments. There was no way to export the configuration for that single field, so to ensure consistency, full databases were migrated. The problem is that database migrations are an all-or-nothing venture; when migrating from one environment to another, the entire database of the source environment overwrote the entire database on the target environment, meaning all content and configuration was overridden on the target environment.

This opens up a problem though. Imagine this scenario:

  1. A developer copies the production database to their local computer.
  2. Someone creates new content on the production environment, which becomes part of the database.
  3. The developer adds a new field on their local environment.
  4. The developer migrates their local database to the production environment, overwriting all configuration and content on the production environment.

With the above scenario, the new field created on the developer's local environment is now on the production environment; however, the content created in step 2 is overwritten and lost. As such, the above process is untenable. Databases cannot be migrated 'up' (to production) after a site has gone live. They can only be migrated 'down' (from production).

To overcome this problem, a common way to make changes to configuration before Drupal 8 was to make configuration changes on the production environment, then copy the production environment's database down to the other environments. For example, if a developer was writing code that depended on a new field, rather than create the field on their local development environment, they would create the field on the production environment, then copy the production environment database down to their local development environment. Now the field exists on both the production environment as well as their development environment. This solution works, but is akin to killing a mosquito with a sledgehammer - it's overkill. The developer did not need the whole database, only the configuration for that single field. And any changes they had already made in their own environment, such as content for testing, is now wiped out with data from the production environment.

While this process worked (and still does continue to work for many projects), it was not ideal. It led to issues where database migrations had to be coordinated with clients and other developers to ensure that it was safe to wipe out content or configuration someone else may still be working with.

The Configuration API

The Drupal 8 Configuration API was created to overcome the above issue, so that developers could migrate configuration - such as Field API fields, content types, and taxonomies - between environments, without having to overwrite the entire database. It decoupled configuration from content. To understand how it makes things better, let's again review the components of a Drupal site, and the storage mechanisms behind them:

  • Codebase (files)
  • Configuration (database)
  • Content (database)
  • Uploaded files (files)

With the Configuration API, configuration can be exported into files, and these files can be migrated between environments and imported into the new environment. This means that we can now migrate individual items of configuration without having to overwrite the entire database. The Configuration API decouples configuration from content. In the next parts of this series, we will explore how the API works.

Note 1: the Drupal 7 Features module essentially does the same thing, and though the methodology is different, many of the concepts in the following articles will be relevant to that module as well.

Note 2: the Drupal 8 Migrate API was developed in parallel to the Configuration API, allowing for content also to be migrated, again without overwriting the entire database.

Summary

In this article, we looked at an overview of the Configuration API to understand what it is, and why it was created. After reading this article, you should have an understanding of what a Drupal environment is, and what it consists of. You should understand that in Drupal 8, configuration is stored in the database, but can be exported to files, which can then be used to migrate configuration between environments. In Part 2 - How the API works, we will take a closer, more technical look at how the API works, and in Part 3 - Using the API, we will discuss how to use the API, and some contributed modules that extend its functionality.

Jan 21 2019

As any developer working with Drupal 8 knows, working with Composer has become an integral part of working with Drupal. This can be daunting for those without previous experience working with command line, and can still be a confusing experience for those who do.

This is the fourth post in an explorative series of blog posts on Drupal and Composer, hopefully clearing up some of the confusion. The four blog posts on this topic will be as follows:

If you have not yet read parts 1 and 2, it is probably best to ensure you understand the concepts outlined in the summaries of those articles before moving forward with this post.

Composer for Drupal Developers

The final section in this series addresses how Drupal developers can integrate profiles, modules and themes with Composer, to manage 3rd party libraries and/or other Drupal modules used by their code. To understand when and why a developer would wish to do this, let's have a look at a use case.

Use Case 1: Instagram Integration

Imagine a developer building a module to integrate with Instagram, pulling the most recent images from an Instagram account and displaying them on the Drupal site. Instagram provides an API for retrieving this information, but before Instagram provides this data, the Drupal site is required to authenticate itself to Instagram, using the OAuth2 protocol. After authenticating, various API calls can be made to retrieve data from the API using the OAuth2 authentication token.

Up to Drupal 7, the Drupal way to do this would be to write a custom OAuth 2 integration (or use the OAuth2 Client module), then write custom functions that made the API calls to Instagram, retrieving the data.

Drupal 8 however, using Composer, acts like a hub, allowing for the inclusion of 3rd party libraries. This saves developers from having to re-create code within Drupal (aka rebuilding the wheel). For example, Instagram integration can be handled with the Instagram-PHP-API library. This library provides OAuth2 integration specific to Instagram, as well as various PHP wrappers that interact with Instagram's API to handle the retrieval of data from the API. Using Composer, our developer can integrate this 3rd party library into Drupal, and use it in their Drupal code. This saves our developer the time and effort of creating and testing the code that integrates with Instagram, as these tasks are handled by the library maintainers. If the library is well maintained, it will be updated in line with updates to Instagram's API, saving developer time (and therefore client money) in maintenance as well. And finally, as the library is open source, it will have the benefit of having eyes on it by PHP developers from various walks, rather than just Drupal developers, making for a stronger, more secure library.
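As a rough sketch of what this integration looks like, the class and method names below follow the Instagram-PHP-API documentation, while the key, secret, and callback values are placeholders:

use MetzWeb\Instagram\Instagram;

// Configure the client with the application's credentials (placeholders).
$instagram = new Instagram([
  'apiKey' => 'YOUR_APP_KEY',
  'apiSecret' => 'YOUR_APP_SECRET',
  'apiCallback' => 'https://example.com/instagram/callback',
]);

// Complete the OAuth2 handshake with the code Instagram sends to the
// callback URL, then fetch the account's ten most recent media items.
$data = $instagram->getOAuthToken($_GET['code']);
$instagram->setAccessToken($data);
$media = $instagram->getUserMedia('self', 10);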

Use Case 2: Drupal Module Dependency

Imagine that it is a requirement that the images retrieved from Instagram be shown in a popup (aka modal or lightbox). This can be handled using the Drupal Colorbox module. As such, the Colorbox module will be set as a dependency in the Instagram module's .info.yml file. When a site builder managing their Drupal project with Composer downloads/manages our developer's Instagram module with Composer, then tries to enable the Instagram module in Drupal, they will get an error from Drupal that the Colorbox module is missing. The problem here is that Drupal is being told the Colorbox module is a dependency, but we are not managing that dependency with Composer, and therefore the code has not been downloaded and does not exist.

At this point, it is easy to think that site builders could just add the Colorbox module to Composer with a require command. This would work, and after doing so, they would be able to enable the Instagram module, since the Colorbox module now exists. This opens up a problem, however: imagine the site builder later decides to remove our developer's Instagram module. They disable/uninstall it in Drupal, then use composer remove to remove it. Everything looks good - the module has been uninstalled from Drupal, and the code has been removed from the codebase. However, there is one more step to return the system to its original state; disabling/uninstalling the now unnecessary Colorbox module from Drupal and removing the code using Composer. The necessity of this additional step opens up situations where the module will be left on the system due to forgetfulness, a lack of documentation, or changing site builders, creating site bloat due to unnecessary code existing and being executed.

The solution to this is to also manage Drupal profile/module/theme dependencies with Composer. In our use case, our developer will list the Colorbox module as a dependency of the Instagram module not just in the Instagram module's .info.yml file, but also as a Composer dependency in composer.json. This way, when the Instagram module is added to a project using Composer, Composer will also download and manage the Colorbox module. And when the site builder removes the Instagram module, Composer will remove the Colorbox module as well (if no other libraries list it as a dependency).

Integrating Drupal Code with Composer

Step 1: Create a composer.json file

Integrating with Composer requires that a composer.json file be created in the root of the profile/module/theme (from here on out referred to as the Drupal library) folder. This is explained in detail on the Drupal.org documentation page Add a composer.json file. Once this file has been added to the Drupal Library, it should be validated. This can be done by running composer validate on the command line in the same directory that the composer.json file lives in.
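As a minimal example, a composer.json for the hypothetical Instagram module discussed above (package name drupal/insta) might start out like this, following the conventions in the Drupal.org documentation:

{
  "name": "drupal/insta",
  "description": "Displays recent images from an Instagram account.",
  "type": "drupal-module",
  "license": "GPL-2.0-or-later",
  "require": {}
}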

Note that Drupal.org automatically adds a composer.json file to projects that do not have one. This is so that Composer is able to manage all projects on Drupal.org, not just projects to which maintainers have explicitly added a composer.json file.

Step 2: Declare your dependencies to Composer

All library dependencies, whether 3rd party libraries or contributed Drupal libraries (modules etc.), need to be declared as dependencies to Composer. This is done by calling composer require [LIBRARY NAME] from within the folder of the Drupal library. This means that if you are adding dependencies for a module, you will navigate to the module folder, and add your dependencies. If you are adding dependencies for a theme, navigate to that folder, and add the dependencies there. The key point here is to not add your dependencies from the wrong folder, as they will be added to the composer.json file of the wrong package.

Adding Remote Library Dependencies

In the use case example for this article, the Instagram-PHP-API library was suggested for integration with Instagram. This library has the Composer package name cosenary/instagram. To set this library as a dependency of a module, navigate to the root of the module, and run the following:

composer require cosenary/instagram

Running this code results in the following:

  1. The cosenary/instagram library is added as a dependency to the composer.json file, so that Composer knows the package is managed by Composer. The composer.json file will contain something like the following:
     

    "require": {
      ...
      "cosenary/instagram": "^2.3",
      ...
    }

    Note that composer.json is committed to Git.

  2. The [MODULE ROOT]/vendor folder is created, and the Instagram-PHP-API library is downloaded into this folder. Composer by default creates the vendor folder in the same directory as the composer.json file. 

    Note that once the Drupal library has been pushed to a repository where it can be managed by Composer, installing the Drupal library with Composer will install the Instagram-PHP-API library to the project's vendor folder. As such, it's a good idea to delete the vendor folder in the module after the new composer.json and composer.lock files have been committed to the remote repository, before requiring the module with Composer.

    As a best practice, it is a good idea to create a .gitignore file in the module root, and include the vendor directory to ensure that it is never accidentally committed (see the example after this list). 

  3. The composer.lock file is created, locking the installed version of the Instagram-PHP-API library to the Instagram module. This ensures that users of the Instagram module are working with a compatible version of the Instagram-PHP-API library. Note that this file is also committed to Git.

  4. Any dependencies of the Instagram-PHP-API library, declared in its own composer.json file, are downloaded.

  5. All libraries and dependencies are checked for version conflicts.
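The .gitignore file mentioned in step 2 can be as simple as a single line:

vendor/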

Adding Drupal Library Dependencies

In the use case example for the article, the Drupal Colorbox module (aka library) was mentioned to be a dependency of the Instagram module, and therefore is to be managed with Composer. Adding Drupal modules as dependencies requires an additional step, due to their being hosted on a non-default repository. To add a Drupal library as a dependency:

  1. Edit the composer.json file and add the Drupal repository to it, so that Composer knows where to look for the Drupal libraries, including the Colorbox module:

    "repositories": [
      {
        "type": "composer",
        "url": "https://packages.drupal.org/8"
      }
    ]

    Now Composer knows where to look for the Colorbox module when the Instagram module is installed.  If developers try to declare Colorbox module as a dependency before this step, they will get an error that Composer cannot find the drupal/colorbox library.
     

  2. Add the module with composer require:

    composer require drupal/colorbox

    Note that this will download the Colorbox module to [MODULE ROOT]/vendor/drupal/colorbox. This is NOT a good thing, for while Drupal will be able to find the module in this location, it is not the standard location, and if another copy of the module ends up in one of the other directories that Drupal scans when looking for modules, this will create hard-to-debug issues. So either delete the Colorbox module in the vendor folder right away, or immediately move it to the location where the rest of your contributed modules are located, so its location is clear.
     

After this, both the composer.json file and the composer.lock file should be committed to Git. The module now has its dependencies declared on both the Instagram-PHP-API library and the Drupal Colorbox module. When site builders install the module to their Drupal project using Composer, the Instagram-PHP-API library will be downloaded to the project (site) vendor folder, and the Drupal Colorbox module will be installed to the web/modules/contrib folder.
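Putting the pieces together, the module's composer.json now contains something like the following (the version constraints shown are illustrative):

{
  "name": "drupal/insta",
  "type": "drupal-module",
  "repositories": [
    {
      "type": "composer",
      "url": "https://packages.drupal.org/8"
    }
  ],
  "require": {
    "cosenary/instagram": "^2.3",
    "drupal/colorbox": "^1.0"
  }
}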

Developing With Composer-Managed Drupal Libraries

Now that the module has been integrated with Composer, Composer can be used to manage the module during development. Let's imagine that the Instagram module being developed has the package name drupal/insta, and is on version 8.x-1.x. Downloading the -dev version of the module can be done by appending :1.x-dev when requiring the module:

composer require drupal/insta:1.x-dev

This will download the 8.x-1.x-dev version of the module to the web/modules/contrib folder, where it can be worked with. A .git folder will be included in the module root, so that changes can be committed to Git. Sounds great, but when our developer tries to push their commits, they will get an error, as the remote URL of the repository is not the repository for maintainers of the module and does not allow commits. As such, we need to change the Git repository's remote URL to the URL of the repository for maintainers, allowing for code to be pushed, and releases to be made.

To find the correct Git repository URL, first go to the module's download page on Drupal.org. Make sure you are logged in as a maintainer of the module. If you are a maintainer and have the correct permissions, you will see a tab called version control. Click this tab. Next, copy the Git URL on the page. It will look something like this:

git@git.drupal.org:project/[MODULE NAME].git

Next, navigate to your module folder, and run the following commands:

git remote rm origin
git remote add origin git@git.drupal.org:project/[MODULE NAME].git

The first line of this code removes the incorrect remote Git URL for non-maintainers, and the second line adds the correct Git remote URL for maintainers. After this, you will be able to push changes, including tags (releases).

You can also then easily switch between your development and release versions of the module by alternating the following:

composer require drupal/insta

composer require drupal/insta:1.x-dev

Note however that when doing any commits, you will need to switch up the remote URL each time you switch to the dev version of the module.

Summary

In this final part of the series on using Composer with Drupal, we have looked at how Drupal developers can integrate their custom code with Composer, to ensure that dependencies of the module are correctly managed. We have also looked at how to develop modules that are being managed with Composer. Using these techniques will allow site builders to keep a clean codebase that manages library conflicts.

And this concludes my four-part series on Drupal and Composer. I hope you have enjoyed it, and that it shows just how powerful Composer is, and how it can be effectively used to manage Drupal 8 sites. Happy Composing and Happy Drupaling!

Jan 03 2019
Jan 03

As any developer working with Drupal 8 knows, working with Composer has become an integral part of working with Drupal. This can be daunting for those without previous experience working with command line, and can still be a confusing experience for those who do.

This is the third post in an explorative series of blog posts on Drupal and Composer, hopefully clearing up some of the confusion. The four blog posts on this topic will be as follows:

If you have not yet read part 1 and part 2, then before reading through this post, it is probably best to ensure you understand the concepts outlined in the summaries of those articles.

Switching Management of your Drupal site to Composer

So you’ve worked your way through parts one and two of this series, and you now understand what Composer is, how it can be used to work with Drupal 8, and how to start a new Drupal 8 project using Composer. But you started your current project without using Composer, and want to switch to managing your project using Composer. Where do you start? This article discusses a few strategies for converting an existing site to being managed by Composer. Fortunately, some automated tools exist for converting existing sites, and in the situation that neither of those tools works, an overview is provided on how to manually convert an existing site.

But, before moving on to any of these methods...

Take a backup! As this process will be destructive to your system, make sure you take a backup of your file system, and take a backup of your database. Then go and check to ensure that you have a full backup of the file system, and a full backup of the database.

If you skip this step, or do not do it properly, you may end up with an entirely broken system, so don’t skip this step.

Method 1: Composerize (Composer plugin)

Composerize Drupal is a Composer plugin that has been built to convert existing Drupal installations to use Composer. Instructions are on the download page. If this method doesn't work, you can try the Composerize module, covered next.

Method 2: Composerize (Drupal module)

The Composerize module is a Drupal module that is built to convert existing Drupal installations to use Composer. At the time of writing, this module has the following disclaimer on the page:

This module is still in development. It supports very basic Drupal 8 setups, but there are many necessary features it still lacks (e.g., support for patches, JavaScript libraries, distributions, etc). We're working on all that stuff, but this module is definitely not ready for prime time.

If this method doesn't work, you'll likely have to manually convert your site.

Method 3: Manual method

If the above steps fail, your last option is to convert your installation to using the Drupal Composer Template manually. Note that pretty much every system will be different, so these instructions are an overview of the end goal, rather than a complete set of steps that will convert your system. There is a good chance you’ll run into issues along the way that are not covered here, so make sure you took that backup!

Converting a system to the template requires achieving the following goals:

  • Setting up the file structure of the system to match the Drupal Composer Template
  • Deleting old Composer files
  • Adding the Template file system to your Drupal system
  • Setting up the configuration directory and the private files directory
  • Setting up your profiles, modules and themes to be managed by Composer

Step 1: Convert your file structure to match the Drupal Composer Template file structure

The Drupal Composer Template is set up to manage the directory above the webroot, as well as the webroot. The webroot is located in the [PROJECT ROOT]/web folder. Therefore you will need this structure:

  • / - Project root. Contains various scaffolding files such as composer.json and composer.lock, as well as the configuration export folder, and the webroot folder
    • /web - The webroot. Contains all Drupal core, profile, module and theme files.

You'll need to set up your Drupal installation to match this structure, and make sure that the server is set up so that the web root is at [PROJECT ROOT]/web.

Note: Some servers require the webroot to be in a directory with a specific name, such as public_html, or www. In this case, you can try setting up symlinks from the required directory name to the /web directory, so that, for example, [PROJECT ROOT]/public_html is a symlink pointing at [PROJECT ROOT]/web. Your Drupal files will then reside in the /web directory (satisfying the Drupal template), and also be accessible at /public_html (satisfying the server requirements). If this does not work, another option would be to edit the composer.json file (which is added in step 3) and change any paths that point at the /web directory, to point at the directory name you are actually using.
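To make the symlink option concrete, here is a minimal sketch, assuming the project lives at /var/www/project and the server expects the webroot at public_html (both paths are hypothetical):

cd /var/www/project
ln -s web public_html

After this, requests served from public_html will transparently resolve to the /web directory.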

Step 2: Delete old Composer files

I'll say it again, because it needs to be said, make sure you took that backup in step 0!

If any of the following files or folders exist in your installation, delete them. Note however that you may want to save the composer.json file for reference if you've manually added any libraries into your existing installation using Composer.

  • [PROJECT ROOT]/vendor directory
  • [PROJECT ROOT]/composer.json file
  • [PROJECT ROOT]/composer.lock file
  • [PROJECT ROOT]/web/vendor directory
  • [PROJECT ROOT]/web/composer.json file
  • [PROJECT ROOT]/web/composer.lock file.

Step 3: Add the Template file system to your Drupal system

Go to the Drupal Composer Template repository on GitHub (https://github.com/drupal-composer/drupal-project), click 'Clone or download' and download the .zip file (or clone the Git repository if you prefer).

  1. Save/move the zip file into the project root folder (the directory above the 'web' folder you created above). You can then unpack it using the following command. Before this step, the file /composer.json should not exist. After this step, if you've done it correctly, this file will exist.
    tar -zxvf [FILE] --strip-components=1
  2. Run the following command from the project root folder. This command will install the Composer dependencies, as well as create the /vendor directory.
    composer install
  3. Run the following to ensure your database is up to date, and caches are cleared.
    drush updb; drush cr;

Step 4: Set up the configuration directory and the private files directory (Optional)

This next step is optional, however it will make for a more secure system. First, the following directories need to be created if they don't already exist:

  • [PROJECT ROOT]/config/sync
  • [PROJECT_ROOT]/private

The first folder is where exports of the Drupal 8 configuration system will be saved. The second folder is the private files folder. Creating both of these directories as siblings to the webroot adds security, as the files are not in a web-accessible location. The next thing to do is tell the system the location of these folders. This is done by declaring the folder paths in settings.php. You can do this by adding the following two lines to the bottom of settings.php:

$config_directories['sync'] = '../config/sync';
$settings['file_private_path'] = '../private';

After this, rebuild the cache (drush cr). You can confirm that the configuration directory was properly set by running drush cex sync, and then checking that there are .yml files in the [PROJECT ROOT]/config/sync directory. You can confirm that the private files folder was properly set by going to Admin -> Configuration -> Media -> File System, and confirming that the private files directory is listed as ../private.
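As a rough sketch, the same verification can be done from the command line (run from the project root, assuming Drush can bootstrap the site from there):

drush cr
drush cex sync
ls config/sync

If the configuration directory was set correctly, the ls command will show the exported .yml files.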

Step 5: Set up your profiles, modules and themes to be managed by Composer

The final step is to set up your Drupal profiles, modules and themes to be managed by Composer. The Drupal Composer Template tracks Core by default, but needs to be informed of the rest of your code. Note that if you do not want to use the most recent version of these profiles/modules/themes, you will need to alter the commands below to set the version you want to install.

Drupal profiles, modules and themes can be installed with the following command:

composer require drupal/[PACKAGE NAME]

For example, if you were using the Administration Toolbar module (admin_toolbar), you would run:

composer require drupal/admin_toolbar

After you have done this, ensure you are up to date with a DB update and cache clear:

drush updb; drush cr;

At this point, your system should be converted to the Drupal Composer Template, with contributed code being managed by Composer.

Summary

This article looks at converting existing Drupal 8 sites to being managed by the Drupal Composer Template. Doing this can potentially be automated using the Composerize Drupal Composer plugin or the Composerize module. In situations where this does not work, the manual directions in this article can be used as an alternative.

In the next and final part of this series, we'll look at how Drupal developers can integrate 3rd party libraries into their custom Drupal profiles, modules and themes.

Dec 03 2018
Dec 03

As any developer working with Drupal 8 knows, working with Composer has become an integral part of working with Drupal. This can be daunting for those who don't have previous experience working with the command line, and can still be a confusing experience for those who do. This is the second post in an explorative series of blog posts on Drupal and Composer, hopefully clearing up some of the confusion. The four blog posts on this topic will be as follows:

This article will be difficult to understand without first understanding the concepts explained in part 1, so if you have not read it, it would probably be worth your while to ensure you understand the concepts outlined in the summary of that article before proceeding with this one.

Beginning a New Project

Fortunately a lot of work has been put into creating a Composer base (called a template) for Drupal projects. This Drupal Composer template can be found on Github at: https://github.com/drupal-composer/drupal-project. Instructions on how to use the template can be found there, and the same instructions are found in the README.md file that comes with the project when installed.

Starting a new project with the Drupal Composer template can be done with the following line of code:

composer create-project drupal-composer/drupal-project:8.x-dev some-dir --stability dev --no-interaction

This command does a number of things, many of which will be addressed below.

1) A new Composer project is created

The first thing the installation does is to create the directory specified as some-dir in the command, and initializes a Composer project in that directory. Change this to the appropriate name for your project. This is now your new project. The project contains the composer.json and composer.lock files that will be used to manage the code base of the project.

Note: The command provided lists the --stability as dev. This can be confusing as developers may think this means they are getting the -dev version of Drupal core. Don't worry, the command given above will always install the current full release of Drupal core, whatever it is at the time when the command is run.

2) Library dependencies (as well as their dependencies) are installed

The Composer template has a number of dependencies included by default, some of which we will take a look at here. These libraries are set by default as requirements in composer.json, and therefore are included when running the install command given earlier.

  • Drush: Drush is cool. If you don’t know what it is, it’s worth some Google-fu. Anything to be written about Drush has already been written somewhere else, so check it out - it will be worth your while!
  • Drupal Console: Drupal Console is also really cool. See comments on Drush.
  • Composer Patches: This one is very cool. This library in and of itself is worth using Composer to manage Drupal projects in my eyes. Even if Composer had no other benefits, this one would be great. First, an explanation of a patch is necessary. A patch is kind of like a band-aid that can be applied to code. Patches allow developers to submit changes, be they bug fixes or new functionality, to the library maintainer. The library maintainer may or may not add the patch to the source code, but in the meantime, other developers can apply the patch to their own systems, both to test if it works, as well as use it if it does. However, when the library the patch has been applied to is updated to a newer version, the patches have to be re-applied. What Composer Patches does is allow developers to track patches applied to the project, and have them applied automatically during the update process. This ensures that bugs don't arise from forgetting to re-apply patches after the update. Patches are tracked by adding them to composer.json. Here is an example:

    "extra": {
      "patches": {
        "drupal/core”: {
          “Patch description”: "https://www.drupal.org/files/issues/someissue-1543858-30.patch"
        }
      }
    }

     

  • With the above code, the next time composer update drupal/core is run, Composer will attempt to apply the patch found at https://www.drupal.org/files/issues/someissue-1543858-30.patch to Drupal core. Note that the description of the patch, given above as "Patch description", is arbitrary, and should be something descriptive. If there is an issue for the patch, a link to the issue is good to add to the description, so developers can quickly look into the status of the patch, see if any updated patches have been released, and check if the patch has been incorporated into the library, rendering the patch unnecessary.

  • And Others: There are many other libraries that are included as part of the Drupal Composer template, but the truth is that I haven’t looked into them. Also note that Drupal core alone has multiple dependencies which have their own dependencies. At the time of writing, 123 libraries are installed into the project with the install command.

But wait - I don’t use [fill in library here]

The Composer template is just that - a template. Some people don’t use Drush, some don’t use Drupal console. If these are not needed, they can be removed from a project in the same manner as any Composer library. Example:

composer remove drush/drush

The above command will remove Drush from the code managed by the Composer template.

3) The system folder architecture is created

The file system created by the Composer template deserves a close look.

  • /web: The first thing to notice is that Drupal is installed into the /web directory in the root of the project. This means that the root of the project created with the Composer template is one level above the webroot. When configuring the server, the domain for the server will need to point at the /web directory.
  • /config/sync: The Composer template sets up the project to store Drupal 8 configuration .yml files in this folder. When running drush cex sync to export configuration, the entire site configuration will be exported to this folder. This folder is best kept out of the webroot for security purposes.
  • /drush: This folder holds a few Drush specific items in it. If multiple environments exist for your project, Drush aliases can be set in /drush/sites/self.site.yml, allowing for interaction with your various environments from anywhere within the project.
  • /scripts: At the time of writing, this folder contains only a single file, /scripts/composer/ScriptHandler.php. This is a really cool file that contains code run by Composer during various processes.

    The composer.json file in the Drupal Composer template contains the following:
     

    "scripts": {
      "drupal-scaffold": "DrupalComposer\\DrupalScaffold\\Plugin::scaffold",
      "pre-install-cmd": [
        "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
      ],
      "pre-update-cmd": [
        "DrupalProject\\composer\\ScriptHandler::checkComposerVersion"
      ],
      "post-install-cmd": [
        "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
      ],
      "post-update-cmd": [
        "DrupalProject\\composer\\ScriptHandler::createRequiredFiles"
      ]
    },

    The code above executes the stated code on pre-install, pre-update, post-install and post-update. Any time either composer install or composer update are executed on the system, the pre and post hook for that call are executed, calling the relevant functions above. Developers can create their own pre/post install and update hooks following the examples shown above.

  • /vendor: The first thing to note is that the location of this folder differs from a vanilla Drupal 8 installation. When installing Drupal manually, the vendor folder is part of the webroot by default. This however could lead to security issues, which is why the Composer template installs it above the webroot. The vendor folder contains most of the libraries that Composer manages for your project. Drupal core, modules, themes and profiles however are saved to other locations (to be discussed in the next section). Everything else is saved to the /vendor folder.

4) Drupal File/Folder Installation Locations are set

As mentioned above, Drupal core is installed into the /web folder. The Composer template also sets up installation locations (directories) for Drupal libraries, modules, themes and profiles so that when Composer installs these, they are put in the appropriate Drupal folder locations. This is the code in composer.json that handles the installation locations:

"extra": {
  "installer-paths": {
    "web/core": [
      "type:drupal-core"
    ],
    "web/libraries/{$name}": [
      "type:drupal-library"
    ],
    "web/modules/contrib/{$name}": [
      "type:drupal-module"
    ],
    "web/profiles/contrib/{$name}": [
      "type:drupal-profile"
    ],
    "web/themes/contrib/{$name}": [
      "type:drupal-theme"
    ],
    "drush/contrib/{$name}": [
      "type:drupal-drush"
    ]
  }
}

The most common library type that a Drupal developer will install will be Drupal modules. So let’s look at the line of code specific to the module installation location:

"web/modules/contrib/{$name}": [
  "type:drupal-module"
],

This line of code says that if the type of Composer library is drupal-module, then install it to the /web/modules/contrib/[MODULE MACHINE NAME] folder.

But how does Composer know that the library being downloaded is of type drupal-module? Well, the key to this is in how Composer libraries are managed in the first place. Throughout this article and the one that precedes it, we have repeatedly looked at the composer.json file that defines this project. Well, every Composer library contains a composer.json file, and every composer.json file that comes packaged with a Drupal module contains a type declaration as follows:

"type": "drupal-module",

When Composer is installing libraries, it looks for a type declaration, and when it finds one, if there is a custom install location set in composer.json for that type, the library is installed to the declared folder. Drupal themes are of type drupal-theme, Drupal profiles are of type drupal-profile, and so on, and they are installed to the folders declared in composer.json.

Managing Custom Drupal Code with Composer

Custom code for a project can be managed with Composer as mentioned in Part 1 of this series. Drupal convention generally separates contributed (3rd party) modules and custom modules into separate folders. To install custom code in the locations according to this convention, the following lines should be added to the installer-paths declaration in composer.json:

"web/modules/custom/{$name}": [
  "type:drupal-custom-module"]
,
"web/profiles/custom/{$name}": [
  "type:drupal-custom-profile"
],
"web/themes/custom/{$name}": [
  "type:drupal-custom-theme"
],

This code adds three additional library types for custom modules, custom profiles, and custom themes.

Next, you’ll need to add a composer.json file to the custom module/theme/profile. Directions for this can be seen here: https://www.drupal.org/docs/8/creating-custom-modules/add-a-composerjson-file. Note that for the step named Define your module as a PHP package, you should set the type as drupal-custom-[TYPE], where [TYPE] is one of: module, theme, or profile.
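As an illustration, a minimal composer.json for a hypothetical custom module might look like the following (the name and description here are placeholders):

{
    "name": "my/privatelibrary",
    "description": "A custom module for my project.",
    "type": "drupal-custom-module",
    "license": "GPL-2.0-or-later"
}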

Continuing on, make sure the composer.json file containing the type declaration has been pushed to the remote private repository.

The last step is to add your private repository to your project’s composer.json, so that when running composer require my/privatelibrary, Composer knows in which repository to look for the library. Declaring private repositories in composer.json is explained here: https://getcomposer.org/doc/05-repositories.md#using-private-repositories.
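As a sketch, a VCS repository entry in your project's composer.json might look something like this (the URL is hypothetical):

"repositories": [
    {
        "type": "vcs",
        "url": "https://bitbucket.org/my/privatelibrary.git"
    }
]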

With the above steps, when running composer require my/library, Composer will find the private repository declared in composer.json, search that repository for my/library, and download it. The composer.json file in my/library tells Composer that it’s of type drupal-custom-[TYPE], so Composer will install it into the directory specified for Drupal custom [TYPE].

If using Git for version control on your system, you'll probably want to alter the .gitignore file in the Composer project root to ignore the custom folder locations. If you have created a custom module, and will be managing all custom modules with Composer and private repositories, you should probably add the /web/modules/custom folder to .gitignore. If you will be managing some custom modules with Git and not Composer, then you should probably add the custom module you have created to .gitignore as /web/modules/custom/[MODULE NAME].
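As a sketch, the corresponding .gitignore entries for the two approaches might look like this (pick whichever matches your setup):

# All custom modules are managed by Composer:
/web/modules/custom/

# Or: only a single Composer-managed module is ignored:
/web/modules/custom/[MODULE NAME]/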

Managing site settings: settings.php and settings.local.php

This section isn’t actually directly related to Composer and Drupal, but it’s a good step for setting up a project, and we can use the methodology to work with Composer template and Git.

Each Drupal installation depends on the file settings.php. This file is loaded as part of Drupal’s bootstrap process on every page load. Site-specific settings are added into this file, such as database connection information.

Towards the bottom of settings.php, the following lines can be found:

# if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
# include $app_root . '/' . $site_path . '/settings.local.php';
# }

These lines are commented out by the # symbol at the start of each line. This means that the code is not executed. If these lines are uncommented, by removing the hash symbol at the start of each line, this code is executed. The code looks for a file named settings.local.php, and if it exists, it includes that file. This means any settings in settings.local.php become part of the bootstrap process and are available to Drupal. After uncommenting the code, it will look like this:

if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}

Why do this? This allows settings to be split into two types: settings that will be the same across all environments (eg. production, staging, local installations etc), and local settings that are only relevant to the current Drupal environment in which this file exists. This is done by committing settings.php to Git so it is shared amongst every environment (note - settings.local.php is NOT committed to Git, as it should not be shared). For example, the majority of Drupal installations have a separate database for each environment, meaning that database connection details will be specific to each environment. Therefore, database settings would be put into settings.local.php, as they are not shared between environments. The connection details to a remote API however may be the same regardless of environment, and therefore would be put into settings.php. This ensures any developer working on the system has access to the API details.
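As a sketch, after this split a settings.local.php might contain little more than the database connection (all values below are placeholders):

<?php

$databases['default']['default'] = [
  'database' => 'my_database',
  'username' => 'my_user',
  'password' => 'my_password',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
];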

After splitting up the settings this way, settings.php is committed to Git so it is tracked and shared between environments. 

The full process is as follows:

  1. Install the Drupal Composer template as described earlier in this article
  2. Uncomment the code in settings.php as explained above
  3. Install Drupal as normal
  4. Run git diff settings.php to see what has been added to settings.php as part of the installation process. Anything that shows up should be added to settings.local.php, and removed from settings.php. This will definitely be the database connection, but could be other settings as well.
  5. Edit .gitignore in the Composer project root, and remove this line:

    /web/sites/*/settings.php

    You can now add settings for all environments to settings.php and settings specific to the local environment to settings.local.php.

Setting Up the Private Files Directory

After setting up settings.php and settings.local.php as described above, you can now add the following to settings.php:

$settings['file_private_path'] = '../private';

Next, create the folder /private in the root of your Composer project. Finally clear the Drupal cache, which will create the file /private/.htaccess. At this point you can now add settings.php and the /private folder to Git. Finally, edit .gitignore and add the following:

# Ignore private files
/private/

This sets up the private file directories across all installations, saving developers having to set it up for each installation. Note that the .gitignore setting will ensure the contents of this folder are ignored by Git, as they should be.

What should I add to Git?

The Drupal Composer template project page states:

You should create a new git repository, and commit all files not excluded by the .gitignore file.

In particular, you will need to ensure you commit composer.json and composer.lock any time composer changes are made.

Is It Safe to Use Composer on a Production Server?

The actual answer to this question is not one I have. I am not a server guy overall, and for that matter, I’m not even that much of a Composer expert. It may be that Composer is entirely safe on a production server, but personally my thoughts are that having a program that can write files to the server from remote servers would seem to open up a potential security risk, and therefore it’s likely better to NOT have Composer on a production server. This comment may lead you to question why I would have wasted the time to write so much on Composer if it’s better to not use it in the first place. But not so fast partner! It really depends on the system architecture and the server setup. Some servers have been set up so that as part of the deployment process, the codebase is built using Composer, but then set up as a read-only file system or a Docker container or some other process ensuring security. This however is a particularly complex server set up. Fortunately there is an alternative for developers who are not working with servers configured in this fashion, which we'll look at next.

Using Composer, without having Composer installed on the production server

There is an in-between solution that allows us to use Composer to manage our projects, even with multiple developers, while not having Composer installed on the production server. In this case, we can use a hybrid of a Composer managed project and the old style of using pure Git for deployment.

First, we need to edit the .gitignore file in the root of our Composer installation. In particular, we need to remove the following code:

# Ignore directories generated by Composer
/drush/contrib/
/vendor/
/web/core/
/web/modules/contrib/
/web/themes/contrib/
/web/profiles/contrib/
/web/libraries/

The above directories are all created by Composer and managed by Composer, which is why they were originally ignored by Git. However, we will not be managing our production server using Composer, and therefore we want to include these folders into the Git repository rather than ignoring them.

After setting up your project, commit the folders listed above to Git. This ensures all the files that are managed by Composer will be part of Git. That way composer install never needs to be run on the production server, since any code that command would download will already be part of Git.

What this means now is that any time a developer on the project adds or updates code by running either composer update or composer install, they will need to commit not just the composer.json and composer.lock files, but also the added/updated source files that Composer manages, so that all code will be available on the production server when checking out the code from Git.
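As a rough sketch of that workflow for a developer updating a single module (the module name here is just an example):

composer update drupal/admin_toolbar
git add composer.json composer.lock web/modules/contrib/admin_toolbar
git commit -m "Update Admin Toolbar"
git push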

Updating Drupal using Composer

In part one of this series, I discussed library versions. I am not going to go deep into how the versioning works internally, but I’ll explain how updating works specific to Drupal core. At the time of writing, the current version of Drupal is 8.6.3. The dependency set in composer.json is for Drupal 8.6.*. The * at the end of this means that your project depends upon any version of Drupal 8.6, so when a minor version update comes out for 8.6, for example 8.6.4, Drupal core will be updated to Drupal 8.6.4 when composer update drupal/core is run.

However, when Drupal 8.7 is released, it will not be automatically installed, since it does not fit the pattern 8.6.*. To upgrade to Drupal 8.7, the following command is run:

composer require drupal/core:~8.7

The ~8.7 part of the above command tells Composer to use any version of Drupal 8.7, and records that new constraint in composer.json. This means that in the future, when you run composer update drupal/core, minor releases of the 8.7 branch of Drupal core will be installed.
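For reference, the only change this makes to the require section of composer.json is the constraint itself — a sketch:

"require": {
    "drupal/core": "~8.7"
}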

Summary

In this article, we have gone over how Composer is used with Drupal to make project deployment a little smoother, more stable, and consistent between environments. You should have an understanding of:

  • How to set up a new Drupal project using Composer
  • How the following folders relate to the project:
    • /config
    • /drush
    • /scripts
    • /web
    • /vendor
  • How Composer adds Drupal modules and/or themes
  • Drupal library dependencies
  • Managing custom Drupal code with Composer
  • Which items should be committed to Git
  • How to use Composer to manage Drupal projects where the production server does not have Composer
  • How to update Drupal using Composer

In the next post, coming soon, we'll look at how to convert an existing Drupal project to being managed by Composer.

Nov 25 2018
Nov 25

As any developer working with Drupal 8 knows, working with Composer has become an integral part of working with Drupal. This can be daunting for those without previous experience working with the command line, and can still be a confusing experience for those who do. This is the first post in an explorative series of blog posts on Drupal and Composer, hopefully clearing up some of the confusion. The four blog posts on this topic will be as follows:

So without further ado, let’s get started.

Composer: What is it?

The Wikipedia page (https://en.wikipedia.org/wiki/Composer_(software)) describes Composer as follows:

Composer is an application-level package manager for the PHP programming language that provides a standard format for managing dependencies of PHP software and required libraries.

That’s an accurate description, though a little wordy. So let’s break it down a little further to understand what it means.

Programmers like to use the term DRY - Don’t Repeat Yourself. This means that whenever possible, code should be re-used, rather than re-written. Traditionally, this referred to code within the codebase of a single application, but with Composer, code can now be shared between applications as well. DRY is another way of saying don’t re-invent the wheel; if someone else has already written code that does what you want to do, rather than writing code that does the same thing, it’s better to re-use the code that has already been written. For example, the current standard for authentication (aka logging in) to remote systems is the OAuth 2 protocol. This is a secure protocol that allows sites or applications to authenticate with other sites, such as Facebook, Google, Twitter, Instagram, and countless others. Writing OAuth 2 integrations is tricky, as the authentication process is somewhat complex. However, other developers have written code that handles OAuth 2 integration, and they have released this code on the internet in the form of a library. A library is basically a set of code that can be re-used by other sites. Using Composer, developers can include this library in a project, and use it to authenticate to the remote API, saving the developer from having to write that code.

Composer allows developers to do the following:

  • Download and include a library into a project, with a single command
  • Download and include any libraries that library is dependent upon
  • Check that system requirements are met before installing the library
  • Ensure there are no version conflicts between libraries
  • Update the library and its dependencies with a single command 

So how does Composer work?

Composer itself is a software/program. After a user has installed Composer, they can then say ‘Composer: download Library A to my system’. Composer searches remote repositories for libraries. A repository is a server that provides a collection of libraries for download. When Composer finds Library A in a repository, it downloads the library, as well as any libraries that Library A is dependent upon.

A note on terminology

In this article, the term Library is used. Libraries are also known as Packages, and referred to as such on https://getcomposer.org/

A project is the codebase, generally for a website or application, that is being managed by Composer. 

By default, the main repository Composer looks at is https://packagist.org/. This is a site that has been set up specifically for Composer, and contains thousands of public libraries that developers have provided for use. When a user says ‘Composer download Library A’, the Composer program looks for Library A on https://packagist.org/, the main public Composer repository, and if it finds the Library, it downloads it to your system. If Library A depends upon (aka requires) Library B, then it will also download Library B to your system, and so on. It also checks to make sure that your system has the minimum requirements to handle both Library A and Library B and any other dependencies, and also checks if either of these packages have any conflicts with any other libraries you've installed. If any conflicts are found, Composer shows an error and will not install the libraries until the conflicts have been resolved.

While packagist.org is the default repository Composer searches, projects can also define custom repositories that Composer will search for libraries. For example, many developers use Github or Bitbucket, popular services that provide code storage, to store their code in the cloud. A project owner can set up Composer to look for projects in their private Github, Bitbucket, or other repositories, and download libraries from these repositories. This allows for both the public and private code of a project to be managed using Composer.

What happens when I install a library?

Composer manages projects on a technical level using two files: composer.json and composer.lock. First we’ll look at the composer.json file. This file describes the project. If a developer is using private repositories, the repositories will be declared in this file. Any libraries that the project depends on are written in this file. This file can also be used to set specific folder locations into which libraries should be installed, or set up scripts that are executed as part of the Composer install process. It’s the outline of the entire project.

Each library has a name. The name is composed of two parts: first a namespace, which is an arbitrary string that can be anything, but is often a company name or a Github user name; the second part is the library name. The two parts are separated by a forward slash, and contain only lower case letters. Drupal modules are all part of the drupal namespace. Libraries are installed using Composer’s require command. Drupal modules can be installed with commands like:

// Drupal core:
composer require drupal/core
// Drupal module:
composer require drupal/rules
// Drupal theme:
composer require drupal/bootstrap

When the above commands are run, Composer downloads the library and its dependencies, and adds the library to the composer.json file to indicate that your project uses the library. This means that composer.json is essentially a metadata file describing the codebase of your project, where to get that code, and how to assemble it.
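As a sketch, after running composer require drupal/rules, the require section of your composer.json would contain an entry along these lines (the exact version constraint depends on the current release at the time):

"require": {
    "drupal/rules": "^3.0"
}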

Composer and Git, Multiple Environments and Multiple Developers

Composer and Git work really well with each other. To understand how, let’s first look at traditional site management using Git. Developer A is creating a new Drupal project, purely managed with Git:

  1. Developer A downloads Drupal core
  2. Developer A creates a new Git repository for the code they have downloaded, and commits the code to the repository
  3. Developer A pushes the code to a central repository (often Github or Bitbucket)
  4. Developer A sets up the production server, and checks out (aka pulls) the code to that server.

This all sounds good, and it actually works very well. Now let’s imagine that Developer B comes onto the project. Developer B uses Git to download the code from the central repository. At this point, the codebase in Git exists in four locations:

  • Developer A’s computer
  • Developer B’s computer
  • The central repository
  • The production server 

At the moment, the codebase only consists of Drupal core. The Drupal core code is being managed through Git, which would allow for changes to be tracked in the code, yet it’s very unlikely that either Developer A or Developer B, or indeed any other developers that come on the project, will actually ever edit any of these Drupal core files, as it is a bad practice to edit Drupal core. Drupal core only needs to be tracked by developers who are developing Drupal core, not by projects that are simply using it. So the above setup results in sharing and tracking a bunch of code that is already shared and tracked somewhere else (on Drupal.org).

Let’s look at how to start and use Composer to manage a project. Note that this is NOT the best way to use Composer to manage a Drupal site, and is simply an example to show how to use Composer (see part 2 of this series for specifics on how to use Composer to manage a Drupal site).

  1. Developer A creates a new project folder and navigates into it.
  2. Developer A initializes the project with composer init, which creates a composer.json file in the project folder
  3. Developer A adds the Drupal repository at https://packages.drupal.org/8 to composer.json, so that Drupal core, modules and themes can be installed using Composer
  4. Developer A runs composer require drupal/core, which installs Drupal core to the system, as well as any dependencies. It also creates composer.lock (which we'll look at further down the article)
  5. Developer A creates a new Git repository, and adds composer.json and composer.lock to the Git repository
  6. Developer A pushes composer.json and composer.lock to the central repository
  7. Developer A sets up the production server, and checks out the code to this server. At this point, the code consists only of the composer.json and composer.lock files. Additional servers can be set up by checking out the code to any server.
  8. Developer A runs composer install on the production server. This pulls all the requirements and dependencies for the project as they are defined in composer.json

Now when Developer B comes on the project, Developer B uses Git to download the codebase to their local computer. This codebase contains only composer.json and composer.lock. However, when they run composer install they will end up with the exact same codebase as the production server and on Developer A’s machine.

Now the codebase exists in the same four locations, however the only code being tracked in the Git repository is the two files used to define the Composer managed project. When an update is made to the project, it is handled by running composer update drupal/core, which will update both composer.json and composer.lock. These files are then updated in the Git repository, as they are the files specific to our project.

The difference between the traditional Git method, and the above method using Composer, is that now Drupal core is considered to be an external library, and is not taking up space unnecessarily in our project's Git repository.

Project Versions

Projects can, and pretty much always do, have versions. Drupal 8 uses semantic versioning, meaning that it goes through versions 8.1, 8.2, 8.3… and so on. At the time of writing the current version is 8.6.3. If a new security fix is released, it will be 8.6.4. In time, 8.7.0 will be released.  Composer allows us to work with different versions of libraries. This is a good thing, however it opens up the risk of developers on a project working with different versions of a library, which in turn opens up possibility of bugs. Composer fortunately is built to deal with versions, as we will look at next.

Tracking Project Versions

So how does Composer handle versions, allowing developers to ensure they are always using the same library versions? Welcome the composer.lock file. The composer.lock file essentially acts as a snapshot of the versions of all the libraries managed by composer.json. Again, I’ll refer back to the Composer managed site described above. When we first run composer require drupal/core in our project, a few things happen:

  1. The current (most recent) version of Drupal is downloaded to the system
  2. All libraries that Drupal depends on are also downloaded to the system
  3. composer.json is updated to show that Drupal is now a dependency of your project
  4. composer.lock is created/updated to reflect the current versions of all Composer managed libraries

So composer.json tracks which libraries are used, and composer.lock is a snapshot tracking which versions of those libraries are currently being used on the project. 

Synchronizing Project Versions

The problem with developers using different versions of libraries is that developers may write code that only works on the version of the library they have, while other developers either don’t yet have that version, or are using an outdated version of the library that others have since updated. Composer projects manage library versions using the commands composer install and composer update. These commands do different things, so next we'll look at the differences between them.

Composer Install and Composer Update

Imagine that Composer didn’t track versions. The following situation would happen (again, this is NOT how it actually works):

  1. Drupal 8.5.6 is released.
  2. Developer A creates a new project, and sets Drupal core as dependency in composer.json. Developer A has Drupal 8.5.6
  3. Drupal 8.6.0 is released
  4. Developer B clones the Git project, and installs the codebase using composer install. Composer downloads Drupal core. Developer B has Drupal 8.6.0

The two developers are now working on different versions of Drupal. This is dangerous, as any code they write/add may not be compatible with each other's code. Fortunately Composer can track library versions. When a user runs composer install, the versions defined in composer.lock are installed. So when Developer B runs composer install, Drupal 8.5.6 is installed, even though Drupal 8.6.0 has been released, because 8.5.6 is listed as the version being used by the project in composer.lock. As such, developers working on Composer managed projects should run composer install each time they pull updates from remote Git repositories containing Composer managed projects.

Updating versions

As has been discussed, the composer.lock file tracks the versions of libraries currently used on the project. This is where the composer update command comes in. Let’s review how to manage version changes for a given library (this is how it actually works):

  1. Drupal 8.5.6 is released.
  2. Developer A creates a new project, and sets Drupal core as dependency. The composer.lock file records the version of Drupal core used by the project as 8.5.6.
  3. Drupal 8.6.0 is released
  4. Developer B clones the Git project, and installs the codebase using composer install. The composer.lock file lists the version of Drupal core being used on the project as 8.5.6, so it downloads that version.
  5. Developer A sees that a new version of Drupal has been released. Developer A runs composer update drupal/core. Composer installs Drupal 8.6.0 to their system, and updates composer.lock to show the version of Drupal core in use as 8.6.0.
  6. Developer A commits this updated composer.lock to Git, and pushes it to the remote repository. 
  7. Developer B pulls the Git repository, and gets the updated composer.lock file. Developer B then runs composer install, and since the version of Drupal core registered as being used is now 8.6.0, Composer updates the code to Drupal 8.6.0.

Now Developer A and Developer B both have the exact same versions of Drupal on their system. And still the only files managed by Git at this point are composer.json and composer.lock.

Tying it all together

Developers should always run composer install any time they see that a commit has made changes in the composer.lock file, to ensure that they are on the same codebase as all other developers. Developers should also always run composer install anytime they switch Git branches, such as between a production and a staging branch. The dependencies of these branches may be very different, and running composer install will update all dependencies to match the current composer.lock snapshot. The composer update command should only be used to update to new versions of libraries, and the composer.lock file should always be committed after running composer update. Finally, any time a developer adds a new dependency to the project, they need to commit both the composer.json file and the composer.lock file to Git.

Summary

Before moving on to the next blog post in this series, you should understand the following:

  • What the composer.json file does
  • What the composer.lock file does
  • When to use composer install
  • When to use composer update
  • How Git and Composer interact with each other

In Part 2, we'll look specifically at building and managing a Drupal project using Composer.

Sep 27 2018
Sep 27

There are a couple of online tools and integration modules for adding a sharing widget to your site. They rely on JavaScript, and the security of your users is questionable. This article will show you how to create a simple yet flexible and safer sharing widget without a line of JavaScript.

Background

The main reason not to use tools like AddToAny is security. This is often the case for government or other public-facing projects such as GovCMS. The sharing widgets of these services do not connect directly to the social service; the request is processed on their servers first, and they can track users across the web using the fingerprint they have made. Another reason is that the JS code is often served from a CDN, so you don’t know when the code changes, or how. Have they put in some malicious script? I don’t want this on my site. And clients often don't either. :)

Thankfully, each service provides a simple way to share content, and we will use that.

Final example

You can see the final result in action, with different styling applied, at our example GovCMS 8 demo page (scroll down to the bottom of the page).

Site build

First we need to prepare the data structure. For our purpose we will need to create a custom block type, but it can be easily done as a paragraph too.

Custom block name: Social Share
Machine name: [social_share]

And throw in a few Boolean fields. One for each service.

Field label: [Human readable name] e.g. “Twitter”
Machine name: [machine_name] e.g. “social_share_twitter” – this one is important and we will use it later.

Go to the manage display screen of the block (/admin/structure/block/block-content/manage/social_share/display) and change the Output format to Custom. Then fill in the Custom output for TRUE with the text you like to see on the link e.g. "Share to twitter".

Now we are able to create a new block of the Social Share type and check some of these checkboxes. Users will see only the labels as a result.

Theming

The fun part is changing the output of the field from a simple label to an actual share link.
First we need to know what the final link looks like.
Example links:

  • Facebook: http://www.facebook.com/share.php?u=[PAGE_URL]&title=[PAGE_TITLE]
  • Twitter: http://twitter.com/intent/tweet?status=[PAGE_TITLE]+[PAGE_URL]
  • LinkedIn: http://www.linkedin.com/shareArticle?mini=true&url=[PAGE_URL]&title=[PAGE_TITLE]
  • E-mail: mailto:?subject=Interesting page [PAGE_TITLE]&body=Check out this site I came across [PAGE_URL]

To get this to work we need the current Page URL, the Page title, and the Base path. Only the page URL is directly accessible from the Twig template. The other two need to be prepared in preprocess. Let's add these in the theme_name.theme file.

/**
 * Implements template_preprocess_field().
 */
function theme_name_preprocess_field(&$variables, $hook) {
  switch ($variables['field_name']) {
    case 'field_social_share_twitter':
      $request = \Drupal::request();
      $route_match = \Drupal::routeMatch();
      $title = \Drupal::service('title_resolver')
        ->getTitle($request, $route_match->getRouteObject());
      if (is_array($title)) {
        $variables['node_title'] = $title['#markup'];
      }
      else {
        $variables['node_title'] = (string) $title;
      }
      $variables['base_path'] = base_path();
      break;
  }
}

As we will probably have more than one service, we should use the DRY approach here. So we create an extra function for the variable generation.

/**
 * Preprocess field_social_share.
 */
function _theme_name_preprocess_field__social_shares(&$variables) {
  $request = \Drupal::request();
  $route_match = \Drupal::routeMatch();
  $title = \Drupal::service('title_resolver')
    ->getTitle($request, $route_match->getRouteObject());
  if (is_array($title)) {
    $variables['node_title'] = $title['#markup'];
  }
  else {
    $variables['node_title'] = (string) $title;
  }
  $variables['base_path'] = base_path();
}

We then call it for the various cases. If some service needs more variables, it will be easy to add them in a different function, so we don't process what's not required.

/**
 * Implements template_preprocess_field().
 */
function theme_name_preprocess_field(&$variables, $hook) {
  switch ($variables['field_name']) {
    case 'field_social_share_facebook':
      _theme_name_preprocess_field__social_shares($variables);
      break;

    case 'field_social_share_twitter':
      _theme_name_preprocess_field__social_shares($variables);
      break;

    case 'field_social_share_linkedin':
      _theme_name_preprocess_field__social_shares($variables);
      break;

    case 'field_social_share_email':
      _theme_name_preprocess_field__social_shares($variables);
      break;
  }
}

Now we have the Node title and Base path prepared to be used in field templates.

Enable Twig debug and look in the markup for the checkbox. You will see a couple of suggestions; the one we are looking for is field--field-social-share-twitter.html.twig.

As the output should be a single link item, it is safe to assume we can remove the labels condition and the single/multiple check as well. On the other hand, we need to ensure that if the checkbox is unchecked, it will not output any value. That is particularly hard in Twig, as it doesn’t have any universal information about the state of the checkbox; it only has access to the actual value. And since we don’t know the value of the custom label, we cannot use it. However, there is a small workaround we can use. Remember, we have not set a custom output for the FALSE value.
We can check if the field is outputting any #markup. The empty FALSE value will not produce anything, hence the condition will fail.

{% if item.content['#markup'] %}

Here is the full code for the field template:

{% set classes = [
  'social-share__service',
  'social-share__service--twitter',
] %}
{% for item in items %}
  {% if item.content['#markup'] %}
    <a {{ attributes.addClass(classes) }} href="http://twitter.com/intent/tweet?status={{ node_title }}+{{ url('<current>') }}" title="Share to {{ item.content }}">{{ item.content }}</a>
  {% endif %}
{% endfor %}

For other services you will need to adapt it, but it will still follow the same pattern.

And we are done. Your block should now output links that share the current page to each service.

Pro tip:

So far we have not used any contrib modules. But obviously your client would like to have some fancy styling applied. You can add everything in the theme, but that will be only one hardcoded option. To make editors' lives easier, you can use the Entity Class Formatter module to easily add classes to the block from a select list. You can provide multiple select lists for Size, Color, Rounded corners, Style, etc.

Result

At this point we have the simple social share widget ready. We can select which predefined services will show in each instance, and how they will look. For example, on a blog post you can have sharing for Twitter, Facebook and Email styled as small rounded icons, while another instance of the block shows only a large squared LinkedIn icon + label on a Job offering content type.

Further notes

After I wrote the first draft of this article, a new module appeared which works in a very similar way. Give Better Social Sharing Buttons a try. It will be quicker to get up and running, as it has predefined styles and services, but that can be a drawback at the same time: if you need a different style or an extra service, it can be harder to add.

Nov 13 2017
Nov 13

13 November 2017

The Webform Mass Email module has now been ported to Drupal 8. Time to get emailing :)

The Webform module has long been a powerhouse for Drupal site builders, providing a flexible solution to one of the central problems of web development: collecting user data, storing it and notifying users when content is added. The Webform module has been extended in many directions, supporting ancillary workflows and use cases.

One such use case is sending a mass email to all subscribers to a Webform. This can be handy when users have signed up to an event or registered their interest in a topic. The Webform Mass Email module fills this gap.

The module works as follows

When installed, the Webform Mass Email module adds a new sub-tab called "Mass Email" under each Webform's "Results" tab (next to "Submissions", "Download" and "Clear"), and one settings sub-tab called "Mass Emails" under the global Webform "Configuration" tab (next to "Exporters", "Libraries" and "Advanced").

  1. The site builder navigates to the "Mass Emails" tab and sets the base module settings once.
    • "Cron time" - How much time is being spent per cron run (in seconds).
    • "Allow sending as HTML" - If checked, all emails are processed as HTML formatted.
    • "Log emails" - Should the emails be logged to the system log at admin/reports/dblog?
  2. The site builder can then assign "Send Webform Mass Email" permission for roles which are able to send mass emails.
  3. Anyone with this permission can then navigate to the "Mass Email" sub-tab of any Webform and send messages.
    • "Email field" - The field that is going to be used as the recipient's email address.
    • "Subject" - Message subject for your email.
    • "Body" - Message text for your email.
  4. After you hit the "Send emails" button, messages are inserted into a queue and sent when cron runs.

The module has now been ported to Drupal 8 and is being supported by the team at Morpht. Check it out and let us know what you think.

Nov 10 2017
Nov 10

The Entity Class Formatter is a nifty little module which allows editors to place classes on to entities, allowing their look and feel to be altered in the theme layer or with other modules such as Looks and Modifiers. It is now available for Drupal 8.

Entity Class Formatter is a humble little module; however, it does open up a lot of possibilities. The most obvious is to use the theme layer to provide styles for the class which has been applied. This makes it possible for the “designer” at design time to prepare some different styles to pick from. It is, however, possible to use the module in a more flexible way and combine it with Modifiers and Looks.

Once a class has been defined and added to a field, a “Look Mapping” can be defined, associating the class with a set of Modifiers. The site builder or skilled editor can then go in and define any number of Modifiers which will be fired with the class.

For example, a “my-awesome-class” class could be created which is wired into a “field_my_awesome” set of Modifiers. The Modifiers could include a blue linear gradient with a white text overlay and generous padding. All of this configuration happens in the UI after deployment, so the system can be adapted to the styles you wish to apply. It is a very flexible and powerful system.

Basic use of Entity Class Formatter

Using the module is easy. As an example, we will define our own class on an article.

The first thing we need to do is enable the module. Once installation is complete we can add our custom field. In this little tutorial we will add the class onto the Article content type. Go to Structure > Content types > Article > Manage fields and add a new text field. We can name the field simply "Class" and save it. As we keep all the defaults, we can save the next page as it is too.

The last thing we need to do is set the Entity Class formatter on the field on the Manage display page. Go to Structure > Content types > Article > Manage display and change the format to "Entity Class". No other manipulation of the field is needed, because the formatter won't render any values visible to visitors of the page.

That's it! Now we can go and create an article (Content > Add content > Article) and input a class into our field...

... and voila, the class is there!

Similar but different

There are a couple of modules out there which are similar, but different enough not to be totally suited to our requirements.

Classy Paragraphs, available in Drupal 8, has been influential in the Paragraphs ecosystem and has opened the way for creative designs. It was intended to apply to Paragraphs only and is quite structured in the way classes are applied through a custom field type. The Entity Class Formatter module is more flexible in that it has been designed to apply to all entity types. It can also handle a variety of field types (text, select lists, entity references) and is able to adapt to the data at hand. So, Entity Class Formatter has a similar aim - it is just somewhat more generic.

Field Formatter CSS Class, available in Drupal 7, also uses a field formatter approach to applying the class to the entity. It does have more complexity than this module because it deals with several layers of nesting. The Entity Class Formatter is very simple and only applies to the parent entity of the field.

Nov 07 2017
Nov 07


The Enforce Profile Field is a new module which allows editors to enforce the completion of one or more fields in order to access content on a site. It is now available for Drupal 8.

Sometimes you need to collect a variety of profile data for different users. The data may be needed for regulatory compliance or marketing reasons. In some cases you need a single field and in others it may be several. You may also wish to collect the information when a user accesses certain parts of the site.

The Enforce Profile Field module comes to the rescue in cases such as these, forcing users to complete their profile before being able to move onto the page they want to see. This may sound harsh; however, collecting data as you need it is more subtle than demanding it all at registration time.

The implementation consists mainly of a new field type called "Enforce profile" and an implementation of hook_entity_view_alter().
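
To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of check such a hook can perform. The field name (field_enforce_profile) and the behaviour of simply withholding the content are assumptions for illustration; the module's actual code differs and redirects the user to their profile form.

<?php

use Drupal\Core\Entity\Display\EntityViewDisplayInterface;
use Drupal\Core\Entity\EntityInterface;
use Drupal\user\Entity\User;

/**
 * Implements hook_entity_view_alter().
 *
 * Hypothetical sketch only: withholds enforced view modes until the
 * current user has completed the required profile fields.
 */
function mymodule_entity_view_alter(array &$build, EntityInterface $entity, EntityViewDisplayInterface $display) {
  // Only act on an enforced view mode of entities carrying the field.
  if ($display->getMode() !== 'full' || !$entity->hasField('field_enforce_profile')) {
    return;
  }
  $account = User::load(\Drupal::currentUser()->id());
  // Collect the enforced profile fields the user has left empty.
  $missing = [];
  foreach ($entity->get('field_enforce_profile') as $item) {
    if ($account->hasField($item->value) && $account->get($item->value)->isEmpty()) {
      $missing[] = $item->value;
    }
  }
  if ($missing) {
    // Replace the content with a prompt to complete the profile.
    $build = ['#markup' => t('Please complete your profile to view this content.')];
  }
}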

The module works as follows

  1. The site builder defines a “form display” for the user entity type and specifies the fields associated with it to collect data.
    1. The fields should not be required, as this allows the user to skip them on registration and profile editing.
    2. In addition the Form Mode Manager module can be used to display the “form display” as a "tab" on a user profile page.
  2. The site builder places an Enforce profile field onto an entity type bundle, such as a node article or page.
  3. The Enforce profile field requires some settings:
    1. A "User's form mode" to be utilized for additional field information extraction (created in the first step).
    2. An "Enforced view modes" that require some profile data to be filled in before being able to access them. You should usually select the "Full content" view mode and rather not include view modes like "Teaser" or "Search".
  4. The editor creates content, an article or page, and can then select which fields need to be enforced.
    1. The editor is provided with a multi-select of the "User's form mode" fields.
    2. Selecting nothing means no access change and no profile data enforcement.
  5. A new user navigates to the content and is redirected to the profile tab and is informed that they need to complete the fields.
  6. Fields are completed, form submitted and the user redirected back to the content.
    1. If the user doesn't provide all enforced fields, the profile tab is displayed again with a message listing the fields that still need to be filled in.

Why use the Enforce Profile Field to collect additional profile data?

  • You may need a customer's information to generate a coupon or access token.
  • You may just want to know better with whom you share information.
  • Your users know exactly what content requires their additional profile data input rather than satisfying a wide range of requirements during registration. It just makes it easier for them.
  • The new profile data can be synced to a CRM or other system if required.

Let us know what you think.

Nov 07 2017
Nov 07


The Webform Invitation module has now been ported to Drupal 8.

Most of us know how powerful the Webform module is in Drupal, allowing us to collect and retain data from users. It has been extended in many ways and is a vital part of most Drupal websites.

The Webform Invitation module extends Webform and allows submissions to be limited by invitation through the entry of a code, which is validated against a list of known codes. This pattern can be used to implement “coupons” and handle any situation where you need to limit who is able to submit.

All generated codes can be downloaded as an XLS spreadsheet with direct links to the Webform URL carrying a "code" parameter, so the code is automatically filled into the "Invitation Code" text field. You can use this spreadsheet with your favourite mass emailing service.

The module works as follows

When installed, the Webform Invitation module adds a new tab called "Invitation" to all Webforms (next to "View", "Test", "Results" and "Build"), with four sub-tabs: "Settings" (the default), "List codes", "Generate" and "Download".

  1. The site builder defines a new Webform, or updates an existing one, where invitations are needed.
  2. The site builder navigates to the "Invitation" tab and enables invitations for the current Webform.
    • This option adds a new "webform_invitation_code" text field element in the first position inside the "Build" section; you can adapt the position or other element details as required.
    • When the option is disabled, the "webform_invitation_code" element is removed.
  3. The Webform is now locked for submissions, because no invitation codes are available yet; the site builder needs to generate some by navigating to the "Generate" sub-tab and setting the number of codes to generate and their type (MD5 or custom). A rough sketch of MD5-style generation follows this list.
    • "Number of codes to generate" is 25 by default, but you can increase this number as needed.
    • "Type of tokens" defaults to an MD5 hash 32 characters long, but you can switch to custom tokens and set their length and character set.
    • After clicking "Generate", you are redirected to the "List codes" sub-tab, ready to send the invitation codes out so users can make submissions. You can monitor whether specific codes were used and for which submission.
    • The site builder can generate new codes of a different type at any time.
  4. If the site builder wants to export all unused invitation codes, they can do so on the "Download" sub-tab.
    • It provides an XLS spreadsheet with direct links to the Webform URL with the "code" parameter included.
    • The code is then automatically filled into the "Invitation Code" text field.
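
As promised, a rough sketch of the MD5 option: a 32-character code is simply an MD5 hash, so a batch can be generated from random bytes. This is an illustration only, not the module's actual implementation.

<?php

// Illustrative only: generate 25 MD5-style invitation codes
// (32 hexadecimal characters each), as described above.
$codes = [];
for ($i = 0; $i < 25; $i++) {
  $codes[] = md5(random_bytes(16));
}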

The Webform Invitation module is supported by the team at Morpht. Give it a try and let us know what you think.

May 10 2017
May 10

For a recent project we needed to allow users to comment on content via email. The idea was that when new comments were created, Drupal would send out notification emails that could be replied to. The system had to be able to handle long email chains and a few edge cases. Email is tricky and we learned a lot about the wild world of email. We are sharing our experiences here.

Mail Comment lets you reply via email to notifications from your website. In this article we’ll show you how to reply to content via email and outline the mechanisms that make it work. There are quite a few modules involved in the process and they all have an important role to play.

Modules

You will need the following modules installed:

  • Message and Message Notify
  • Rules
  • Mail Comment
  • Mailhandler
  • Feeds (with its comment processor)

The Workflow

Here’s a typical workflow and how each module fits into the process:

  1. User creates a comment
  2. Drupal sends a notification email to recipients (Rules + Mail Comment Message Notify)
  3. A recipient replies to this notification email via their email client
  4. The reply email is delivered to the “Reply-To” address, which points at our mailbox (Mailhandler)
  5. Drupal imports the email from this mailbox (Feeds importer)
  6. Drupal validates the email’s authenticity (Mail Comment)
  7. Drupal maps the email’s metadata to comment fields (Feeds + Mail Comment)
  8. Drupal creates a comment (Feeds Comment Processor)

The Steps

With the basic concepts in hand – let’s build out this functionality.

Create a Message type

The Message framework provides an entity type designed specifically for dealing with messages. Messages are great for creating and sending notifications. Create a message type via Structure -> Message types -> Add message type. This message type will represent a notification that occurs when someone posts a comment.

In the “Description” field label your message “New Comment”. The mandatory “Message text” field will become the notification email subject. The form requires a save before correctly displaying “Message text” fields and their associated view modes (a UI bug). Put any value in this field as we’ll be changing it after saving this form.

If you’ve previously enabled Mail Comment then you will see a very important checkbox “Allow users to reply to this message by email”. This has to be ticked.

Save the form and edit the newly created message type. Now you will see two special view modes which will be displayed as the notification email’s Subject and Body.

In “Message text -> View modes: Notify - Email subject” enter a subject:

New comment created on [nid]

In “Message text -> View modes: Notify - Email” enter a body:

[comment:user:mail] says:

[comment:value]

View:
[comment:node:url]

Save the form and then click “Manage fields”. Create two fields:

  • field_message_rendered_subject (Text field)
  • field_message_rendered_body (Text area)

These fields are used by Message Notify to save the output of the Subject and Body view modes for use with email.

Create a notification rule

Now that the message type is set up we can create a notification rule that is triggered whenever someone creates a comment.

  1. Create a rule that reacts on the event “After saving a new comment”.
  2. Add the action “Send Message with Message Notify”.
  3. In the “Data selector” select the comment that triggered this rule.
  4. Set “Rendered subject field” to “Rendered subject” and “Rendered body field” to “Rendered body”.
  5. Set “The recipient of the email” to any email address you’d like.

Configure Mail Comment

Mail Comment works its magic on both sides. It modifies outgoing emails so that they can be replied to, and authenticates incoming emails and extracts metadata. Mail Comment does this by embedding a unique string in the outgoing email that contains email metadata and a hash, which we will go into in more detail later.

Go to Configuration -> System -> Mail Comment (/admin/config/system/mailcomment).

Set “Reply-To address” to the email account that the Feeds importer will be importing from.
“Insert reply to text” is a handy option to have enabled. It will help Mail Comment separate the latest content in an email from the email’s thread. Other settings are fine to leave at their defaults.

Create a Mailhandler Mailbox

Email replies will be sent to a mailbox that you control. Mailhandler allows you to create a connection to this mailbox. A Feeds importer will then import emails from this mailbox via Mailhandler.

Create a mailbox via Structure -> Mailhandler Mailboxes -> Add (admin/structure/mailhandler/add).
The “Username” is likely the same value as Mail Comment’s “Reply-To” email address.

Create a Feeds importer

Drupal feeds importer

The Feeds importer pulls emails in from the Mailhandler mailbox. It runs via cron.

Create a Feeds importer via Structure -> Feeds importers -> Add importer. (/admin/structure/feeds/create)
Label it “New Comments”.

Fetcher

Configure the fetcher’s “Message filter” to “Comments only”. This filter works by checking whether an email’s In-Reply-To header is set. At the time of writing the filter doesn’t handle all email situations correctly. Please see this comment for a solution.
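
In essence, a more robust reply check looks at both the In-Reply-To and References headers, since some clients set only one of them. Here is a hypothetical sketch of that idea (not the actual Mailhandler filter), given a headers object such as the one returned by imap_headerinfo():

<?php

/**
 * Decides whether an email is a reply, and so a potential comment.
 *
 * Illustrative only: checks both In-Reply-To and References rather
 * than In-Reply-To alone.
 */
function _mymodule_is_reply($headers) {
  return !empty($headers->in_reply_to) || !empty($headers->references);
}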

Parser

Select “Mailhandler IMAP stream parser” with the “Authentication plugin” as “Mail Comment Authentication”. Mail Comment authenticates and validates the incoming email to ensure that it was originally sent from the website and replied to by a valid user.

Processor

Feeds’ importer processor creates the comment. Select “Comment processor” as your processor. For our particular project at Morpht we created messages instead of comments with “Message entity processor” and a custom module – but that’s another story.

The Mapping configuration here is where all the magic happens. Metadata is pulled from the email and exposed as a source value. The following source values need to be mapped from source to target:

  • Message ID (message_id) -> GUID (guid)
  • Parent NID (parent_nid) -> Node id (nid)
  • User ID (authenticated_uid) -> User ID (uid)
  • Subject (subject) -> Title (subject)
  • Body (Text) (body_text) -> Comment (comment_body)

Message ID and Parent NID are source values created by Mail Comment from the email’s metadata.

Summary

The steps above detail how to reply to notifications sent when someone creates a comment. Similar steps can be used to send notifications when someone creates a node. Create a new message type called “new_node” and follow the steps above, replacing the comment-specific configuration with node config.

Testing

Test the system by creating a comment on a node through your website’s UI. A notification email should be sent out to recipients, including your testing email address. Reply to this email, run cron and the comment should be imported. To assist your debugging, check out Content -> Messages and the Maillog module.

Dealing with email clients

If you’d like to know what’s going on under the hood, feel free to read below. There’s a reward for reading on. This section will also show you how to reply to long email chains and still have the email imported by Drupal. It also covers fixes for a few bugs in the current version of Mail Comment at the time of writing.

The Message-ID

<1234.4666.5056.1486965753.cb17604ac1434f792f04f724d2af7ee6@example.com>

Mail Comment stores important information in an email by creating a custom string full of metadata about the original comment. This string is sent in the email inside the Message-ID header. Message-ID is a standard email header that email clients use to insert identifying information about the email. Usually it’s a big hash but in our case it’s a dot separated string of values containing:

  • Node ID
  • Comment ID
  • User ID
  • Timestamp
  • Hash

The format is four numbers and one hash separated by dots, then an @ symbol and finally the domain that the email was sent from.
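
A parser for this format might look like the following sketch. The helper name is hypothetical and Mail Comment's own parsing differs; this is just to make the layout concrete.

<?php

/**
 * Splits a Mail Comment style Message-ID into its parts.
 *
 * Hypothetical helper: assumes the nid.cid.uid.timestamp.hash@domain
 * format described above.
 */
function _mymodule_parse_message_id($message_id) {
  // Strip the angle brackets and separate the metadata from the domain.
  $parts = explode('@', trim($message_id, '<> '), 2);
  if (count($parts) != 2) {
    return FALSE;
  }
  $metadata = explode('.', $parts[0]);
  if (count($metadata) != 5) {
    return FALSE;
  }
  return array(
    'nid' => $metadata[0],
    'cid' => $metadata[1],
    'uid' => $metadata[2],
    'timestamp' => $metadata[3],
    'hash' => $metadata[4],
  );
}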

Long email chains

Mail Comment’s metadata in the Message-ID header will be lost as soon as someone replies to an email. The email client replying to the email will overwrite Message-ID with its own value. Thankfully there is another email header specifically designed to store a history of all Message-IDs used in the email chain’s lifetime: References. The References header stores each Message-ID in chronological order like so:

<4431.5196.7356.1487571103.20e2a3d69bef550a990d7c751beb84de@example.com>
<CAPbhD2K4MvcCPqSNRhp33P+CoNkY5xyb+qNqeJBuhMrxeMCBsg@mail.gmail.com>
<CAPbhD2JZdgQj==1AxDWqaR2N=jrQtvrz3hQ-03b9jkgJg5eoFA@mail.gmail.com>

As you can see the original Message-ID is on the first line.

Email clients send emails in unexpected formats and can sometimes break the Message-ID metadata. This is especially true with Microsoft Outlook/Exchange, which inserts a seemingly random value at the beginning of the References header (which is actually the third value of the dot separated string).

7356
<4431.5196.7356.1487571103.20e2a3d69bef550a990d7c751beb84de@domain.com>
<CAPbhD2K4MvcCPqSNRhp33P+CoNkY5xyb+qNqeJBuhMrxeMCBsg@mail.gmail.com>
<CAPbhD2JZdgQj==1AxDWqaR2N=jrQtvrz3hQ-03b9jkgJg5eoFA@mail.gmail.com>

Our Patch

We have created a patch to better deal with oddly-formatted References headers and long email chains. Feel free to apply it.
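
The gist of the handling is to scan every bracketed token in the References header and keep the first one that matches Mail Comment's format, ignoring stray values such as Outlook's leading number. A simplified, hypothetical sketch of that idea (not the actual patch):

<?php

/**
 * Finds the Mail Comment Message-ID in a References header.
 *
 * Illustrative only: requires the Mail Comment format of four
 * dot-separated numbers, a 32-character hash and a domain.
 */
function _mymodule_extract_mailcomment_id($references) {
  preg_match_all('/<([^>]+)>/', $references, $matches);
  foreach ($matches[1] as $candidate) {
    if (preg_match('/^\d+\.\d+\.\d+\.\d+\.[0-9a-f]{32}@/', $candidate)) {
      return $candidate;
    }
  }
  return FALSE;
}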

HTML being sent as plain text

Microsoft Outlook/Exchange occasionally sends out badly formatted HTML in the plain text version of the email. There’s no silver-bullet solution. We came up with a simple fix that just strips out all the bad HTML, leaving you with the plain text.

In your custom module create the following function:

<?php

/**
 * Implements hook_feeds_after_parse().
 */
function mymodule_feeds_after_parse(FeedsSource $source, FeedsParserResult $result) {
  // Strip broken HTML from both the plain text and HTML bodies of each
  // parsed email before Feeds maps them to comment fields.
  foreach ($result->items as &$email) {
    $email['body_text'] = _mymodule_filter_html($email['body_text']);
    $email['body_html'] = _mymodule_filter_html($email['body_html']);
  }
}

This hook runs just after Feeds has parsed in the values, and calls the HTML filtering function below:

<?php

/**
 * Strips HTML, leaving readable plain text.
 */
function _mymodule_filter_html($text) {
  // Remove all tags.
  $text_remove = strip_tags($text);
  // Decode HTML special characters.
  $text_remove = html_entity_decode($text_remove, ENT_QUOTES);
  // Replace multiple empty lines with one.
  $text_remove = preg_replace('/\n(\s*\n)+/', "\n\n", $text_remove);
  return $text_remove;
}

Once these two functions are pasted in, clear Drupal’s cache (drush cc all).

Conclusion

Now you know how to create content in Drupal from email! The basic ingredients are Mail Comment, Feeds importer and a few modules gluing it all together.

You can use similar techniques to create nodes from email too. Just be mindful that there are security implications in doing this, as you can’t always trust the sender and contents of an email. Mail Comment authenticates incoming mail by creating a hash, and this is harder to do when importing an email without a Mail Comment generated Message-ID.
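
For the curious, one common Drupal 7 pattern for an authentication hash like this is to hash the metadata together with the site's private key, so that only the site itself can produce valid Message-IDs. The sketch below is an assumption for illustration; Mail Comment's actual scheme may differ.

<?php

/**
 * Computes an authentication hash over Message-ID metadata.
 *
 * Illustrative only: mixing in the site's private key means forged
 * Message-IDs fail validation.
 */
function _mymodule_metadata_hash($nid, $cid, $uid, $timestamp) {
  $data = implode('.', array($nid, $cid, $uid, $timestamp));
  return md5($data . drupal_get_private_key());
}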

As Drupal heads more and more towards being a REST based backend it’s easy to forget the more traditional means of inputting data. Email is ancient on the technology timeline, but its convenience coupled with Drupal’s well defined data structure can be a great combo. Happy mailing!


Nov 16 2016
Nov 16

The Entity Reference Display module allows editors of Drupal sites to select which view mode they would like to display a list of entities as.

The Paragraphs module has changed a lot of things in Drupal. The same can be said for Bean in Drupal 7 and Custom Blocks in Drupal 8. All of these entities open up a lot of freedom for editors as they allow for the embedding of components in a page. At Morpht we have pioneered these techniques and have written and presented on them extensively. The Paragraphs Demo website still stands as an example of what can be done with Paragraphs.

One of the key concepts we use for site building these days is that of the Node List, Media List and Item Block List. Each of these components is very simple, holding an unlimited entity reference allowing editors to add and sort entities as they wish. This approach is great for landing pages as items can be called out and placed as desired. You could think of it as a componentized version of Nodequeue and later Entityqueue.

But how are these items to be displayed? What view mode shall we use? How can editors select a teaser or a tile? That’s where Entity Reference Display comes in.

This module provides a select list interface to the editor, allowing them to select the view mode they would like to display the list as. This is a very simple interface and makes content creation a breeze for editors. We have found this to be a more pleasurable way to build than, say, adding items with IPE or embedding in the WYSIWYG, although that does have its place.

A final part of the recipe is to use the Entity Class Formatter module to also allow the editor to select the layout they would like to apply to the list. Give these two modules a try and see how you go with building better editor experiences for your users.

Aug 21 2016
Aug 21

Client-side performance can be dramatically improved by removing render-blocking JavaScript with LABjs. No more waiting and no more spinning-wheel icons for users to see. You get the speedy user experience of a simple site with all the advantages of a complex, functionality-rich site.

Have you ever needed to boost page speed? Has your boss asked you to improve SEO, PageSpeed ranking or client-side performance specifically? Modern websites contain a lot of JavaScript, which is why the Google PageSpeed tool provides you with hints such as:

“Your page has blocking script resources. This causes a delay in rendering your page.”

This is followed by a description like this:

None of the above-the-fold content on your page could be rendered without waiting for the following resources to load. Try to defer or asynchronously load blocking resources, or inline the critical portions of those resources directly in the HTML.

Remove render-blocking JavaScript:

        https://[script1].js
        https://[script2].js
        ...

Google pays a lot of attention to blocking script resources these days. The reason is user experience. PageSpeed advises you to defer or asynchronously load blocking JavaScript, or to inline the critical portions of JavaScript directly in the HTML.

There are a number of options available to those who want to speed up the rendering of the page: the async attribute, the defer attribute, inlining the critical portions of JavaScript, or a script loader such as LABjs.

However, not all of these options have the same effect. Some may even behave differently than expected.

When we were faced with this situation we wanted our solution to have cross-browser support and the best client-side performance results possible. We investigated the options and ended up with LABjs.

LABjs (Loading And Blocking JavaScript library)

LABjs is a JavaScript loader, used by many large sites such as Twitter, Vimeo, examiner.com. It loads and executes all scripts in parallel as fast as the browser will allow. In other words, it is an on-demand parallel loader for JavaScript with execution order dependencies support.

Kyle Simpson, the author of LABjs puts it this way:

“LABjs is almost 4 years old, and has been stable (no bug fixes/patches) for almost 2 years.”

Quoting from LABjs library github page:

“The defining characteristic of LABjs is the ability to load all JavaScript files in parallel, as fast as the browser will allow, but giving you the option to ensure proper execution order if you have dependencies between files.”

The LABjs library can be found on GitHub: https://github.com/getify/LABjs. There is also a Drupal LABjs contrib module, https://www.drupal.org/project/labjs, which helps to integrate the LABjs library into Drupal. It is not a big module, so fully comprehending what it does and verifying that it can be trusted wasn't hard.

After deploying LABjs on the Morpht website, we received a significant PageSpeed and YSlow rating boost and an even more obvious user experience improvement. The site feels like a simple website despite being built on Drupal 7.

The LABjs contrib module takes care of special cases and glitches, such as Google Analytics code or use of the Advanced CSS/JS Aggregation module.

LABjs usage example

<script src="https://www.morpht.com/blog/remove-render-blocking-javascript-labjs/LAB.js"></script>
<script>
  $LAB
  .script("http://remote.tld/jquery.js").wait()
  .script("/local/plugin1.jquery.js")
  .script("/local/plugin2.jquery.js").wait()
  .script("/local/init.js").wait(function(){
      initMyPage();
  });
</script>

First the LABjs library is loaded; then the global $LAB variable is used to schedule asynchronous, deferred loading of additional JavaScript files or inline code. All specified resources are downloaded asynchronously and their execution is deferred until the page has finished parsing. A specific execution order is required in some cases (quite often, in fact). The wait() call expresses an execution order dependency: resources specified after a wait() call will not be executed until the awaited resource has finished executing.

DOM content load speed cut by 34%

We found a significant speedup in the loading of the DOM content. The site we tested on was relatively simple and we would expect better results on more complex sites.

Before LABjs

Waterfall without LABjs
  • No asynchronous or deferred JavaScript loading is incorporated yet.
  • DOMContentLoaded: 413 ms
  • Page load: 604 ms

After LABjs

Waterfall with LABjs

Be careful

Asynchronous loading or deferred execution of JavaScript requires some caution. Using LABjs or any other third-party library is usually not enough; additional code changes may be required.

Inline JavaScript code is not allowed to have any dependencies on an asynchronously loaded JavaScript file without proper synchronization. If you have such dependencies, they need to be taken care of first. The above-the-fold content can have JavaScript dependencies, so inlining critical portions of JavaScript directly may be required. As an example we can imagine JavaScript code required to render some graphics. A more appropriate example could be an order button that needs to be active immediately after page load, but whose action requires JavaScript processing.

Other options

There are other options out there but they do have their shortcomings.

The HTML <script> async attribute can be used to load JavaScript asynchronously. The async attribute makes the loading of the JavaScript asynchronous, but the script is parsed and executed immediately after it is loaded, so HTML parsing is still blocked at that point. The order of script execution is also not guaranteed (first loaded is first executed), and full cross-browser support is not guaranteed.

The HTML <script> defer attribute can be used to defer the execution of JavaScript. The defer attribute causes the JavaScript to be loaded asynchronously and executed after HTML parsing is complete; the JavaScript is parsed immediately after it is loaded. Full cross-browser support is not guaranteed.

LABjs is cross-browser reliable. It parses and executes the JavaScript after the whole HTML DOM is parsed (the page loading icon indicates that the page is already fully loaded).

Bryan McQuade showed us how Google Developers eliminate render-blocking JavaScript and CSS in his video presentation.

[embedded content]

Conclusion

You’ve just seen how to use the LABjs library to turn render-blocking JavaScript into asynchronously loaded JavaScript with deferred parsing and execution. As a real-life example, the Morpht website was presented. Its DOM content load speed was cut by 34%. That’s a good result for a site that uses JavaScript only sparingly. Imagine what could be accomplished on a complex website with a lot of JavaScript in use.

Remember that inlining just the critical portions of JavaScript directly in the HTML and loading the rest asynchronously with postponed execution gives not only a client-side performance boost, but also an SEO boost that can make a difference.

My takeaway tip is to pay close attention to JavaScript code modularity so it can be easily separated based on the needs and use-cases required. Having good JavaScript means not only the functionality that’s needed, but also better user experience and better SEO ranking.

Helpful analytical tools

GTmetrix

  • https://gtmetrix.com/
  • Provides PageSpeed and YSlow reports together with a waterfall chart, page load time, total page size, number of requests made and similar information.

PageSpeed

YSlow

Appendix

There are terms, expressions and constructs that should be known before any action is taken to remove render-blocking JavaScript.

Client-side

Client-side refers to operations that are performed by the client in a client–server relationship. In a web application context this is the web browser running on a user's local computer.

Client-side behaviour, which forms the user's experience, is significantly influenced by the content received (the HTML DOM), the time between requesting and displaying the page, and the code that needs to be parsed and executed (such as JavaScript and CSS).

Server-side

Server-side refers to operations that are performed by the server in a client–server relationship. In a web application context this is the web server, running on a remote machine reachable from the user's local computer.

The server side is responsible for the HTTP response content. Most of the web application logic usually resides server-side. The speed and amount of processing required on the server side, together with the server’s connection speed, contribute to the overall responsiveness of the web application.

Client-side performance

Page load time is the sum of the time required to load the page content with all its external resources and the time required to parse and execute internal and external resources (HTML, CSS, JavaScript). Most pages today use JavaScript, and the degree of JavaScript usage on a site influences its page load time.

Performance is a ratio of the work done to the time it took. Client-side performance can be considered a ratio of the data downloaded and operations performed to the page load time. Quite often the notion of client-side performance is simplified to just page load time; from a user experience point of view this simplification works fine.

The way JavaScript is loaded, parsed and executed is one of the important factors in client-side performance, as it significantly influences page load time.

Above-the-fold content

When you land on a website, “above-the-fold” is any content you immediately see without scrolling the page. The “fold” is where the bottom of the screen is. Anything you have to scroll down to see is “below-the-fold”.

“Above-the-fold” is a term originally used for newspapers, referring to content above where the paper is folded horizontally.

Asynchronous JavaScript loading

By default, JavaScript execution is “parser blocking”: when the browser encounters an inline script in the document or scripts included via a script tag it must pause DOM construction, hand over the control to the JavaScript runtime and let the script execute before proceeding with DOM construction.

However, in the case of an external JavaScript file the browser will also have to pause and wait for the script to be fetched from disk, cache, or a remote server, which can add tens to thousands of milliseconds of delay to the critical rendering path.

Inlining the relevant external JavaScript can help to avoid extra network round trips.

Defer loading of JavaScript

The loading and execution of scripts that are not necessary for the initial page render may be deferred until after the initial render or other critical parts of the page have finished loading. Doing so can help reduce resource load burden and improve performance.

Quite often defer loading means that the resource is loaded asynchronously but the parsing and execution of that resource is deferred until the HTML DOM is loaded or the page has finished parsing.

Inlining the critical portions of JavaScript

A critical portion of JavaScript is any JavaScript code required to make the above-the-fold page content look and behave as intended immediately after the page is loaded. In principle, the critical portion of JavaScript you inline should be the bare minimum.

Extracting the bare minimum of JavaScript needed immediately after load does not seem to be common practice yet. That is going to change, and new approaches in JavaScript development can help (components, type safety, more structure and code separation, TDD, and compile-to-JavaScript languages like TypeScript and others: https://github.com/jashkenas/coffeescript/wiki/list-of-languages-that-co...).

Client-side vs Server-side rendering

Client-side rendering means JavaScript running in the browser produces HTML or manipulates the DOM.

Server-side rendering means when the browser fetches the page over HTTP, it immediately gets back HTML describing the page. It maintains the idea that pages are documents, and if you ask a server for a document by URL, you get back the text of the document rather than a program that generates that text using a complicated API.

Client-side JavaScript code should be optimized to be lightweight. The rule is to do as much on the server side as you possibly can. Such optimization can lead to a significant improvement in page load performance.

It is even possible to render JavaScript templates on the server to deliver fast first render, and then use client-side templating once the page is loaded.

There are exceptions, of course. Some calculations can be deliberately put into client-side JavaScript, with the intention of reducing server load for heavy operations that JavaScript can easily handle asynchronously in the background without impairing page load.

HTML <script> async attribute

The definition of the async attribute on w3schools.com is incorrect. It says:

“When present, it specifies that the script will be executed asynchronously as soon as it is available.”

In fact, what happens is that the async attribute tells the browser not to block DOM construction while it waits for the script to become available. The script is downloaded asynchronously, but when loading is finished the HTML parser is paused so the script can execute synchronously. This is still a huge performance win, though.

There are a few downsides:

  • Internet Explorer supports async attribute only from IE10 up.
  • Ordering is not preserved when the async attribute is used.
  • Any dependencies must be designed carefully. Synchronization could be required, as the exact timeline of operations and events is unknown upfront.

Example:

<script src="https://www.morpht.com/blog/remove-render-blocking-javascript-labjs/demo_async.js" async></script>

HTML <script> defer attribute

When the defer attribute is present, it specifies that the script is executed when the page has finished parsing.

Defer scripts are also guaranteed to execute in the order that they appear in the document.

Example:

<script src="https://www.morpht.com/blog/remove-render-blocking-javascript-labjs/demo_defer.js" defer></script>

Related videos

  • "Optimizing the Critical Rendering Path for Instant Mobile Websites - Velocity SC - 2013" by Ilya Grigorik

[embedded content]

Sep 15 2015
Sep 15

Creating responsive websites is easy these days with frameworks like Zurb’s Foundation and Bootstrap. However, it may not be as easy for content editors to lay out content responsively. Using the WYSIWYG API template plugin module, we can predefine templates in the WYSIWYG editor that help editors lay out content in a mobile-friendly website.

Creating custom templates in WYSIWYG


Install WYSIWYG API template plugin module

https://www.drupal.org/project/wysiwyg_template

Once you have installed the module, enable the WYSIWYG Template button in your WYSIWYG editor. Be sure to enable the “Insert templates” button, not “Templates”.

Enabling WYSIWYG template button

Create template

Define your templates in /admin/config/content/wysiwyg-templates 

Create templates

Make use of the grid system provided by your framework. The following is an example of a three-column Bootstrap layout.

<div class="row">
  <div class="col-md-4">Left</div>
  <div class="col-md-4">Middle</div>
  <div class="col-md-4">right</div>
</div>

Set up style in WYSIWYG

In order for your template to show up correctly in WYSIWYG, define the theme stylesheets in your WYSIWYG settings. You can add extra styling to enhance user experience for your editors.

%b%t/assets/stylesheets/screen.css, %b%t/assets/stylesheets/ckeditor.css
Define css in WYSIWYG

Responsive layout is now a click away

Template in WYSIWYG

Source code:


Tips

It is always a good idea to add a blank space at the end to separate your template from other elements. You can also use margins and padding in the styles as an alternative.

Sep 14 2015
Sep 14

Each year the Sydney Drupal Community puts on a two day "DrupalCamp", generally held over a weekend at a location where attendees can get away from it all and get down to some serious Drupal talk. This year we held DrupalGlamp Sydney on Cockatoo Island, located in the middle of Sydney Harbour.

DrupalGlamp Sydney 2015 was held last weekend and it went off pretty well. We had all the important things covered: the venue, the programme, internet, a bar, cafe and people! The weather even managed to hold out for us.

Overall we had:

  • 50 attendees,
  • 11 speakers,
  • 2 Drupal 8 training sessions,
  • 5 lightning talks,
  • 2 BOFs,
  • 2 gold sponsors (PNX, Catalyst),
  • 6 silver sponsors (Morpht, Acquia, Linux Australia, Infinity, Demonz, This Little Duck),
  • 2 supporting partners (Drupal Association, Xyber Data Recovery),
  • 10 people camping,
  • 1 bicycle.

Highlights

From my perspective there were a few highlights of the event which are worth mentioning:

We try hard to come up with an interesting venue. This year was no different and, thanks to a suggestion from Chloe, we were very happy to find the venue on Cockatoo Island. I think that everyone enjoyed the ferry ride over to the island and getting away from it all for a little while. We had a good number of people who were prepared to "rough it" and camp out on the Saturday night. A couple even made it for two. On the Saturday night the firepit was lit and about 15 of us enjoyed a few roasted marshmallows around the fire. The fire may not have been as big as Ivan expected (sorry about that mate) but it did the trick. This was Sydney at its best.

We got a good range of sessions this time around, with frontenders representing more than usual. We had talks on styleguide driven development, RequireJS, CSS units, and beautiful accessible CSS. Who says that all camps are backend focussed? :)

The Drupal 8 training provided by Kim and Magda was very well received, and the sessions were so well attended that they packed out the training rooms we had booked. This goes to show that there is a lot of interest in Drupal 8 knowledge and that people are receptive to training materials at a Camp, especially when it is free :) I think that this could serve as a blueprint for future Camps we put on.

Some reflections

This was the third Camp the Sydney community has put on in the last 15 months - all have been around the same size of 40 to 50 people, which seems to be the long term size of the community. Interestingly, there are many new faces coming along to the events these days, indicating that the community is being renewed as time goes by. There is perhaps a slight increase in momentum as we continue to put on events which have a good programme and are well attended. We hope to continue this momentum with the monthly meetups and socials which are held. If you haven't come along to a Sydney meetup yet, we'd love to see you there.

This time around we decided to go with the Drupal Association for the backer of the event. The DA provided us with a bank account and insurance to cover the venue hire. One important feature of the offering from the DA was the ability to retain any leftover "profits" in the account for future use by the community, after the deduction of a small 10% fee. Whilst we did not have the intention of making money from the event, the contributions from sponsors meant that we did have some left over funds. This money can now be used to back future events, making it much easier to commit to securing a venue next time. This does help the stability of the community and will hopefully allow us to put on more ambitious events over time.

We were pleasantly surprised by the number of sponsors the event managed to attract. Our two gold sponsors (PNX, Catalyst) were quick to sign up at the $1500 level. This was very important for securing the venue and giving us the confidence to proceed. The price point of $1000 proved too high for the silver sponsors and we decided to reduce it to $500. Dropping the price was a big hit, with 6 sponsors coming on board pretty quickly at this level. This was a good lesson in how to plan out future sponsorship levels. It's good to know that the support is there at the grassroots level, with many small to medium agencies prepared to help out.

As usual, attendees are quite slow to sign up to events. We tried to minimise this effect through the use of an early bird scheme which had a 20% saving on the tickets. This worked somewhat, but it seems that there is no avoiding the last minute rush on tickets. In the end we were very happy with the numbers we got, filling the venue whilst avoiding overcrowding. 

It's my belief that attendees are swayed by the speaker list. They want to know what they can expect from the event. Other Camps, such as Melbourne, take an unconference approach where the programme is decided on the day. We have some aspects of this, with BOFs and lightning talks; however, we do have a set programme of presentations. It appears that attendees start to sign up a lot more readily when they can see some of the programme taking shape. I would say that sorting the venue and then a few speakers early is one way to get things moving.

We do try to keep things interesting by having a variety of spaces where people can interact. The venue was pretty good for this, with a main presentation area, a coding and training room, a breakout area and a cafe just outside. This had all the bases covered and I would say that any future events we hold should have a similar mix.

What is the effort in putting on a Camp? For DrupalCamp Katoomba 2014 I tracked around 40 hours of time to organise it. This time around it was a little more involved. Magda (sponsors, speakers, accounts, misc) and I (speakers, venue, wifi, food, bar, insurance, misc) put in around 70 hours in the leadup to the event. This is a fair amount of work but it's well worth it when we get a good number of people coming along.

Massive thanks to Colin (for the logo), Chloe (for the site), Plamen (for the wifi), Ivan (for all round help) and Ian (for working the door). Big shout out to all of the speakers and trainers who took the time to prepare and present such great material.

We will no doubt be looking to put on a similar event next year. We would love for some new people to step up and bring their own energy and ideas into the mix, so if you feel up to it, please get in touch with any of the Sydney organisers and let them know that you would like to be involved. It's not so hard putting one of these things on and there are lots of people around who can help out and give advice.

Thanks to everyone who came along - I'm sure we all had a great time and learnt a thing or two as well.

Some pics

The full photo gallery on the Meetup page for the event has a stack of photos of the event - with 65 pics and counting. Here are a few selected images copied from the gallery with the permission of the photographers involved.

Feb 23 2015
Feb 23

One of the trickier aspects of any data migration is migrating users and ensuring that the authentication details which worked on the legacy site continue to work on the new Drupal site. Neither Drupal nor (most likely) the legacy system stores the password in plain text. A one-way hash, often salted, is used instead. This level of security is essential for ensuring that user passwords cannot be easily captured; however, it does present a challenge when the method of hashing is unknown.

This article will take a look at how Drupal handles authentication and how it can be extended to handle new methods, such as those used by a legacy system. We will then go on to take a look at the hashing algorithm used on a .Net site and how we were able to implement some Drupal code to ensure that the hashes could be understood by Drupal. This took a fair bit of detective work and the main point of the article is to document how we did it.

Authentication in Drupal

Drupal contains the logic for user authentication in /includes/password.inc. An important function is user_check_password() where the first three characters of the $stored_hash are used to define the $type of password. A Drupal 7 password is denoted as '$S$' as can be seen from the code below.

/includes/password.inc

function user_check_password($password, $account) {
  if (substr($account->pass, 0, 2) == 'U$') {
    // This may be an updated password from user_update_7000(). Such hashes
    // have 'U' added as the first character and need an extra md5().
    $stored_hash = substr($account->pass, 1);
    $password = md5($password);
  }
  else {
    $stored_hash = $account->pass;
  }

  $type = substr($stored_hash, 0, 3);
  switch ($type) {
    case '$S$':
      // A normal Drupal 7 password using sha512.
      $hash = _password_crypt('sha512', $password, $stored_hash);
      break;
    case '$H$':
      // phpBB3 uses "$H$" for the same thing as "$P$".
    case '$P$':
      // A phpass password generated using md5.  This is an
      // imported password or from an earlier Drupal version.
      $hash = _password_crypt('md5', $password, $stored_hash);
      break;
    default:
      return FALSE;
  }
  return ($hash && $stored_hash == $hash);
}

The secret to defining your own hashing algorithm is to replace this function with your own, and this can be done with a small module which swaps out the password.inc file via the password_inc variable. By implementing your own "myauthmodule" you can point Drupal at a replacement password.inc containing whatever logic you need.

/sites/all/modules/myauthmodule/myauthmodule.install


/**
 * @file
 * Supply alternate authentication mechanism.
 */

/**
 * Implements hook_enable().
 */
function myauthmodule_enable() {
  variable_set('password_inc', drupal_get_path('module', 'myauthmodule') . '/password.inc');
}

/**
 * Implements hook_disable().
 */
function myauthmodule_disable() {
  variable_set('password_inc', 'includes/password.inc');
}

As you can see, we use our own custom password.inc when the module is enabled, and revert to the core one when the module is disabled.
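
For completeness, the module also needs a standard .info file alongside the .install file. A minimal sketch is shown below; the name and description are just placeholders.

/sites/all/modules/myauthmodule/myauthmodule.info

name = My Auth Module
description = Supplies an alternate password hashing mechanism for migrated users.
core = 7.x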

This is what our new user_check_password() looks like.

/sites/all/modules/myauthmodule/password.inc

/**
 * @file
 * Alternate authentication mechanism implementation.
 */

function user_check_password($password, $account) {
  if (substr($account->pass, 0, 2) == 'U$') {
    // This may be an updated password from user_update_7000(). Such hashes
    // have 'U' added as the first character and need an extra md5().
    $stored_hash = substr($account->pass, 1);
    $password = md5($password);
  }
  else {
    $stored_hash = $account->pass;
  }

  $type = substr($stored_hash, 0, 3);
  switch ($type) {
    case '$S$':
      // A normal Drupal 7 password using sha512.
      $hash = _password_crypt('sha512', $password, $stored_hash);
      break;
    case '$H$':
      // phpBB3 uses "$H$" for the same thing as "$P$".
    case '$P$':
      // A phpass password generated using md5.  This is an
      // imported password or from an earlier Drupal version.
      $hash = _password_crypt('md5', $password, $stored_hash);
      break;
    case '$X$':
      // The legacy .Net method
      $hash = _myauthmodule_crypt($password, $stored_hash);
      break;
    default:
      return FALSE;
  }
  return ($hash && $stored_hash == $hash);
}

In this case, if the hash starts with '$X$', our custom algorithm kicks in and checks the $password as entered against the $stored_hash. It's up to you to define the correct algorithm so that the $password is transformed into something which can be compared against the $stored_hash. If there is a match, the user will be authenticated. Note that because this file replaces core's password.inc entirely, it must also provide the other functions Drupal expects from that file, such as _password_crypt() and user_hash_password(); the easiest approach is to copy the core file and add your changes to it.

.Net hashing algorithm

We will now go on to examine the hashing algorithm used in the .Net web application. The most difficult piece of the puzzle was working out exactly what algorithm was being used for the hash. After a lot of poking around we discovered that the .Net application was using SHA1 with 1000 iterations (PBKDF2, as defined in RFC 2898). We also had to pick apart the string we were given by base64 decoding it and then pulling the salt off the front.

Playing with the .Net hashing algorithm, working out what was what and then how to implement it, was my job. This Stack Overflow article held the clues for how to solve it.

/sites/all/modules/myauthmodule/crypt.inc

/**
 * @file
 * Hash algorithm based on .Net hashing algorithm.
 */

/**
 * Legacy hashing constants.
 */
define('LEGACY_HASH_SUBKEY_LENGTH', 32);  
define('LEGACY_HASH_ALGORITHM', 'sha1');
define('LEGACY_HASH_ITERATIONS_NUMBER', 1000);
define('LEGACY_HASH_PREFIX', '$X$');

/**
 * Returns a hash for the password as per the .Net legacy mechanism.
 */
function _myauthmodule_crypt($password, $stored_hash) {
  // Remove the prefix.
  $legacy_hash = substr($stored_hash, strlen(LEGACY_HASH_PREFIX));
  // Calculate the hash.
  $legacy_hash_decoded = base64_decode($legacy_hash);
  $legacy_salt = substr($legacy_hash_decoded, 1, 16);
  $subkey = hash_pbkdf2(LEGACY_HASH_ALGORITHM, $password, $legacy_salt, LEGACY_HASH_ITERATIONS_NUMBER, LEGACY_HASH_SUBKEY_LENGTH, TRUE);
  // Hash = null char + salt + subkey.
  $hash = chr(0x00) . $legacy_salt . $subkey;
  return LEGACY_HASH_PREFIX . base64_encode($hash);
}
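
Before moving on, it is worth sanity checking the structure of a stored hash. The following throwaway snippet (not part of the module) decodes the sample hash from the legacy database which we will look at again in the "Getting it all to work" section:

$stored = 'AFnR63Ykym/kDXLFEM5tlL450Y+drbfdwRGhsOCOMlcR273QYod3QZdKwhiKHKHjXw==';
$decoded = base64_decode($stored);

// 49 bytes in total: 1 null byte + 16 bytes of salt + 32 bytes of subkey.
print strlen($decoded) . "\n";                 // 49
print bin2hex(substr($decoded, 0, 1)) . "\n";  // 00
print bin2hex(substr($decoded, 1, 16)) . "\n"; // 59d1eb7624ca6fe40d72c510ce6d94be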

As we were unable to rely on the hash_pbkdf2() function existing in PHP (it is only available in PHP 5.5.0 and later), we had to code our own as a fallback. The PHP code for the fallback was adapted from a comment posted on the PHP hash_pbkdf2 manual page.

/sites/all/modules/myauthmodule/hash_pbkdf2_fallback.inc (this file is not needed if you are using PHP 5 >= 5.5.0)


/**
 * PBKDF2 key derivation function as defined by RSA's PKCS #5: https://www.ietf.org/rfc/rfc2898.txt
 * $algorithm - The hash algorithm to use. Recommended: SHA256
 * $password - The password.
 * $salt - A salt that is unique to the password.
 * $count - Iteration count. Higher is better, but slower. Recommended: At least 1000.
 * $key_length - The length of the derived key in bytes.
 * $raw_output - If true, the key is returned in raw binary format. Hex encoded otherwise.
 * Returns: A $key_length-byte key derived from the password and salt.
 */
if (!function_exists("hash_pbkdf2")) {

  class pbkdf2 {

    public $algorithm;
    public $password;
    public $salt;
    public $count;
    public $key_length;
    public $raw_output;
    private $hash_length;
    private $output = "";

    public function __construct($data = null) {
      if ($data != null) {
        $this->init($data);
      }
    }

    public function init($data) {
      $this->algorithm = $data["algorithm"];
      $this->password = $data["password"];
      $this->salt = $data["salt"];
      $this->count = $data["count"];
      $this->key_length = $data["key_length"];
      $this->raw_output = $data["raw_output"];
    }

    public function hash() {
      $this->algorithm = strtolower($this->algorithm);
      if (!in_array($this->algorithm, hash_algos(), true))
        throw new Exception('PBKDF2 ERROR: Invalid hash algorithm.');

      if ($this->count <= 0 || $this->key_length <= 0)
        throw new Exception('PBKDF2 ERROR: Invalid parameters.');

      $this->hash_length = strlen(hash($this->algorithm, "", true));
      $block_count = ceil($this->key_length / $this->hash_length);
      for ($i = 1; $i <= $block_count; $i++) {
        // $i encoded as 4 bytes, big endian.
        $last = $this->salt . pack("N", $i);
        // first iteration
        $last = $xorsum = hash_hmac($this->algorithm, $last, $this->password, true);
        // perform the other $this->count - 1 iterations
        for ($j = 1; $j < $this->count; $j++) {
          $xorsum ^= ($last = hash_hmac($this->algorithm, $last, $this->password, true));
        }
        $this->output .= $xorsum;
      }

      if ($this->raw_output) {
        return substr($this->output, 0, $this->key_length);
      }
      else {
        return bin2hex(substr($this->output, 0, $this->key_length));
      }
    }

  }

  function hash_pbkdf2($algorithm, $password, $salt, $count, $key_length, $raw_output = false) {
    $data = array(
      'algorithm' => $algorithm,
      'password' => $password,
      'salt' => $salt,
      'count' => $count,
      'key_length' => $key_length,
      'raw_output' => $raw_output,
    );
    $pbkdf2 = new pbkdf2($data);
    return $pbkdf2->hash();
  }

}

Getting it all to work

Now that our custom code is in place we can give it a spin with some real live data. The first step of the process is to import the data into your users table. The string you write into the pass column must include the following concatenated items:

  • the hash type, in this case '$X$',
  • the hash

In our case the hash string was base64 encoded and was a concatenation of three parts:

  • a null byte (8 bits)
  • 16 bytes of salt (128 bits)
  • 32 bytes of subkey (256 bits)

The hash string, as extracted from the legacy database, looked similar to the following:

AFnR63Ykym/kDXLFEM5tlL450Y+drbfdwRGhsOCOMlcR273QYod3QZdKwhiKHKHjXw==

After it was written to the pass column in the users table it looked like:

$X$AFnR63Ykym/kDXLFEM5tlL450Y+drbfdwRGhsOCOMlcR273QYod3QZdKwhiKHKHjXw==
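
Writing that value during the import might look something along these lines. This is just a sketch; the $uid and $legacy_hash variables are assumed to come from your own extraction script:

// Prefix the legacy hash so that our custom user_check_password()
// will recognise it as a legacy .Net hash.
db_update('users')
  ->fields(array('pass' => '$X$' . $legacy_hash))
  ->condition('uid', $uid)
  ->execute();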

With that knowledge (which we had to learn and investigate to acquire in the first place) we were able to extract the legacy .Net salt and use it for the calculation of our new subkey. The subkey is generated by the hash_pbkdf2() function mentioned before, but to get the right subkey we need to provide the correct settings and inputs (listed in the same order as the hash_pbkdf2() function requires):

  • the hash algorithm to be used (in our case sha1)
  • the password provided by the user
  • the legacy .Net salt
  • the number of iterations for the key derivation process (in our case 1000)
  • the length of the derived key in bytes (in our case 32)
  • the raw output flag set to TRUE to get our new subkey in raw binary format

To see clearly how the mechanism for producing the legacy .Net hash works, here's the code again:

/**
 * Legacy hashing constants.
 */
define('LEGACY_HASH_SUBKEY_LENGTH', 32);  
define('LEGACY_HASH_ALGORITHM', 'sha1');
define('LEGACY_HASH_ITERATIONS_NUMBER', 1000);
define('LEGACY_HASH_PREFIX', '$X$');


/**
 * Returns a hash for the password as per the .Net legacy mechanism.
 */
function _myauthmodule_crypt($password, $stored_hash) {
  // Remove the prefix.
  $legacy_hash = substr($stored_hash, strlen(LEGACY_HASH_PREFIX));
  // Calculate the hash.
  $legacy_hash_decoded = base64_decode($legacy_hash);
  $legacy_salt = substr($legacy_hash_decoded, 1, 16);
  $subkey = hash_pbkdf2(LEGACY_HASH_ALGORITHM, $password, $legacy_salt, LEGACY_HASH_ITERATIONS_NUMBER, LEGACY_HASH_SUBKEY_LENGTH, TRUE);
  // Hash = null char + salt + subkey.
  $hash = chr(0x00) . $legacy_salt . $subkey;
  return LEGACY_HASH_PREFIX . base64_encode($hash);
}

The _myauthmodule_crypt() function returns a newly calculated hash based on the password provided and the salt we extracted from the stored hash. This is combined with the prefix, and the whole result is returned to the calling function, where it is compared with the stored hash.

The hash construction by .Net

To make the hash construction clearer, we can look at it from the .Net perspective. The .Net application has its own specific salt, represented in hexadecimal format; let it be 59d1eb7624ca6fe40d72c510ce6d94be (32 characters). The user has provided some password; let it be UserPassword. After the necessary operations are done we have the hash AFnR63Ykym/kDXLFEM5tlL450Y+drbfdwRGhsOCOMlcR273QYod3QZdKwhiKHKHjXw==. So now we need to understand what operations create the hash from the salt and user password inputs. Below is PHP code with a hash_example() function that provides a step by step example:

/sites/all/modules/myauthmodule/doc/hash_example.inc

require_once 'hex2bin_fallback.inc';

/**
 * Legacy hashing constants.
 */
define('LEGACY_HASH_SALT', '59d1eb7624ca6fe40d72c510ce6d94be'); // 32 chars in hexadecimal form
define('LEGACY_HASH_SUBKEY_LENGTH', 32);
define('LEGACY_HASH_ALGORITHM', 'sha1');
define('LEGACY_HASH_ITERATIONS_NUMBER', 1000);

function hash_example() {
  // Convert the salt into binary form.
  $salt_binary = hex2bin(LEGACY_HASH_SALT); // 16 bytes in binary form.
  // Get the password from the user.
  $password = 'UserPassword';

  $hash = legacy_hash($password, $salt_binary);
  // $hash = AFnR63Ykym/kDXLFEM5tlL450Y+drbfdwRGhsOCOMlcR273QYod3QZdKwhiKHKHjXw==

  return $hash;
}

/**
 * Legacy hash construction based on .Net procedure.
 */
function legacy_hash($password, $salt) {
  $subkey = hash_pbkdf2(LEGACY_HASH_ALGORITHM, $password, $salt, LEGACY_HASH_ITERATIONS_NUMBER, LEGACY_HASH_SUBKEY_LENGTH, TRUE);
  // Hash = null char + salt + subkey.
  $hash = chr(0x00) . $salt . $subkey;
  return base64_encode($hash);
}

As we were unable to rely on the hex2bin() function existing in PHP (it is only available in PHP 5.4.0 and later), we had to code our own as a fallback. The code was adapted from a comment posted on the PHP hex2bin manual page.

/sites/all/modules/myauthmodule/doc/hex2bin_fallback.inc

/**
 * hex2bin() decodes a hexadecimally encoded binary string.
 * http://php.net/manual/en/function.hex2bin.php
 * (PHP >= 5.4.0)
 */
if (!function_exists('hex2bin')) {

  function hex2bin($str) {
    $sbin = "";
    $len = strlen($str);
    for ($i = 0; $i < $len; $i += 2) {
      $sbin .= pack("H*", substr($str, $i, 2));
    }
    return $sbin;
  }

}

Conclusion

This kind of approach can be used for any migration from a legacy system where users need to be brought across. Generally it is fairly simple, as the hashing algorithm is either simple or well documented. In this case we had to do a fair deal of sleuthing to work out how to do it. We hope that this article will be of help to other developers who are neck deep in salts and hashes.

Feb 12 2015
Feb 12

The OpenLayers module is a popular solution for mapping in Drupal. Its biggest benefits are the ability to use different map providers, complete Features support and, last but not least, the simplicity of creating custom markers.

A good selection of markers can have a big influence on the overall success of the site. It can improve the user experience because, with good icons and color codes, the user can easily see what a marker is about without clicking it. The default set of markers in OpenLayers is generic - they are just basic points with no added value - and that is perfectly fine for a simple site with few points and basic categorization, where there is no need for fancy markers. But if the map needs to be clear to users and the markers need to look good on high resolution screens, then you'll need to create custom markers.

OpenLayers Markers default

Creating a marker

The first step is to create your own marker images.

Open your favorite image editing or vector drawing application. It doesn't matter which you use; what matters is the final exported file. I use Adobe Photoshop or Illustrator, but you can use CorelDRAW, GIMP, Fireworks, etc.

Create a new document and make it at least two times larger than your desired marker size in both directions. On the new blank canvas, make sure the point of the marker sits on the bottom border; in fact, the marker should fill the whole canvas.

Exporting the marker image

Once you have the image(s) ready, you have to export them in the correct format. OpenLayers works with PNG, GIF and SVG. Many of you will cheer that SVG is supported, but I strongly recommend sticking with PNG files. The time for SVG has not yet arrived: there are still some backward compatibility issues, and I've noticed jagged lines on some browser/device combinations. If you create the PNG marker with alpha transparency and at twice the final size, the resulting PNG will be crisp everywhere.

Creating a custom marker

Now that we have the image ready, upload it to the site. You can store it in a simple custom module or add it inside a Feature module.

Go to /admin/structure/openlayers/styles; from this page you can see all the current styles. Instead of creating a new marker from scratch, just clone an existing one. This will save you time because you won't have to fill out the whole form.

Marker clone

In the marker edit form, you have to specify the correct path to the marker.

Marker path Open Layers

The most important part of this form is where you set the dimensions and offset for your image. The offset needs to be changed so that the point of the marker is in the middle of the preview square on the style list; this will require some trial and error to get right. Here you also set the dimensions to half the real image size - we created the images at double size, remember? This provides crisp looking images on high resolution ('retina') devices.

OpenLayers Marker setup

Once you've finished configuring the marker, save the form and it's ready to be used in a map.

OpenLayers Custom Markers

As you can see, it's relatively easy to change the look and feel of a map using custom markers. It's now more readable and descriptive for users. You could further improve things by creating a legend which will explain the markers.

If you have any questions, or want to show others your new marker set, please leave a comment.

Dec 17 2014
Dec 17

Ctools content types, not to be confused with node content types, are the Panels version of blocks. Another name for them is panel panes, but just think of them as more powerful blocks. Because they are Ctools plugins, the code to implement them is easy to write and maintain. Each plugin has its own file, whereas with blocks you have to cram everything into hook_block_view().

They also integrate with Ctools contexts, which means you get a context object passed directly into the content type, and most of the time this is an entity object. No longer do you have to write code which grabs the entity ID from the URL and loads the entity.

In this post, we'll create a Ctools content type which displays a "Read more" link, with the link text being configurable. You'll learn how to create a plugin and a settings form for it. Basic PHP knowledge is required if you want to follow along.

Step 1: Integrate your Module with Ctools

Go into your custom module and add the following hook:

/**
 * Implements hook_ctools_plugin_directory().
 */
function morpht_ctools_content_type_ctools_plugin_directory($module, $plugin) {
  if ($module == 'ctools' && !empty($plugin)) {
    return "plugins/$plugin";
  }
}

This hook is used to tell Ctools where plugins are stored. It'll look for the content type plugin in "plugins/content_types".

Step 2: Create Plugin File

Create a file called "read_more.inc" and place it in "plugins/content_types"; create these folders if required. Once created, the path to the file should be "plugins/content_types/read_more.inc". Remember to clear the cache afterwards so that Ctools picks up the new plugin file.

Step 3: Describe Plugin

Open up "read_more.inc" and add the following bit of code at the top:

$plugin = array(
  'title' => t('Read more link'),
  'description' => t('Displays a read more link.'),
  'single' => TRUE,
  'content_types' => array('read_more'),
  'render callback' => 'read_more_content_type_render',
  'required context' => new ctools_context_required(t('Node'), 'node'),
  'edit form' => 'read_more_content_type_edit_form',
  'category' => array(t('Morpht Examples'), -9),
);

The $plugin array is used to describe the plugin. Let's break down this array:

  • 'title': The title of the plugin. It'll be used in the Panels administration area.
  • 'description': The description of the plugin; it's only shown in the Panels administration area.
  • 'single': Content types have a concept called subtypes, which is an advanced feature and beyond the scope of this tutorial. For basic content types leave this as TRUE. If you want to learn about subtypes, check this Drupal Answers post.
  • 'content_types': The machine name of the plugin.
  • 'render callback': A callback to the function which will be used to render the content type.
  • 'required context': This tells Ctools which context is required for this content type. By using ctools_context_required(t('Node'), 'node'), this content type will only be available for node entities.
  • 'edit form': A callback for the edit form. Please note, this edit form must be implemented if you define a 'required context' key.
  • 'category': This allows you to add the content type to a Panels category.

Step 4: Settings Form

The "Read more" link for the content type needs to be configurable. Creating a configuration form for it is pretty easy. In the $plugin array we added "read_more_content_type_edit_form" as our form.

Go ahead and add the following bit of code after the $plugin:

/**
 * Ctools edit form.
 *
 * @param $form
 * @param $form_state
 * @return mixed
 */
function read_more_content_type_edit_form($form, &$form_state) {
  $conf = $form_state['conf'];
  $form['read_more_label'] = array(
    '#type' => 'textfield',
    '#title' => t('Label'),
    '#description' => t('The read more label.'),
    '#default_value' => !empty($conf['read_more_label']) ? $conf['read_more_label'] : 'Read more',
  );
  return $form;
}

/**
 * Ctools edit form submit handler.
 *
 * @param $form
 * @param $form_state
 */
function read_more_content_type_edit_form_submit($form, &$form_state) {
  foreach (array('read_more_label') as $key) {
    $form_state['conf'][$key] = $form_state['values'][$key];
  }
}

I won't go into too much detail about this because it's a basic Drupal form. The read_more_content_type_edit_form() function is the form and read_more_content_type_edit_form_submit() is the submit handler.

Important: If you want to use the 'required context' key, then you must implement an edit form. Thanks Sam152 for the heads up.

Step 5: Render Function

The last piece of the puzzle is the render function. This is the main function which will be used to render the content type. It is the equivalent of using hook_block_view() if you're creating a block.

Add the following function into the plugin:

/**
 * Render callback function.
 *
 * @param $subtype
 * @param $conf
 * @param $args
 * @param $context
 * @return stdClass
 */
function read_more_content_type_render($subtype, $conf, $args, $context) {
  $node = $context->data;
  $block = new stdClass();

  $block->content = l(t("@read_more", array('@read_more' => $conf['read_more_label'])), 'node/' . $node->nid);

  return $block;
}

The HTML that you want to render needs to be added to the "content" property on the $block object, e.g. $block->content.

The "Read more" label from the settings form can be accessed via the $conf variable. And finally, you can get the node entity from the $context via $context->data.

Summary

Now that you know how powerful Ctools content types are, I hope this opens your eyes to new possibilities. Best of all, content types can also be used with Display Suite. This means that you can write your content type once and use it with Panels or Display Suite.

Nov 01 2014
Nov 01

Murray and I presented a session at Drupal Camp Sydney 2014 on the Paragraphs module. The presentation had a demo, lecture and a bit of roleplay. We talked about the benefits of chunking content and how Paragraphs could be used.

The Paragraphs module allows you to chunk content together into "paragraphs". Each paragraph type can have its own set of fields. For example, you could have a paragraph type called "List" with an entity reference field that allows you to select and display articles. Once your paragraphs are created you can reorder them by moving them up or down.

To cut a long story short, the module gives editors an amazing amount of flexibility without giving them too much control to break things.

With that said, here are our slides from the presentation:

Oct 18 2014
Oct 18

At this month's Sydney Drupal Users Group, I did a short presentation about using Bootstrap in Drupal 7.

The presentation was broken down into two parts: Bootstrap themes and modules.

The mentioned themes are:

There are a few useful modules which will help you use Bootstrap components. Here's a list of the modules:

May 09 2014
May 09

It's been a while since the Sydney Drupal community put on a Drupal camp. It's time for that to change.

A few of us were sitting around the airport bar at the conclusion of DrupalSouth and hatched a plan to put one on "sometime soon". We kicked around a few crazy locations such as Broken Hill and Bali - somewhere a bit different. We eventually got our act together and decided on a weekend in the Blue Mountains as the venue for the event.

Drupal Camp Sydney 2014 is being held at the Katoomba YHA on the weekend of June 28 to June 29.

It's going to be an intimate event, with 40 tickets up for sale across both days. The venue is a large single room with a few different spaces for collaboration. There will be presentation, chillout, mentoring and coding areas. We also have access to a kitchen, and the room is ours 24 hours a day. Basically, we can make the space our own for the weekend.

We're hoping to get a good range of people up there, from those just starting out on their Drupal path to old hands. There will be volunteers on hand to answer any questions people may have.

To find out more about the event please visit the Drupal Camp Sydney 2014 microsite, the GDO announcement or the Eventbrite page to book your tickets.

We hope to see you there.

Feb 05 2014
Feb 05

Cyclone is a new Drupal module which enables users to deploy sites to a variety of platforms including Pantheon and Aegir.

The main aim of Cyclone is to put control into the hands of the end user, to give them the ability to spin up sites when they desire. A secondary aim is to make it easy for publishers, such as agencies, entrepreneurs and distro builders, to publish site templates which can easily be browsed and selected by users.

The desired outcome is that we have more Drupal sites being created and more happy users of Drupal.

Drupal hosting APIs

The Drupal hosting market has come along in leaps and bounds over the last few years. Companies such as Pantheon, Aberdeen Cloud and Acquia have developed platforms which are very developer friendly, allowing for the easy deployment of code between environments: dev, stage and production. They augment the open source Aegir project, which provides a powerful web front end to the Provision project, allowing sites to be installed, cloned and migrated between platforms.

Each of these platforms offers APIs to control various parts of the system. Aegir is perhaps the most direct, being built entirely around Provision and Drush; it is essentially just a front end to components designed to handle sites and platforms. Pantheon is another standout: their Terminus CLI allows control of many aspects of sites, including creating sites from Drush archives and installing from base "products". Acquia offers a RESTful CloudAPI which is able to control all aspects of installed sites. Aberdeen has command line tools and is soon to release a RESTful interface as well.

In order to facilitate user-provisioned sites, these APIs must support basic site clone or install functionality. Once these methods are available, Cyclone is able to use them to create sites. At the time of writing, only Aegir and Pantheon offered such methods, so Cyclone targets these two platforms.

Introducing Cyclone

Morpht will be presenting Cyclone, and the companion project Cyclone Jenkins, at DrupalSouth in Wellington. We will be demonstrating how it works, as well as some of the code behind it. We will post the presentation here once it has been uploaded.

Aug 17 2013
Aug 17

Marji will be presenting his server monitoring talk at the upcoming DrupalCon Prague conference in September.

If you are a Drupal developer or sysadmin who has ever needed to tune or troubleshoot a server running Drupal, you need to check out this talk. Marji will cover the basics of why you need to monitor servers and will address the following areas:

  • Why it is necessary to proactively monitor your Drupal server.
  • You need data in order to diagnose.
  • Meet Nagios, Munin and some others.
  • What can these graphs tell you about your Drupal site?
  • Practical example of how to configure Nagios.
  • Could we make it easier? Puppet can!

As an added bonus Marji will distribute a Puppet manifest to help you get started monitoring your own servers.

Time slot: Wednesday · 13:00-14:00
Room: North Hall · Exove

Morpht provides monitoring and workflow support to Drupal shops running their own servers. If you have any questions for us, get in touch.

Feb 21 2013
Feb 21

Video of DrupalCon Sydney 2013: Information architecture for site builders

The Information architecture for site builders video presentation was given at DrupalCon Sydney 2013. Find out about the horizontal aspects of Drupal, modelling multi-typed objects, relationships and tips for building faceted classification.

There were heaps of great presentations and conversations. It came and went way too fast. Morpht was lucky enough to give one presentation for the site building track: Information architecture for site builders. Thanks to all those who came along and asked questions at the end.

Aug 04 2012
Aug 04

Video of CDNs made simple, fast & cheap - Murray Woodman

The time it takes for a page to be delivered to a user is very important. The shorter the better. Sites which suffer long download times will typically have higher bounce rates and lower conversions than sites which are faster. Google has recognised the importance of a snappy site by rewarding faster sites with better search results rankings.

Having a fast site is a no-brainer. The question becomes: "how do I make sure I have a fast site?"

Hosting your site on an optimised platform with well placed caching is a big part of having a fast site. However, the majority of download time experienced by a user is in fact due to waiting for resources to be downloaded and rendered in the browser. What happens on the server plays only a small role in delivering a site promptly.

The use of a Content Delivery Network, or CDN for short, allows for "heavy" resources such as videos, audio and images to be downloaded quickly from a server which is close to the user. Instead of requesting a file which may reside on the other side of the planet, a CDN will attempt to serve a file which is much closer, resulting in faster download speeds.

Morpht can easily configure your website to use a CDN if your site requires it. This will be very helpful for sites which fall into one of the following categories: heavy use of media files, or many overseas visitors. The end result will be pages which are served and rendered faster for your users.

Photo credit: http://www.flickr.com/pheanixphotos/5835605434/
