Jul 27 2017

WordPress to Drupal 8

In this post I will show you how to migrate thumbnail content from WordPress to Drupal 8. My goals are to help you better understand the content migration process, give you a starting point for future migrations, and teach you how to write process plugins and migration sources. Migrating taxonomy terms and users is more straightforward, so I won't cover it here.

This migration example contains templates to migrate thumbnail content. For this post, I assume the image/thumbnail field is provided by the Media module. I will be using the Migrate drush module to run the migrations.

First, configure the connection to your WordPress database in your settings.php file. Add the following with the proper credentials:

$databases['migrate']['default'] = [
    'driver'   => 'mysql',
    'database' => 'wordpress_dbname',
    'username' => 'wordpress_dbuser',
    'password' => 'wordpress_dbpassword',
    'host'     => '127.0.0.1',
];

Here is the module structure I will be using:

wp_migration/
  - wp_migration.info.yml
  - wp_migration.module
  - migration_templates/
    - wp_content.yml
    - wp_thumbnail.yml
    - wp_media.yml
  - src/
    - Plugin/
      - migrate/
        - process/
          - AddUrlAliasPrefix.php
          - DateToTimestamp.php
        - source/
          - SqlBase.php
          - Content.php
          - Thumbnail.php

Contents of the wp_migration.info.yml file:

name: WordPress Migration
type: module
description: Migrate WordPress content into Drupal 8.
core: 8.x
dependencies:
  - migrate
  - migrate_drupal
  - migrate_drush

Contents of the migration_templates/wp_thumbnail.yml file:

id: wp_thumbnail
label: 'Thumbnails'
migration_tags:
  - Wordpress
source:
  plugin: wordpress_thumbnail
  # This is the WP table prefix (custom variable).
  # DB table example: [prefix]_posts
  table_prefix: wp
  constants:
    # This path should point to the WP uploads directory.
    source_base_path: '/path/to/source/wp/uploads'
    # This is the Drupal directory where migrated files are stored.
    uri_file: 'public://wp-thumbnails'
process:
  filename: filename
  source_full_path:
    -
      plugin: concat
      delimiter: /
      source:
        - constants/source_base_path
        - filepath
    -
      plugin: urlencode
  uri_file:
    -
      plugin: concat
      delimiter: /
      source:
        - constants/uri_file
        - filename
    -
      plugin: urlencode
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - '@uri_file'
  status: 
    plugin: default_value
    default_value: 1
  changed: 
    plugin: date_to_timestamp
    source: post_date
  created: 
    plugin: date_to_timestamp
    source: post_date
  uid: 
    plugin: default_value
    default_value: 1
destination:
  plugin: 'entity:file'
migration_dependencies:
  required: {}
  optional: {}

Contents of the migration_templates/wp_media.yml file:

id: wp_media
label: 'Media'
migration_tags:
  - Wordpress
source:
  plugin: wordpress_thumbnail
  # This is the WP table prefix (custom variable).
  # DB table example: [prefix]_posts
  table_prefix: wp
  constants:
    bundle: image
process:
  bundle: 'constants/bundle'
  langcode:
    plugin: default_value
    default_value: en
  'field_image/target_id':
    -
      plugin: migration
      migration: wp_thumbnail
      source: post_id
    -
      plugin: skip_on_empty
      method: row
destination:
  plugin: 'entity:media'
migration_dependencies:
  required: {}
  optional: {}

Contents of the migration_templates/wp_content.yml file:

id: wp_content
label: 'Content'
migration_tags:
  - Wordpress
source:
  plugin: wordpress_content
  # Wordpress post type (custom variable)
  post_type: post
  # This is the WP table prefix (custom variable).
  # DB table example: [prefix]_posts
  table_prefix: wp
process:
  type:
    plugin: default_value
    default_value: article
  'path/pathauto':
    plugin: default_value
    default_value: 0
  'path/alias':
    # This adds a prefix to URL aliases in Drupal, e.g.
    # url/alias/prefix/2017/07/21/[post-title]
    plugin: add_url_alias_prefix
    prefix: url/alias/prefix
    source: path_alias
    source: path_alias
  promote: 
    plugin: default_value
    default_value: 0
  sticky: 
    plugin: default_value
    default_value: 0
  langcode:
    plugin: default_value
    default_value: en
  status: 
    plugin: default_value
    default_value: 1
  title: post_title
  created: 
    plugin: date_to_timestamp
    source: post_date
  changed: 
    plugin: date_to_timestamp
    source: post_modified
  field_image:
    -
      plugin: migration
      migration: wp_media
      source: thumbnail
    -
      plugin: skip_on_empty
      method: row
  'body/summary': post_excerpt
  'body/format':
    plugin: default_value
    default_value: full_html
  'body/value': post_content
destination:
  plugin: 'entity:node'
migration_dependencies:
  required: {}
  optional: {}

Contents of the src/Plugin/migrate/process/AddUrlAliasPrefix.php file:

<?php

namespace Drupal\wp_migration\Plugin\migrate\process;

use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\Row;

/**
 * Add prefix to URL aliases.
 *
 * @MigrateProcessPlugin(
 *   id = "add_url_alias_prefix"
 * )
 */
class AddUrlAliasPrefix extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $prefix = !empty($this->configuration['prefix']) ? '/' . $this->configuration['prefix'] : '';
    return $prefix . $value;
  }

}

Contents of the src/Plugin/migrate/process/DateToTimestamp.php file:

<?php

namespace Drupal\wp_migration\Plugin\migrate\process;

use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\Row;

/**
 * Date to Timestamp conversion.
 *
 * @MigrateProcessPlugin(
 *   id = "date_to_timestamp"
 * )
 */
class DateToTimestamp extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    return strtotime($value . ' UTC');
  }

}

I like to keep my module code clean and organized, so I use base classes that I later extend in the individual migration source files.

Here are the contents of the src/Plugin/migrate/source/SqlBase.php file:

<?php

namespace Drupal\wp_migration\Plugin\migrate\source;

use Drupal\migrate_drupal\Plugin\migrate\source\DrupalSqlBase;
use Drupal\migrate\Row;

class SqlBase extends DrupalSqlBase {

  /**
   * Get the database table prefix from the migration template.
   */
  protected function getPrefix() {
    return !empty($this->configuration['table_prefix']) ? $this->configuration['table_prefix'] : 'wp';
  }

  /**
   * Get the WordPress post type from the migration template.
   */
  protected function getPostType() {
    return !empty($this->configuration['post_type']) ? $this->configuration['post_type'] : 'post';
  }

  /**
   * Generate path alias via pattern specified in `permalink_structure`.
   */
  protected function generatePathAlias(Row $row) {
    $prefix = $this->getPrefix();
    $permalink_structure = $this->select($prefix . '_options', 'o', ['target' => 'migrate'])
      ->fields('o', ['option_value'])
      ->condition('o.option_name', 'permalink_structure')
      ->execute()
      ->fetchField();
    $date = new \DateTime($row->getSourceProperty('post_date'));
    $parameters = [
      '%year%'     => $date->format('Y'),
      '%monthnum%' => $date->format('m'),
      '%day%'      => $date->format('d'),
      '%postname%' => $row->getSourceProperty('post_name'),
    ];
    $url = str_replace(array_keys($parameters), array_values($parameters), $permalink_structure);
    return rtrim($url, '/');
  }

  /**
   * Get post thumbnail.
   */
  protected function getPostThumbnail(Row $row) {
    $prefix = $this->getPrefix();
    $query = $this->select($prefix . '_postmeta', 'pm', ['target' => 'migrate']);
    $query->innerJoin($prefix . '_postmeta', 'pm2', 'pm2.post_id = pm.meta_value');
    $query->fields('pm', ['post_id'])
      ->condition('pm.post_id', $row->getSourceProperty('id'))
      ->condition('pm.meta_key', '_thumbnail_id')
      ->condition('pm2.meta_key', '_wp_attached_file');
    return $query->execute()->fetchField();
  }

}

Contents of the src/Plugin/migrate/source/Thumbnail.php file:

<?php

namespace Drupal\wp_migration\Plugin\migrate\source;

use Drupal\migrate\Row;

/**
 * Extract content thumbnails.
 *
 * @MigrateSource(
 *   id = "wordpress_thumbnail"
 * )
 */
class Thumbnail extends SqlBase {

  /**
   * {@inheritdoc}
   */
  public function query() {
    $prefix = $this->getPrefix();
    $query = $this->select($prefix . '_postmeta', 'pm', ['target' => 'migrate']);
    $query->innerJoin($prefix . '_postmeta', 'pm2', 'pm2.post_id = pm.meta_value');
    $query->innerJoin($prefix . '_posts', 'p', 'p.id = pm.post_id');
    $query->fields('pm', ['post_id']);
    $query->fields('p', ['post_date']);
    $query->addField('pm2', 'post_id', 'file_id');
    $query->addField('pm2', 'meta_value', 'filepath');
    $query
      ->condition('pm.meta_key', '_thumbnail_id')
      ->condition('pm2.meta_key', '_wp_attached_file')
      ->condition('p.post_status', 'publish')
      ->condition('p.post_type', 'post');
    return $query;
  }

  /**
   * {@inheritdoc}
   */
  public function fields() {
    return [
      'post_id'   => $this->t('Post ID'),
      'post_date' => $this->t('Media Uploaded Date'),
      'file_id'   => $this->t('File ID'),
      'filepath'  => $this->t('File Path'),
      'filename'  => $this->t('File Name'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function getIds() {
    return [
      'post_id' => [
        'type'  => 'integer',
        'alias' => 'pm2',
      ],
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $row->setSourceProperty('filename', basename($row->getSourceProperty('filepath')));
    return parent::prepareRow($row);
  }

}

Contents of the src/Plugin/migrate/source/Content.php file:

<?php

namespace Drupal\wp_migration\Plugin\migrate\source;

use Drupal\migrate\Row;

/**
 * Extract content from a WordPress site.
 *
 * @MigrateSource(
 *   id = "wordpress_content"
 * )
 */
class Content extends SqlBase {

  /**
   * {@inheritdoc}
   */
  public function query() {
    $prefix = $this->getPrefix();
    $query = $this->select($prefix . '_posts', 'p');
    $query
      ->fields('p', [
        'id',
        'post_date',
        'post_title',
        'post_content',
        'post_excerpt',
        'post_modified',
        'post_name'
      ]);
    $query->condition('p.post_status', 'publish');
    $query->condition('p.post_type', $this->getPostType());
    return $query;
  }

  /**
   * {@inheritdoc}
   */
  public function fields() {
    return [
      'id'            => $this->t('Post ID'),
      'post_title'    => $this->t('Title'),
      'thumbnail'     => $this->t('Post Thumbnail'),
      'post_excerpt'  => $this->t('Excerpt'),
      'post_content'  => $this->t('Content'),
      'post_date'     => $this->t('Created Date'),
      'post_modified' => $this->t('Modified Date'),
      'path_alias'    => $this->t('URL Alias'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function getIds() {
    return [
      'id' => [
        'type'  => 'integer',
        'alias' => 'p',
      ],
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    // Generate the path alias using the WP permalink settings.
    $row->setSourceProperty('path_alias', $this->generatePathAlias($row));
    // Get the thumbnail ID and pass it to the wp_media migration.
    $row->setSourceProperty('thumbnail', $this->getPostThumbnail($row));
    return parent::prepareRow($row);
  }

}

IMPORTANT: You must run the migrations in the proper order. In this example, run wp_thumbnail first, wp_media second, and wp_content last.
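With everything in place, that order translates into drush runs like the following (a sketch; exact command names vary with your drush and migrate tooling versions):

```shell
# Run the migrations in dependency order.
drush migrate-import wp_thumbnail
drush migrate-import wp_media
drush migrate-import wp_content

# Review what has and hasn't run yet.
drush migrate-status
```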


Jul 27 2017
GSoC

I am working on adding support for The League OAuth and new implementers for Social Auth and Social Post under the mentorship of Getulio Sánchez "gvso" (Paraguay) and Daniel Harris "dahacouk" (UK).

Last week, I started working on creating the first Social Post implementer using The League library. This week, to reach my second evaluation milestone, I worked on adding new social providers to the Social Auth implementers list.

Here are some of the things that I worked on during the 6th week of GSoC coding period.

Change in Additional Data Entity Type in Social Auth [Link to Commit] - As we're storing additional data, in some cases the data received exceeds the previously set limit of 255 characters.

New Social Auth implementers and improvements to the previously created implementers:

  • Social Auth Google [Link to PR] - We’re using league/oauth2-google as the OAuth2 client library. We’ll be using this module as the base and example for other Social Auth implementers.

  • Social Auth Facebook [Link to PR] - We’re using league/oauth2-facebook as the client library.

  • Social Auth Instagram [Link to Code] - We’re using league/oauth2-instagram as the client library.

  • Social Auth Github [Link to Code] - We’re using league/oauth2-github as the client library.

  • Social Auth Linkedin [Link to Code] - We’re using league/oauth2-linkedIn as the client library.

Implementers using third-party League libraries - I will thoroughly test these modules before pushing them; here's the zip file containing the current work on Social Auth implementers.

  • Social Auth Dropbox - We’re using stevenmaguire/oauth2-dropbox as the client library.

  • Social Auth Digital Ocean - We’re using chrishemmings/digitalocean as the client library.

  • Social Auth Box - We’re using stevenmaguire/oauth2-box as the client library.

  • Social Auth Gitlab - We’re using omines/oauth2-gitlab as the client library.

  • Social Auth Twitch - We’re using depotwarehouse/oauth2-twitch as the client library.

  • Social Auth Uber - We’re using stevenmaguire/oauth2-uber as the client library.

  • Social Auth Paypal - We’re using stevenmaguire/oauth2-paypal as the client library.

  • Social Auth Heroku - We’re using stevenmaguire/oauth2-heroku as the client library.

  • Social Auth Yelp - We’re using stevenmaguire/oauth2-yelp as the client library.

  • Social Auth Mailru - We’re using aego/oauth2-mailru as the client library.

  • Social Auth Reddit - We’re using lrtheunissen/oauth2-reddit as the client library.

These were some of the implementers I worked on as part of adding new social providers to Social Auth. I was thrilled by the sixth week of the Google Summer of Code coding phase and by completing my second GSoC evaluation. My goal for next week is to fix some issues, such as adding setters and getters for storing and retrieving data in the Social Auth entity, and to complete social_post_facebook, which I started working on last week. I will also create projects on D.O. for the new implementers by next week.

Jul 27 2017

The improved block system in Drupal 8 provides a lot of flexibility to the site builder. But have you ever had the problem of blocks showing on undesired pages because of the limits of visibility patterns?

The Problem Scenario

Say you added a custom block that only shows on user pages, so you set a visibility path with a wildcard like so: “/user/*”. All works great, as it should, and life is great!

Oh, but no! Don’t celebrate too fast, says your project manager. Here comes the problem. Your project manager now (despite that it all works as it should, and for no reason whatsoever) wants to hide that neat custom block from a specific page under the “/user/*” path.

OK, all hell breaks loose. How do you prevent that specific page from showing the block when you have already set it to show on all pages under the “/user/*” path? Sure, in Drupal 7 you had the option of adding some custom PHP code to accomplish that. But what about Drupal 8, where you are not allowed to use any PHP code in the interface?

At Texas Creative, we work for the most part on large-scale websites with data connections from multiple sources, such as APIs, and interactive pages. The block-exclusion problem was a common issue we kept stumbling across. Each developer was coming up with creative ways to work around it, but that made sites hard to maintain, as each site was different and some solutions were just too “hacky”. It was time to create a module, called Block Exclude Pages, that would solve this issue using the standard Drupal block visibility interface.

How to Exclude Pages From Block Visibility

When installed on your Drupal website, the Block Exclude Pages module adds a new pattern for page visibility that excludes a path pattern: simply prefix the path pattern with a ‘!’.

When the page visibility option is set to “Show for the listed pages”, the excluded paths prefixed with “!” will hide the block on those pages despite the other matches. On the other hand, if the page visibility option is set to “Hide for the listed pages”, the excluded paths prefixed with “!” will show the block on those pages despite other matches in the list.

In the following example, this block is placed on a user page, but the block is excluded from the ‘jc’ user page and all subpages for that user page:

<code>
/user/*
!/user/jc
!/user/jc/*
</code>

This module has become an important tool in our arsenal for complex websites. I hope it will become a good tool for you too. 

Happy Coding! ;)

Below are some other Drupal-related blogs by Texas Creative:

3 Tips For Client Friendly Paragraphs In Drupal 8

Update Extended Module: Drupal Updates…No Regressions!

Common Drupal 7 Coding Mistakes
 

Jul 27 2017

Starting in Drupal 8, we've added the notion of Experimental Modules to help provide an early look at core features that are not yet complete. A major focus of Drupal 8.4.0 has been stabilizing these experimental modules so that they can "graduate" to stable modules that can be installed in production and leveraged by other core and contrib modules.

Here's a document that outlines the current status of each experimental module, as well as their goals with respect to the forthcoming 8.4.0 alpha deadline (which is this coming Monday, July 31). If you're looking for a productive way to help your favourite initiative during 8.4.0's alpha/beta/RC phase, check it out!

Here's the TL;DR:

  • Content Moderation: Move from alpha to beta
  • Workflow: Move from alpha to beta
  • DateTime Range: Move to stable
  • Inline Form Errors: Move to stable
  • Layout Discovery: Move to stable
  • Media Entity: Move to stable (so contrib can rely on it), but hide module from UI (so end users don't accidentally turn this on solo, as it causes UX regressions)
  • Migrate / Migrate UI: Get as close to stable as possible.
  • Place Block: Hide module from UI (so end users don't turn it on), propose instead as patch to Block module for 8.5.0
  • Settings Tray: Move from alpha to beta

Jul 27 2017

Lately, a lot of our attention has been dedicated to Drupal modules. We have explored the most popular ones and the best for Drupal 8. But we will not stop there. We'll also look at experimental modules, which may confuse some Drupal users. As you will see, there is also some risk in using them.

What are experimental modules?

As stated on the official Drupal website, experimental modules are modules that are included in Drupal core for testing purposes, so they are not (yet) fully supported. This new approach was introduced in Drupal 8. New experimental modules can only be added in minor releases, and they may change between patch releases while still being experimental. That distinguishes them from other features. Experimental modules allow site builders and core contributors to receive feedback and test out functionality that might eventually be supported in an upcoming minor release and might be included as a stable part of Drupal core. However, not just any module can be experimental; they, too, have to meet minimum standards.

 

[Image: Experimental modules are not stable]

 

Alpha, Beta, Release candidate …

There are different stability levels of experimental modules. Alpha experimental modules are still under development. They are available for testing, but may include bugs and security issues, and developers should not rely on their APIs. Beta experimental modules, on the other hand, are considered API- and feature-complete. They are still not fully supported and may still have bugs. Once critical bugs are fixed, an experimental module can become a release candidate, which means that it is release-ready. Once judged stable, modules are labelled as stable core modules. But they can become so only in minor or major releases.

Examples

These are experimental modules in Drupal 8.0, released in November 2015.

 

[Image: Experimental modules in Drupal 8.0]

 

However, the number of experimental modules has at least doubled since then. Until recently, only the BigPipe module had evolved from being an experimental module to being an official module. But there might (!) be two more in Drupal 8.4.0 on 4th October 2017, as the Drupal 8.4.x Media API has become stable after an enormous effort by the Drupal Media Initiative. The second one is DateTime Range, which also became stable for Drupal 8.4.0.

On the other hand, several experimental modules have 8.4.x alpha deadlines on Monday next week (31st July!), when Drupal 8.4.0-alpha1 will be released. If these experimental modules do not meet their follow-up requirements by that deadline, they may be removed from core. These experimental modules are:

1. Workflows and Content Moderation
2. Inline Form Errors
3. Place Blocks
4. Settings Tray

 

[Image: Workflow]

 

Drupal 8.4.0-beta1 will then be released in the week of August 14th, while the release candidate phase will begin in the week of September 4th.

Possible risks

Despite the fact that new capabilities can be added to Drupal faster (previously, that required a new major version), there are some possible risks to having experimental modules. As you may have gathered, not all experimental modules became, or will become, stable core modules. If they turn out not to be a good fit, they are removed from core in future versions. Moreover, they can change. That basically means that if you have found a solution that optimizes your work, you may, after some time, be left without it. So there have been arguments about what kind of message Drupal sends to end users by giving them something that might not work.

 

[Image: Experimental modules at own risk]

 

There are, of course, other reasons for experimental modules not to become stable core modules. They may not have made sufficient progress, or a better solution in core supersedes the module. Note also that experimental modules do not share core's version. When you want to enable them, you get a message saying "Use at your own risk", which is also a big problem, especially with users who do not like to take risks. The risks are, in general, connected with APIs, bugs, security and other issues.

All in all, it will once again be up to everyone to choose whether to use experimental modules. But as the founder of Drupal, Dries Buytaert, said at DrupalCon Baltimore, it’s probably not wise to use them in production.

Jul 26 2017

One of the most exciting aspects of preparing for a DrupalCon is selecting its sessions. It’s always incredibly impressive and humbling to see the great ideas that our community comes up with—and they’re all so good that making the official selections is definitely not an easy process!

This time, the Track Chairs had over 500 sessions to evaluate, and only 108 hours' worth of time to schedule. With the addition of the 25-minute talk option, we were able to accept 132 sessions to fill our programming time.

Four tracks—Being Human, Coding and Development, Business, and Site Building—were our most competitive. With 60+ sessions each, this really shows the diversity of content our community can speak to and wants to share.

After reading through each session, we're happy to present our selected sessions. Of the 132 sessions, 42 include at least one speaker who self-identified as part of one or more underrepresented groups (based on the Big Eight social identifiers question in our CFP). There will be voices from almost 90 companies. We're also excited to announce that a little over 37% of the featured sessions will include a new-to-DrupalCon speaker, providing new and different perspectives.

See the Selected Sessions

Sessions will be presented on Tuesday, Wednesday, and Thursday of DrupalCon, along with daily morning keynotes and exciting sponsor activities in the Exhibit Hall. Make sure to purchase your ticket at the early-bird price by 4 August 2017 before prices go up.

Join us at DrupalCon

Jul 26 2017

Introduction

One of the greatest improvements added in Drupal 8 was the Configuration Management (CM) system. Deploying a site from one environment to another involves somehow merging the user-generated content on the Production site with the developer-generated configuration from the Dev site. In the past, configuration was exported to code using the Features module, for which I am a primary maintainer.

Using the D8 Configuration Management system, configuration can now be exported to YAML data files using Drupal core functionality. This is even better than Features because a) YAML is a proper data format instead of the PHP code that was generated by Features, and b) D8 exports *all* of the configuration, ensuring you didn’t somehow miss something in your Features export.

“Drupal 8 sites still using Features for configuration deployment
need to switch to the simpler core workflow.”

Complex sites using Features for environment-specific or multi-site configuration should investigate the Config Split module. Sites using Features to bundle reusable functionality should consider whether their solutions are truly reusable and investigate new options such as Config Actions.

Features in Drupal 8

When we ported Features to Drupal 8, we wanted to leverage the new D8 CM system, and return Features to its original use-case of packaging configuration into modules for reusable functionality. New functionality was added to Features in D8 to help suggest which configuration should be exported together based on dependencies. The idea was to stop using Features for configuration deployment and instead just use it to organize and package your configuration.

We’ve found that despite the new core configuration management system designed specifically for deployment, people are still using Features to deploy configuration. It’s time to stop, and with a few exceptions, maybe it’s time to stop using Features altogether.

Problems using Features

Here is a list of some of the problems you might run into when using Features to manage your configuration in D8:

  1. Features suggests configuration to be exported with your Content Type, but after exporting and then trying to enable your new module, you get “Unmet dependency” errors.

  2. You make changes to config, re-export your feature module, and then properly create an update-hook to “revert” the feature on your other sites, only to find you still get errors during the update process.

  3. You properly split your Field Storage config from your Field Instance so you can have multiple content types that share the same field storage, but when you revert your feature it complains that the field storage doesn’t exist yet. This is because you didn’t realize you needed to revert the field storage config *first*.

  4. You try to refactor your config into more modular pieces, but still run into what seems like circular dependency errors because you didn’t realize Features didn’t remove the old dependencies from your module.info file (nor should it).  

  5. You decide to try the core CM process using the Drush config-export and config-import commands, but after reverting your features, config-export reports a lot of UUID changes. You don’t even know what UUID it’s talking about or which UUID changes you should accept.

  6. You update part of your configuration and re-export your module. When you revert your feature on your QA server, you discover that it also overwrote some other config changes that were made via the UI that somebody forgot to add to another feature.

  7. The list goes on.

Why Features is still being used

Given all of the frustrating complications with Features in D8, why do some still use it? After all, up until a few months ago it was the default workflow even in tools such as Acquia BLT.

Most people who still use Features typically fall into two categories:

  1. “My old D7 workflow using Features still seems to mostly work, I’m used to it and just deal with the new problems, and I don’t have resources to update my build tools/process.”

  2. “I am building a complex platform/multi-site that needs different configuration for different sites or environments and having Features makes it all possible. I don’t have to worry about non-matching Site UUIDs.”

People in the first category just need to learn the new, simpler, core workflow and the proper way to manage configuration in Drupal 8. It’s not hard to learn and will save you much grief over the life of your project. It is well worth the time and resource investment.

Until recently, people in the second category had valid concerns because the core CM system does not handle multiple environments, profiles, distributions, or multi-site very well. Fortunately there are now some better solutions to those problems.

Handling environment-specific config

Rather than trying to enable different Features modules on different environments, use the relatively new Config Split module. Config Split allows you to create multiple config “sync” directories instead of just dumping all of your config into a single location. During the normal config-import process it will merge the config from these different locations based on your settings.

For example, you split your common configuration into your main “sync” directory, your production-specific config into a “prod” directory, and your local dev-specific config into a “dev” directory. In your settings.php you tell Drupal which environment to use (typically based on environment variables that tell you which site you are on).

When you use config-import within your production environment, it will merge the “prod” directory with your default config/sync folder and then import the result. When you use config-import within your local dev environment, it will merge the “dev” directory with your default config and import it. Thus, each site environment gets its correct config. When you use config-export to re-export your config, only the common config in your main config/sync folder is exported; it won’t contain the environment-specific config from your “dev” environment.
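As a sketch of the settings.php wiring for this (the split machine names "dev" and "prod" and the APP_ENV environment variable are assumptions; substitute whatever your hosting provides), Config Split's per-environment status can be toggled through the standard $config override:

```php
<?php
// settings.php: activate exactly one split per environment.
// Assumes two Config Split entities with machine names "dev" and "prod".
$env = getenv('APP_ENV') ?: 'dev';
$config['config_split.config_split.dev']['status'] = ($env === 'dev');
$config['config_split.config_split.prod']['status'] = ($env === 'prod');
```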

Think of this like putting all your “dev” Features into one directory, and your “prod” Features into another directory. In fact, you can even tell Config Split which modules to enable on different environments and it will handle the complexity of the core.extension config that normally determines which modules are enabled.

Acquia recently updated their build tools (BLT) to support Config Split by default and no longer needs to play its own games with deciding which modules to enable on which sites. Hopefully someday we’ll see functionality like Config Split added to the core CM system.

Installing config via Profiles

A common use-case for Features is providing common packaged functionality to a profile or multi-site platform/distribution. Features strips the unique identifiers (UUIDs) associated with the config items exported to a custom module, allowing you to install the same configuration across different sites. If you just use config-export to store your site configuration into your git repository for your profile, you won’t be able to use config-import to load that configuration into a different site because the UUIDs won’t match. Thus, exporting profile-specific configuration into Features was a common way to handle this.

Drupal 8 core still lacks a great way to install a new site using pre-existing configuration from a different site, but several solutions are available:

Core Patches

Several core issues address the need to install Drupal from pre-existing config, but for the specific case of importing configuration from a *profile*, the patch in issue #2788777 is currently the most promising. This core patch automatically detects a config/sync folder within your profile directory, imports that config when the profile is installed, and properly sets the Site UUID and Config UUIDs so the site matches what was originally exported. Essentially you have a true clone of the original site. If you don’t want to move your config/sync folder into the profile, you can also just specify its location using the “config_install” key in the profile.info.yml file.
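
With that patch applied, pointing a profile at an external config directory is a single .info.yml key. The profile name and the relative path below are placeholders for illustration:

```yaml
# myprofile.info.yml -- "myprofile" and the path are hypothetical.
name: My Profile
type: profile
core: 8.x
# Only needed if config/sync does not live inside the profile directory:
config_install: ../config/sync
```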

This patch isn’t ideal for public distributions (such as Lightning) because it would make the UUIDs of the site and config the same across every site that uses the distribution. But for project-specific profiles it works well to ensure all your devs are working on the same site ID regardless of environment.

Using Drush

Another alternative is to use a recent version of “drush site-install” using its new “--config-dir=config/sync” option. This command will install your profile, then patch the site UUID and then perform a config-import from the specified folder. However, this still has a problem when using a profile that creates its own config since the config UUIDs created during the install process won’t match those in the config/sync folder. This can lead to obscure problems you might not initially detect that cause Drupal to detect entity schema changes only after cron is run.
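
For reference, the invocation looks roughly like this; the profile name and config path are assumptions for illustration:

```shell
# Install the site from the given profile, then patch the site UUID and
# import config from the specified directory.
drush site-install standard --config-dir=../config/sync -y
```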

Config Installer Project

The Config Installer project was a good initial attempt and helped make people aware of the problem and use-case. It adds a step to the normal D8 install process that allows you to upload an archived config/sync export from another site, or specify the location of the config/sync folder to import config from. This works for simple sites, but because it is a profile itself, it often has trouble installing more complex profile-based sites, such as sites created from the Lightning distribution.

Reusable config, the original Features use case

When building a distribution or complex platform profile, you often want to modularize the functionality of your distribution and allow users to decide which pieces they want to use. Thus, you want to store different bits of configuration with the modules that actually provide the different functionality. For example, placing the “blog” content type, fields, and other config within the “blog” module in the distro so it can be reused across multiple site instances. This was often accomplished by creating a “Blog Feature” and using Features to export all related “blog” configuration to your custom module.

Isn’t that what Features was designed for? To package reusable functionality? The reality is that while this was the intention, Feature modules are inherently *not* reusable. When you export the “blog” configuration to your module, all of the machine names of your fields and content types get exported. If you properly namespaced your machine names with your project prefix, your project prefix is now part of your feature.

When another project tries to reuse your “Blog Feature”, they either need to leave your project-specific machine names alone, or manually edit all the files to change them. This limits the ability to properly reuse the functionality and incrementally improve it across multiple projects.

Creating reusable functionality on real-world complex sites is a very hard problem and propagating updates without breaking or losing improvements that have been made makes it even harder. Sometimes you’ll need cross-dependencies, such as a “related-content” field that is used on both Articles and Blogs and needs to reference other Article and Blog nodes. This can seem like a circular dependency (it’s not) and requires you to split your Features into smaller components. It also makes it much more difficult to modularize into a reusable solution. How is your “related-content” functionality supposed to know what content types are on your specific site that it might need to reference?

Configuration Templates

We have recently created the Config Actions and Config Templates modules to help address this need. They let you replace the machine names in your config files with variables and store the result as a “template”. You can then use an “action” that references the template, supplies values for the variables, and imports the resulting config.

In a way, this is similar to how reusable functionality is achieved in a theme using SASS instead of CSS. Instead of hardcoding your project-specific names into the CSS, you create a set of SASS files that use variables. You then create a file that provides all the project-specific variable values and then “include” the reusable SASS components. Finally, you “compile” the SASS into the actual CSS the site needs to run.

Config Actions takes your templates and actions and “runs” them by importing the resulting config into your D8 site, which you then manage using the normal Configuration Management process. This allows you to split your configuration into reusable templates/actions and the site-specific variable values needed for your project. Config Templates actually uses the Features UI to help you export your configuration as templates and actions, making the process more usable.
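
To make the idea concrete, here is a purely illustrative pseudo-template. This is NOT the modules' actual YAML syntax; the keys, the `@project@` placeholder style, and the names are all invented to show the shape of the concept: a template with variables, plus an action supplying project-specific values.

```yaml
# Illustration only -- not Config Actions' real syntax.
# Template: a field storage with the machine-name prefix left as a variable.
blog_body_template:
  id: "field.storage.node.@project@_blog_body"
  field_name: "@project@_blog_body"
  type: text_long

# Action: instantiate the template with a project-specific value.
blog_body_action:
  source: blog_body_template
  variables:
    project: acme
```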

Stay tuned for my next blog where I will go into more detail about how to use Config Actions and Config Templates to build reusable solutions and other configuration management tricks.

Conclusion

While the Drupal 8 Configuration Management system is a great step forward, it is still a bit rough when dealing with complex real-world sites. Even though I have blogged in the past about “best practices” using a combination of Features and core CM, recent tools such as Config Split, installing config with profiles, and Config Templates and Actions all help better solve these problems. The Features module is really no longer needed and shouldn’t be used to deploy configuration. However, Features still provides a powerful UI and plugin system for managing configuration and in combination with new modules such as Config Actions it might finally achieve its dream of packaging reusable functionality.

To learn more about Advanced Configuration Management, come to my upcoming session at GovCon 2017 or DrupalCamp Costa Rica.  See you there!

Jul 26 2017
Jul 26
Week 8

The eighth week of Google Summer of Code 2017 has come to a close, bringing the onset of the second evaluation period. Presently, I'm working on porting the 'Uc Wishlist' module to Drupal 8 under the guidance of my mentor, Naveen Valecha. The conceptualization and understanding developed during the first phase of coding have been of immense help in completing the assigned ports.

This week in particular saw the completion of a significant amount of work on the ports assigned for the second phase of the coding regime. The work done this week can be briefly described as follows:

  • Fixed the 'Add to wishlist'/'Remove from wishlist' submit handler for the second port. Previously, clicking the 'Add to wishlist' button showed an error page and the product couldn't be registered. That error has been fixed: products can now be added to and removed from the wishlist, with drupal_set_message() reporting the addition or removal.

The visuals of the current status of the port are displayed here:

[Screenshot: add/remove wishlist buttons]

Last week I summarised the implementation plan for the third port, i.e., porting the ‘View/Update wishlist’ functionality. The work done this week to complete the third port can be summarised as follows:

  • The first thing I did was define an interface that my UcWishlist entity type class can implement, extending the default ContentEntityInterface along with EntityOwnerInterface. Inside the module's src/Entity folder, I created a file called UcWishlistInterface.php.

  • Next, I focused on the crux of defining my own content entity class: I created a folder inside the src/ directory called Entity, and within it a file called UcWishlist.php.

  • What we have here is a simple class defining the required entity properties. It extends the default ContentEntityBase class and implements UcWishlistInterface. Using annotations, we tell Drupal about our UcWishlist entity type: @ContentEntityType tells Drupal that this is a content entity type, and within its definition an array-like structure describes the wish list contents.

  • The next thing we need to do is create the forms we referenced in the annotations above, for adding, editing and deleting wish list entities. The cool thing is that the form for adding can be reused for editing as well. For that I created a form named WishlistViewForm.php in the src/Form directory.

  • As declared in the content entity class definition, I created a class file responsible for building the overview page of the entities. Straight in the src/ folder of the module I created a UcWishlistListBuilder.php class file extending EntityListBuilder.
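
A trimmed sketch of the interface and entity class described above might look like this; field definitions, most handlers, and many annotation keys are omitted, and the specific keys shown (base table, entity keys) are assumptions:

```php
<?php

namespace Drupal\uc_wishlist\Entity;

use Drupal\Core\Entity\ContentEntityBase;
use Drupal\Core\Entity\ContentEntityInterface;
use Drupal\user\EntityOwnerInterface;

// src/Entity/UcWishlistInterface.php would hold just this interface:
interface UcWishlistInterface extends ContentEntityInterface, EntityOwnerInterface {}

/**
 * Defines the wish list content entity.
 *
 * @ContentEntityType(
 *   id = "uc_wishlist",
 *   label = @Translation("Wish list"),
 *   base_table = "uc_wishlist",
 *   handlers = {
 *     "list_builder" = "Drupal\uc_wishlist\UcWishlistListBuilder",
 *   },
 *   entity_keys = {
 *     "id" = "wid",
 *     "uuid" = "uuid",
 *   },
 * )
 */
class UcWishlist extends ContentEntityBase implements UcWishlistInterface {}
```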

When no products have been added and a user clicks the 'Wishlist' menu link, the user is appropriately directed to the page controlled by the myWishlist controller class, which displays the correct message through drupal_set_message(). This part works as planned.

However, when a product is added to the wishlist and the wishlist menu link is then accessed, it leads to an error page. Also, when the /admin/store/customers/wishlists route is accessed, the current wishlist status shows 'Expired' by default, which might be related to the problem. This is the only problem occurring as of now; everything else works as planned and the third port is done.

D8 visuals for the completed port, ‘View/Update wishlist’:

[Screenshot: View/Update wishlist]

Regarding the fourth port, i.e., ‘Search Wishlist’ functionality, the implementation is to be carried out in the following manner for its completion:

  • The WishlistSearchForm creates the search wishlist textfield, accessed through Search API rather than Drupal core Search, and includes the following elements:

    • Search submit button: To execute the required search.

    • Search results list: Displaying the expected search results.

    • Search text box: if keywords are entered, check for user, wish list title, or address matches.

  • After installing Search API, an index object can be created, followed by adding the fields that have to be searched to the index.

  • This way we have all the required data inside the index object; we then execute the search by triggering the index's query in the form's submit handler.
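
The submit-handler step above could be sketched roughly as follows; the index ID 'wishlists' is a placeholder, and this is only an outline of the Search API query flow, not the actual module code:

```php
<?php

use Drupal\search_api\Entity\Index;

// Inside WishlistSearchForm::submitForm() (sketch).
// "wishlists" is a hypothetical index machine name.
$index = Index::load('wishlists');
$query = $index->query();
// Match the submitted keywords against the indexed wish list fields.
$query->keys($form_state->getValue('keywords'));
$results = $query->execute();
foreach ($results as $item) {
  // Render each matching wish list in the results list.
}
```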

Winding up, these were the objectives and concepts learned during the eighth week of the coding phase in GSoC’17. I hope to learn many more new concepts in the coming weeks for the successful completion of the port.

Cheers!


 


Jul 26 2017
Jul 26

I have been working on porting Examples for Developers from Drupal 7 to Drupal 8 as part of this year’s Google Summer of Code (GSoC), under the guidance of Navneet Singh. Currently we are working on porting the AJAX examples. (AJAX stands for Asynchronous JavaScript and XML, a technique for creating better, faster, and more interactive web applications with the help of XML, HTML, CSS, and JavaScript.) Out of 21 examples we were able to port 16; of those 16, 14 modules run properly without any error, but 2 of them have a few errors in the logic, and my mentor asked me to rebuild the code from scratch and check for errors. The code is maintained inside my GitHub repo. Also, visit my previous blog post to get a clear-cut idea of the module I am working with and to learn more about AJAX.

Things I ported this week:

  • Dependent dropdown degrades without JS
[Screenshot: dependent dropdown with JS turned off]
  • Dynamic Section (with graceful degradation ) and Dynamic Section with JS turned off: This example demonstrates a form which dynamically creates various sections based on the configuration in the form. It deliberately allows graceful degradation to a non-javascript environment. In a non-javascript environment, the "Choose" button next to the select control is displayed; in a javascript environment it is hidden by the module CSS. The basic idea here is that the form is built up based on the selection in the question_type_select field, and it is built the same whether we are in a javascript/AJAX environment or not.
[Screenshots: dynamic section with and without JS]
  • Add more button (with graceful degradation): This example shows an add-more and a remove-last button. The AJAX version does it without page reloads; the non-js version is the same code but simulates a non-javascript environment, showing it with page reloads.

[Screenshots: add-more button with and without JS]
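
The dependent-dropdown pattern these examples demonstrate boils down to a #ajax callback on the first select element that rebuilds the second. A condensed sketch using Drupal 8's Form API follows; the element names and options are illustrative, not the actual example-module code:

```php
<?php

use Drupal\Core\Form\FormStateInterface;

// Inside a class extending FormBase (sketch).
public function buildForm(array $form, FormStateInterface $form_state) {
  $form['category'] = [
    '#type' => 'select',
    '#title' => $this->t('Category'),
    '#options' => ['fruit' => 'Fruit', 'veg' => 'Vegetables'],
    '#ajax' => [
      // Re-render the dependent dropdown when this value changes.
      'callback' => '::updateItems',
      'wrapper' => 'items-wrapper',
    ],
  ];
  $form['item'] = [
    '#type' => 'select',
    '#title' => $this->t('Item'),
    '#prefix' => '<div id="items-wrapper">',
    '#suffix' => '</div>',
    '#options' => $this->getItems($form_state->getValue('category')),
  ];
  return $form;
}

public function updateItems(array $form, FormStateInterface $form_state) {
  // Return just the piece of the form to re-render.
  return $form['item'];
}
```

Without JavaScript, the same form still works because Drupal rebuilds it on a full page reload, which is exactly the graceful degradation the examples show.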

For the next week I will be working on the remaining modules, which are almost complete but still have a few issues.

Jul 26 2017
Jul 26

Oftentimes, when we receive inbound leads from companies looking for a new website or a website redesign, we get a list of wants and needs, how fast it can be done, and how much it will cost, but the potential client and we forget the people who will be using the platform the most.

WHO REALLY USES THE WEBSITE?

It is no surprise that marketing teams own the website once it is completed. As I sit here and write this blog, I can assure you that I am the main user of our website: I write blogs and make many of the content changes that occur throughout the site.

Before CMSs (Content Management Systems), most websites were static, never-changing brochure sites; when a change was required, it more than likely had to be made through the IT department or a developer.

In recent years, the website has become the storefront for most businesses, including ours, serving as a guide to how we work, what services we provide, and educational material such as white papers or blog posts. Marketing and sales teams now make more changes than ever to bring in potential customers, and that involves changing or adding to a once-brochure website.

What to look for in Content Management Systems

As mentioned before, website content used to always be updated by a developer. Today's popular CMSs allow digital marketers to take control of the website without having to go through another person for changes, especially when some content is timely.

In today's ever growing sales and marketing integration of teams, CMS and Marketing Automation services have had to learn to work together.

Handle Content Easily

If you still have to go through developers to get any kind of content change on your website, then the platform has got to go! Empowering marketing and sales teams to do their own work on the site should be a high priority. Think of the workflows and time saved if they could do their work efficiently.

Integrations for a Holistic Website

Marketing Automation has grown exponentially in the last couple of years because of its ability to let marketers know who their target audience is, retain them, market exactly what they're searching for, and eventually turn them into paying customers. Having a CMS that allows integration with marketing technology is where website builds are trending.

Platforms like HubSpot, Marketo, and Pardot, to name a few, are known to integrate with Drupal CMS and WordPress CMS almost seamlessly. Now, not only will marketers have access to the CMS but also to the Marketing Automation software to build forms and landing pages without the help of developers. 

Analytics Dashboards Built Right Into the CMS

Marketers, digital marketers to be specific, are very aware of the analytics metrics that make or break the success of a website. Knowing the traffic to content and the CTR (click-through rate) on CTAs (calls-to-action) is necessary in order to know when something needs to be changed or left alone.

It’s very convenient when a CMS has a plugin or module that provides quick access to website metrics without ever having to leave the site, because the reporting is built in. One such integration is IntelligenceWP: the WordPress plugin allows for easy Google Analytics setup as well as a built-in dashboard to display your marketing metric success.

SEO Friendly

Nothing is worse than having a website that is beautifully designed but has no traffic or optimized pages for it to rank in the search engines. Drupal and WordPress CMS are both terrific at search engine optimization. Either through modules or plugins, both CMS’s allow for marketers to customize individual pages for optimization based on the content of the page.

When choosing a platform for your new website, it will benefit the content and digital marketers significantly if they can add meta descriptions, titles, keywords, and redirects by themselves without the help of someone in IT.

Scalability

As most websites these days serve as more than a brochure, more often than not marketing teams have content calendars full of potential blog posts, additional pages, content changes, and new landing pages. Why would you have a website designed without the ability to grow?

Choosing a CMS platform should also address whether it will withstand 5+ years of content growth without running into problems. Avoid the situation where, a few years down the road, you have to choose a different platform because the old one could not handle the content growth.

What now?

Now you have 5 key components to look for when choosing a CMS platform to build a new website, redesign, or even migrate. Need some more details? We're good at Drupal. We have a blog post on Why You Should Be Using Drupal for Your Website; take a read and then give us a call when you're ready for a new site!

Jul 25 2017
Jul 25

We are keeping busy this summer! Immediately following Drupal Govcon we are sending Aimee and AmyJune to Drupal Camp Los Angeles. Fun in the sun while doing the Drups'.

Come find us at our sessions, panels, or in the hall to pick up some cool stickers and pleasant conversation.

Sessions:

Please check the DrupalCamp LA sessions schedule for more details about times and locations as they become available.

Dred(itor) the Issue Queue? Don't - It's Simple(lytest) to Git in!

AmyJune Hineline

Every newbie dreams of being a contributor to the Drupal project. But where do you begin? And more importantly, what are some of the tools to help navigate the adventure successfully? In this session, we will go over a couple of the gadgets necessary for working in the Drupal issue queue as a novice. We will also have a lightning round demonstrating the process of creating an issue, writing a patch, uploading the fix to Drupal.org, and then reviewing the patch for RTBC.

Tools of the trade:

Simplytest.me -

  • What is Simplytest.me? And how does it make life easy?
  • Evaluating a module and its dependencies
  • Applying a patch
  • Uploading new modules to current test site

Git Client -

  • Why use a Git client versus only using Command line
  • How to create a branch and properly name it for the issue
  • Committing changes to the repository
  • Creating a patch

Dreditor -

  • What is Dreditor?
  • Installing Dreditor and a quick demo of functionality
  • How does it help in the issue queue?

Planning & Managing Migrations

Aimee Degnan

Drupal 8 is great! Yay! Now it’s time to migrate!

There are many moving parts in a migration project compared to a new website build. Ensure your migration is a success with these architectural guidelines, helpful planning tools, and lessons learned.

The Order of Operations is important in migration projects. Keeping excellent, detailed documentation of each phase of the migration is paramount to success. These migration projects can be lengthy. Working in an efficient manner will provide your team the fitness to complete the project!

Topics Covered:

  • Types of migrations (Single Pass, Incremental, Hybrid).
  • Major phases of a migration project.
  • Planning efforts and documentation created for each phase.
  • Architectural considerations to migration - content, infrastructure, etc.
  • What migration support is provided “out of the box” and what is “custom development”?
  • Role-specific considerations, tools, and needs.
  • Gotchas, facepalms, and “remember tos”.

What level of knowledge should you have coming into this session?

  • Be familiar with basic Drupal terminology, such as: a node, content type, and field.
  • Understand simple project management concepts such as: resources, dependencies, tasks, and estimation.
  • Have a passion for (or fear of) juicy migration projects.

What will your session accomplish?

  • Prepare the community for Drupal 8 migrations!
  • Identify key success and failure points within migration projects.
  • Provide tools for project managers and development teams to plan and architect migrations.

What will attendees walk away having learned?

  • Understand the scope of a migration project.
  • Terminology and concepts to communicate with management and development teams.
  • Practical samples of migration planning documents.
  • How much time and money can be wasted if a migration isn't well planned and documented.

Harness the Power of View Modes

Aimee Degnan

View Modes are the site-building glue that brings your content strategy, design, media strategy, and UX development together to actually create your web displays.

View Modes have been in Drupal for some time, but what do they really do? Why are they so powerful? With Drupal 8, View Modes are now even more relevant with the standardization of Entity and Field management across entity types.

Think beyond the Teaser and harness the power of View Modes!

Topics Covered:

  • View Modes in core:
    • Anatomy of a view mode.
    • Common applications of view modes across entity types.
    • View modes and media (media entities and file displays!).
    • What the “Default” view mode does vs. Full Content.
  • Architecting View Modes for your site:
    • Planning your View Mode + Content + UX + Component Library strategy.
    • Interacting with layout solutions. (Panels / Display Suite / Views)
    • Extending view modes in code.
  • Lessons Learned with View Modes:
    • Interactions of view modes across entity types.
    • Naming is important. What does “Teaser” really mean?!
    • But why can’t I use that view mode?!

What level of knowledge should you have?

This session is listed as "Beginner", but the concepts and information can be applied to more advanced architectures.

  • Coming as a Site-Builder? You should know how to create content types and have reordered fields on "Manage Display" at least once.
  • Are you a project manager or designer? Be familiar with basic Drupal terminologies like a node, content type, and field.
  • Are you a Drupal Developer? You know enough to join this session. :)

What will This session accomplish?

  • Share with the community that there can be more than just Full Content and Teaser.
  • Provide tools for Site Builders to create powerful displays with little or no coding.
  • Explain why View Modes are a powerful tool in your Drupal tool chest.

What will attendees walk away having learned?

  • Terminology and concepts to connect design, content, and technical execution.
  • View Modes applied to different entities may mean different things.
  • Practical knowledge to apply for their own site extension.
  • Layers of View Modes can and will interact with each other. Layering must be deliberate.

By Role:

  • Project Managers: Understand that View Mode creation and extension requires effort which you need to include in planning.
  • Content Strategy / Analysts: How view modes interact with content and functionality throughout the site.
  • Designers: The language and concepts to communicate your design vision to the development team.
  • Site Builders: Build what they are asked by the design and project management team. :)
  • Drupal Developers: Understand why all these non-coders on your team have created View Modes when you are asked to help possibly extend their displays. :D

Panel Discussions:

Finding a Work-family Balance as a Drupal Professional

Rain Breaw Michaels, with panelists AmyJune Hineline and Oliver Seldman

This session is meant to be a panel discussion to help

  • New parents navigating work/family life
  • Soon to be parents who intend to continue working
  • Drupal business owners hoping to retain team members transitioning into family life

It is possible to do both. It is awesome to be able to do both. But it is not easy, and should not be attempted in a vacuum.
Another important note: this description talks about parenthood, but the session is really about being a caregiver and a professional, which is not limited to parenthood and children.

Fostering Diversity in the Professional Drupal Community

Rain Breaw Michaels, with panelists Aimee Degnan, Zakiya Khabir, Ashok Modi

A panel discussion about how to foster diversity and inclusion in the professional Drupal community. This is intended to be an honest conversation about where our industry is today, how it is doing well, how it can be doing better, the resources available, and the cultural or practical realities holding us back, as well as solutions for overcoming them. Diversity and inclusion refers to all levels of difference, including gender, race, physical ability, age, religion, etc.

Remember to find us to get some awesome SWAG...

(HINT: You might find AmyJune at the beer track as well!)

Jul 25 2017
Jul 25

We started receiving reports of broken Lightning builds due to the release of doctrine/common:2.8.0 and/or doctrine/inflector:1.2.0 which require php ~7.1 and php ^7.0 respectively.

Lightning actually doesn't have a direct dependency on anything under the doctrine namespace. The dependencies come from drupal/core. So should drupal/core constrain doctrine/common to <=2.7.3 so that it continues to support php 5.6? No.

If you follow the dependency tree for what happens when you run composer update for a Lightning project in a php 5.6 environment, it looks like this:

  • acquia/lightning:2.1.7 requires:
  • drupal/core:~8.3.1, drupal/core:8.3.5 requires:
  • doctrine/common:^2.5, doctrine/common:2.8.0 requires php ~7.1, so it will resolve to 2.7.3, which requires:
  • doctrine/inflector:1.*, doctrine/inflector:1.2.0 requires php:^7.0, so it will resolve to 1.1.0, which simply requires php:>=5.3.2

So why are we getting reports of broken builds?

The problem arises when:

  1. Your project commits its composer.lock file (which it generally should)
  2. Your development environment has a different php version than your CI or production/test environment

If you have php 7.0 installed locally, the dependency resolution for doctrine/inflector will look like this:

  • acquia/lightning:2.1.7 requires:
  • drupal/core:~8.3.1, drupal/core:8.3.5 requires:
  • doctrine/common:^2.5, doctrine/common:2.8.0 requires php ~7.1, so it will resolve to 2.7.3, which requires:
  • doctrine/inflector:1.*, which will resolve to doctrine/inflector:1.2.0

This will lock doctrine/inflector to v1.2.0, which requires php ^7.0. Then when you push to your php 5.6 CI environment, you'll get an error like this:

Problem 1
    - Installation request for doctrine/inflector v1.2.0 -> satisfiable by doctrine/inflector[v1.2.0].
    - doctrine/inflector v1.2.0 requires php ^7.0 -> your PHP version (5.6.24) does not satisfy that requirement.
Problem 2
    - doctrine/inflector v1.2.0 requires php ^7.0 -> your PHP version (5.6.24) does not satisfy that requirement.
    - doctrine/common v2.7.3 requires doctrine/inflector 1.* -> satisfiable by doctrine/inflector[v1.2.0].
    - Installation request for doctrine/common v2.7.3 -> satisfiable by doctrine/common[v2.7.3].

The solution, of course, is to run composer update in a dev environment that matches your CI/test/production environment.
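
If you can't match PHP versions exactly, another option is to pin Composer's dependency resolution to the lowest PHP version you deploy to, via the platform config in your project's composer.json (the version shown matches the error above):

```json
{
    "config": {
        "platform": {
            "php": "5.6.24"
        }
    }
}
```

With this in place, composer update resolves packages as if PHP 5.6.24 were installed, regardless of the PHP version on your local machine.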


Jul 25 2017
Jul 25

Configuration page for my module

Before I did any additional work on the encryption/decryption processes, I had to implement a more robust way of declaring which entities and fields to encrypt. After talking to Colan and Talha, I decided to go with the following approach. The first step was to create a configuration page:

Configuration page

Administrators will be able to choose which entities to encrypt and then have a set of checkboxes with field names to select from. After selecting which fields to encrypt, this information will be saved to Drupal’s default config storage option.

Settings

In order to create the above page, I had to get a list of all content entities, create a route with an {entity} parameter, and then retrieve all fields for the given entity. The next step was to filter out all read-only ones. I used checkboxes with field names loaded from config storage.
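As a rough sketch of that last step (the config object name and keys here are invented for illustration, not my module's actual ones), saving the selected fields per entity type to Drupal's default configuration storage could look like this:

```php
// Hypothetical config object and keys, shown only to illustrate the approach.
\Drupal::configFactory()
  ->getEditable('my_encryption.settings')
  ->set('encrypted_fields.node', ['title', 'body'])
  ->save();

// Later, the selected field names can be read back and handed to JavaScript.
$fields = \Drupal::config('my_encryption.settings')
  ->get('encrypted_fields.node');
```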

Fields list

After doing this, I was able to retrieve an array of fields that the site’s admin wants to encrypt on the JavaScript side. I have also added a few lines to the .install file so that it now adds the title and body fields to the article node type by default, to improve testing and the user experience in the future.

Table for storing encrypted fields

As mentioned in the previous week's post, I was changing the design a bit as I reached some limitations. Now that I have a list of which fields need to be encrypted, I can make my encryption function more robust, instead of using a hard-coded list of field IDs.

Encrypted fields list

I have now reworked the encryption system to include an array of field names, with the entity type as the key. The next step was to use a similar approach for decrypting fields. I now have a robust way of determining which fields to encrypt; it works great for basic title and body fields, but other fields may need some more tweaking.

Removing the initial development scaffolding

As my module is more stable now, I went through my code and removed the sandbox and scaffolding functions which I used to support the initial development phase.

A few examples: I removed the private key field and the REST resource method for retrieving it, removed the sandbox page, and went through my JavaScript code to remove debugging options.

Start designing test cases

Now that my module has its main functionality done and is fairly stable, I can start writing initial tests. If I change any of the core features, I can always go back a step and update the existing tests. I have been trying to write tests as I coded, but did not get satisfactory results.

Another interesting thing to look into is testing the creation of encrypted nodes, as my module is mostly based on JavaScript features. Starting the following week, I will include JavaScript tests for encryption and decryption on the front end using BrowserTests.

Here are some good resources on writing tests in Drupal 8:

Interesting issues

Suddenly, the method for creating new nodes through REST that used to work started giving the above error. I realized that I had switched to a different Drupal instance, and that on the previous one I had manually enabled this REST call along with the JSON format.

That reminded me that I have to gather such issues in a document I can then use to create a first-time user guide: a set of instructions for the installation of my module.

Plans for week 9

  • Start writing tests for PHPUnit and JavaScript features using BrowserTests.

  • Decrypting ciphertext when viewing nodes.

  • Start implementing re-encryption process for given node outside of an edit/create forms.

  • Write a few PHP functions which are safe to be handled on the server side.

Jul 25 2017
Jul 25

If you've been doing Drupal development for any amount of time, chances are that you have installed the Drupal Coder module to help you write clean, compliant code. Coder allows you to check your Drupal code against the Drupal coding standards and other best practices using PHP_CodeSniffer. It can be configured to work in your IDE, and also works on the command line.

Writing code according to standards helps avoid common errors, and helps teams understand the code faster.

I installed Coder using Composer per the well-written instructions. Using this method installs it globally, so I can use it on all of my projects, and installs all the dependencies, including PHP_CodeSniffer.

I was recently tasked with working on a WordPress site, and I started looking into the WordPress Coding Standards. My setup didn't fit the standard installation method since I already had PHP_CodeSniffer installed globally using Composer. I had to do a little digging to add these additional standards to my existing setup.

Here is a quick recap of how to install Coder using Composer.

Install Coder

composer global require drupal/coder

To make the commands available globally, add this line to your ~/.bash_profile, and make sure it is sourced (or restart your terminal).

# Composer recommended PATH
export PATH="$PATH:$HOME/.composer/vendor/bin"

Tell phpcs where the Drupal and DrupalPractice standards are located:

phpcs --config-set installed_paths ~/.composer/vendor/drupal/coder/coder_sniffer

Verify it worked with:

phpcs -i

You should see:

The installed coding standards are MySource, PEAR, PHPCS, PSR1, PSR2, Squiz, Zend, Drupal and DrupalPractice

You can now navigate to your Drupal project and run the following command:

phpcs --standard=Drupal file.name

Install WordPress Coding Standards

Thanks to some help I found in the issue queue, here are the steps to install the WordPress Coding Standards globally using Composer.

composer global require wp-coding-standards/wpcs:dev-master

Again, to make these commands available globally, make sure you have this line in your ~/.bash_profile, and that it is sourced (or restart your terminal).

# Composer recommended PATH
export PATH="$PATH:$HOME/.composer/vendor/bin"

As we did with Drupal, we need to tell phpcs where the WordPress standards are located. We use the same installed_paths configuration setting, and use a comma to list both the Drupal and WordPress paths.

phpcs --config-set installed_paths $HOME/.composer/vendor/drupal/coder/coder_sniffer,$HOME/.composer/vendor/wp-coding-standards/wpcs

Verify it worked with:

phpcs -i

You should now see:

The installed coding standards are MySource, PEAR, PHPCS, PSR1, PSR2, Squiz, Zend, Drupal, DrupalPractice, WordPress, WordPress-Core, WordPress-Docs, WordPress-Extra and WordPress-VIP

You can now navigate to your WordPress project and run the following command:

phpcs --standard=Wordpress file.name

Add aliases

If you've worked with me, or read my posts before, you know I love aliases. They streamline your process and help make you more productive. Add these aliases into your .bash_profile, .bashrc, or wherever you keep your aliases, and source it, or restart your terminal.

alias drupalcs="phpcs --standard=Drupal --extensions='php,module,inc,install,test,profile,theme,css,info,txt,md'"

alias wpcs="phpcs --standard=Wordpress"

After this you can simply type drupalcs folder_name or wpcs file.name and start writing better code!
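Alternatively, PHP_CodeSniffer can read a project-level phpcs.xml ruleset, so the chosen standard and file extensions travel with the repository and a bare phpcs command does the right thing for everyone on the team. A minimal sketch for a Drupal project:

```xml
<?xml version="1.0"?>
<ruleset name="project">
  <!-- With this file committed, run plain `phpcs` from the project root. -->
  <rule ref="Drupal"/>
  <rule ref="DrupalPractice"/>
  <arg name="extensions" value="php,module,inc,install,test,profile,theme"/>
</ruleset>
```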

Jul 25 2017
Jul 25

Migrating to Drupal 8 will save you time, effort, and money in the future. It’s a fact! Discover the great news about easy upgrades and backwards compatibility.

Technologies rush to the future, and website-building platforms run to keep up with them. Drupal is no exception — indeed, it’s a great example of it. Drupal 8 has had a great leap ahead thanks to its mobile-first nature, multi-language, accessibility, and editing enhancements, modern PHP, handy configuration management and so much more!

That’s Drupal’s essence. Each major release is a real gift box of brand-new features and better usability to make customers and developers happy. However, in this ultimate happiness, there always used to be a little spot of darkness.

A little shadow that used to hang over Drupal new releases

New versions seemed to burn the bridges between themselves and the old ones. It was like starting with a clean slate. Backwards compatibility was never a priority. Moreover, providing it could hamper website performance. So backwards compatibility was sacrificed in the name of progress.

As a result of these abysses between versions, new versions traditionally presented a challenge to developers across the globe who had to learn the fresh release from top to bottom.

Outdated releases used to go “overboard,” just as happened to Drupal 6, which became officially unsupported and stopped getting security or other updates. This left Drupal 6 website owners with two options: upgrade, or pay for extended support.

Depending on the amount of custom functionality, major website upgrades (from Drupal 6 to Drupal 7, from Drupal 7 to Drupal 8, etc.) could often be lengthy and costly. Yes, it’s worth it! But it’s also a little bothersome.

What if it were possible to get smoother upgrades? This was the dream of Drupal’s founder Dries Buytaert and his team, which they have now successfully brought to life.

Easy upgrades and backwards compatibility starting with Drupal 8

Now the shadows are removed and the abysses bridged! Congrats to developers and site owners, because Drupal is finally becoming backwards compatible.

This means that each new update will be compatible with previous releases. Moreover, when Drupal 9 comes out, it is going to be backwards compatible with Drupal 8.

This is a sure path to fast and easy upgrades, both between minor versions and between major versions — provided you are using the latest APIs and avoid the deprecated code. Dries Buytaert made an announcement about easy upgrades that raised a lot of excitement. Let’s look at the details.

The continuous innovation model

Drupal 8 is the first version to adopt the continuous innovation model. You no longer need to wait for years to see a brand-new release. The process will be more gradual. Minor versions will come regularly, about twice a year, offering lots of lucrative functional niceties and, as an awesome bonus, a smoother upgrade path from one to the next.

Deprecated code

New functionality and backwards-compatible changes will be regularly introduced to Drupal 8. In this process, more and more code will be marked as deprecated. When there is too much deprecated code, Drupal 9 will be released without the deprecated systems. According to Dries, modules that use the freshest Drupal 8 APIs and avoid deprecated code will be fully functional in Drupal 9.

Drupal 9 and beyond

In other words, Drupal 9 will be almost the same as the latest minor release of Drupal 8, but without the deprecated code. Almost the same? Yes, but still different in one very important way. Moving to Drupal 9 will remain a very lucrative decision, because, when it comes out, Drupal 8 will stop getting new features, and the ninth version will become the new focus of the Drupal community’s attention.

And so it goes again and again, with Drupal 10, 11 and more! The progress is never-ending, which is awesome. And this progress is now more available than it has ever been.

A beneficial decision

So there is just one step separating Drupal website owners from being forever free to make fast and easy upgrades with no hassle, with a considerable saving of time, money and effort. This step is migrating to Drupal 8 now, if you are on an older version.

Come to the bright side, and move to Drupal 8 with us ;) We have expert Drupal 8 developers on the team. Migrate once and enjoy the benefits of backwards compatibility forever!

Jul 25 2017
Jul 25

This is an example of anti-virus implementation with an Ubuntu server.

Our back-office management solution allows users to upload files in various sections of the application for storage or file sharing. For this reason, checking uploaded files for viruses is an important advantage.

We use the ClamAV integration module for Drupal 8.

1) Install ClamAV on Ubuntu

Installation on an Ubuntu server is straightforward. However, it is better to also install the clamav-daemon and clamav-freshclam packages for the settings described later, for example: sudo apt-get install clamav clamav-daemon clamav-freshclam

You can test with clamscan -r /home, for instance.

For further options you may refer to ClamAV website.

2) Install and set-up Drupal module

Module installation on Drupal 8 has no specific requirements.

As indicated on the module page, "Daemon mode" is preferred when executing the scan.

In the settings page (/admin/config/media/clamav), select "Daemon mode (over Unix socket)" as the scan mechanism.

You need to indicate the path of the socket file; it can be found in the ClamAV configuration file /etc/clamav/clamd.conf.

Input the file path into the next setting:
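For reference, on a default Ubuntu installation the socket path is declared in /etc/clamav/clamd.conf by a line like the following; verify the actual value on your own system:

```
LocalSocket /var/run/clamav/clamd.ctl
```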

3) Test

When uploading a file to the server via any upload interface, the file is scanned and validated. The scanning process is logged:

The EICAR test virus file is filtered out when uploaded:

If you have implemented ClamAV with Drupal and have further comments, please feel free to share them.

Thank you.

Jul 25 2017
Jul 25

Last time, we looked at the most popular Drupal modules. There are around 12,000 modules available for Drupal 7 and 3,000 for Drupal 8, of which only 1,000 are in a stable version. Not as many as some would perhaps expect. However, a lot of them make our lives easier each day, so this time we will look at the top Drupal 8 modules.

Firstly, we must point out that the modules already covered in the blog post Most popular Drupal modules will be left out. There, we already presented which of them are available for Drupal 8, and their popularity makes them useful for the newest version of Drupal as well. Where to start, then?

We will kick off with the module which successfully replaced the Administration menu: Admin Toolbar, which is probably the first module you have to add to your Drupal 8 site. The entire menu is responsive, and you can quickly access the sub-items in the toolbar.

The second choice is Metatag. It's one of the SEO modules we have already presented. The module doesn't just cover the description and the title; it also makes sure that your content is going to look good when you share it on social media, giving you control over how your content appears when shared.

 

Drupal 8 Module Metatag

 

We continue with Devel, a module which has been with the Drupal community for a long time and has almost four million downloads. It is a great module for developer debugging. Moreover, its submodules include some other useful features for developers and themers as well.

In Drupal 7, the module for managing media was Media, which in Drupal 8 was broken up into smaller modules. That means you can manage media with Entity Browser, Media Entity, File Entity Browser and Entity Embed. Entity Browser is the starting point, with which you are able to add File Entity Browser to your Drupal 8 site, allowing you to reuse images or files across different pieces of content.

Media Entity transforms a YouTube video, a tweet on Twitter, an Instagram photo, a local file, or another media resource into an entity, while Entity Embed allows any entity to be embedded within a text area using a WYSIWYG editor. The WYSIWYG editor is also worth mentioning: it allows Drupal to replace the text area with CKEditor. Without the described media modules, content editors have to upload the same picture again and again. But that changed with Drupal minor version 8.3.0, where you can drag and drop images into image fields in Quick Edit mode. For anything more, we’ll have to wait for the next minor or perhaps major releases.

 

Drupal 8 Module File Entity Browser

 

If you have trouble with spam, Honeypot is the right solution for you. If you want to link content with a nice interface for editors, you should use Linkit, which provides an autocomplete feature for all internal and external links.

The list could go on, because there are a lot of Drupal 8 modules which deserve to be mentioned, but every story needs an ending. So we will conclude our list with Webform, a module used for making forms and surveys on a Drupal site. But don't worry, that was not our last post about Drupal modules, because in the future we will also look at some of the most popular Drupal (8) modules we use at AGILEDROP.

Jul 24 2017
Jul 24

DevOps is the union of development, operations, and quality assurance -- but it's really the other way around. You start with the quality -- developing tests to ensure that things that have broken in the past don't break in the future, making sure the production environment is in a known, fully reproducible state, and setting protections in place so you can roll back to this state if anything goes wrong. Next comes operations automation -- building out operational tools that ensure snapshots are happening, deployments happen in an entirely consistent way, and environments are monitored carefully for performance degradation as well as simple uptime. Finally, you're in a safe place where you can develop new functionality without risking major breakage.

This is the philosophy we employed while building out our DevOps practice. We apply these principles to the very process we use to develop new sites as well as additions to existing sites -- as we flesh out one part of our pipeline, the next one becomes easier to get in place.

DevOps as a project

Our best example of a DevOps project is what we have developed internally to manage dozens of production Drupal and WordPress websites. At the center of it all is a chat bot we've written, named Watney. Watney gets input from all sorts of internal systems, including git repositories, configuration management tools, continuous integration tools, project management systems, and developers. It then enables/disables pipelines and kicks off jobs as needed (and authorized), updates issues, logs activities, and sends notifications as jobs are completed.

Each day Watney triggers a job that checks the environment of every production website we manage. It alerts us to any changes that have not been properly tracked in configuration or code management. It also alerts us to any sites that have changes staged but not yet deployed. As new work is done, Watney kicks off Behavior-Driven Development (BDD) tests automatically, and reports the results. If the tests pass, the site is automatically marked as "Ok to stage"; otherwise it requires attention from a developer. When sites are staged, they are automatically run through visual regression testing, comparing stage sites with production. These often fail due to fresh content on production, different slides visible in carousels, or different ads loading -- so we allow a person to approve a test run even if the number of pixels different exceeds the base threshold.

For the actual deployment, Watney assembles release notes from git commits, notes from the developers, and the cases/issues/user stories included in the release. It takes a fresh snapshot of the database, verifies that all tests have passed or been approved, tags and then rolls out the code. The next day, Watney creates a fresh sanitized copy of the production database to put on stage, resets the pipeline, and bumps the version number.

In the process of fleshing out this pipeline, we had to address a lot of requirements and concerns, many of them particular issues around Drupal and WordPress sites:

  • How to prevent test sites that integrate with third-party APIs from polluting external production databases with test data
  • How to make sure we only work with sanitized data
  • For WordPress, how to consistently change domain names so the site functions on test and stage environments
  • How to deploy configuration changes for Drupal 8's new configuration management system
  • How to enable development-only modules on dev sites and disable them for production, and vice-versa
  • How to generate privileged logins while maintaining security around the sites
  • How to turn pipelines off and on when we reached some hard limits on scaling a pipeline across dozens of sites
  • How to improve pipeline performance to be able to push out a fully tested security update to all maintained sites within 4 hours
  • How to integrate our pipeline for a variety of hosting environments along with dedicated Drupal hosts, including Acquia, Pantheon, Amazon AWS, Microsoft Azure, Google Cloud Engine, Digital Ocean, and self-hosting

Other Operational systems

Operations is all about systems -- processes and tools to keep things running smoothly, without interruption. Surrounding our core chat bot and deployment process are more typical operations systems -- server monitoring, performance monitoring, ticketing systems, time tracking, billing/invoicing, phone systems, email, file sharing services. At Freelock we have rolled out self-hosted open source systems across the board, along with some custom glue holding it all together.

As a result, we sometimes provide consulting services to other organizations to help them build their own DevOps practices.

If we can help your organization build out a tailored Continuous Integration pipeline, or other DevOps process, let us know!

Jul 24 2017
Jul 24

Migrations provide an excellent opportunity to take stock of your current content model. You’re already neck deep in the underlying structures when planning for data migrations, so while you’re in there, you might as well ensure the new destination content types will serve you going forward and not present the same problems. Smooth the edges. Fill in some gaps. Get as much benefit out of the migration as you can, because you don’t want to find yourself doing another one a year from now.

This article will walk through an example of migrating part of a Drupal 7 site to Drupal 8, with an eye toward cleaning up the content model a bit. You will learn:

  • To write a custom migrate source plugin for Drupal 8 that inherits from another source plugin.
  • To take advantage of OO inheritance to pull field values from other entities with minimal code.
  • To use the Drupal 8 migrate Row object to make more values available in your migration yaml configuration.

Scenario: A music site moving from Drupal 7 to Drupal 8

Let’s say we have a large music-oriented website. It grew organically in fits and starts, so the data model resembles a haphazard field full of weeds instead of a well-trimmed garden. We want to move this Drupal 7 site to Drupal 8, and clean things up in the process, focusing first on how we store artist information.

Currently, artist information is spread out:

  • Artist taxonomy term. Contains the name of the artist and some other relevant data, like references to albums that make up their discography. It started as a taxonomy term because editors wanted to tag artists they mentioned in an article. Relevant fields:

    • field_discography: references an album content type.
       
  • Artist bio node. More detailed information about the artist, with an attached photo gallery. This content type was implemented as the site grew, so there was something more tangible for visitors to see when they clicked on an artist name. Relevant fields:
     
    • field_artist: term reference that references a single artist taxonomy term.
    • field_artist_bio_body: a formatted text field.
    • field_artist_bio_photos: a multi-value file field that references image files.
    • field_is_deceased: a boolean field to mark whether the artist is deceased or not.

Choosing the Migration’s Primary Source

With the new D8 site, we want to merge these two into a single node type. Since we are moving from one version of Drupal to another, we get to draw on some great work already completed.

First, we need to decide which entity type will be our primary source. After some analysis, we determine that we can’t use the artist_bio node because not every Artist taxonomy term is referenced by an artist_bio node. A migration based on the artist_bio node type would leave out many artists, and we can’t live with those gaps.

So the taxonomy term becomes our primary source. We won’t have an individual migration at all for the artist_bio nodes, as that data will be merged in as part of the taxonomy migration.

In addition to the migration modules included in core (migrate and migrate_drupal), we’ll also be using the migrate_plus and migrate_tools modules.
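The migrate_tools module is what provides the Drush commands for running and inspecting custom migrations like the one defined below. At the time of writing, the relevant commands look like this:

```shell
# List defined migrations and how many rows have been imported.
drush migrate-status
# Run the artists migration.
drush migrate-import artists
# Roll it back while iterating on the source plugin.
drush migrate-rollback artists
```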

Let’s create our initial migration configuration in a custom module, config/install/migrate_plus.migration.artists.yml.

id: artists
label: Artists
source:
  plugin: d7_taxonomy_term
  bundle: artist
destination:
  plugin: entity:node
  bundle: artist
process:
  title: name

  type:
    plugin: default_value
    default_value: artist

  field_discography:
    plugin: iterator
    source: field_discography
    process:
      target_id:
        plugin: migration
        migration: albums
        source: nid

This takes care of the initial taxonomy migration. As a source, we are using the default d7_taxonomy_term plugin that comes with Drupal. Likewise, for the destination, we are using the default fieldable entity plugin.

The fields we have under “process” are the fields found on the Artist term, though we are just going to hard-code the node type. The field_discography mapping assumes we have another migration that is migrating the Album content type.

This will pull in all Artist taxonomy terms and create a node for each one. Nifty. But our needs are a bit more complicated than that. We also need to look up all the artist_bio nodes that reference Artist terms and get that data. That means we need to write our own Source plugin.

Extending the Default Taxonomy Source Plugin

Let’s create a custom source plugin, that extends the d7_taxonomy_term plugin.

use Drupal\taxonomy\Plugin\migrate\source\d7\Term;
use Drupal\migrate\Row;

/**
 * Drupal 7 Artist taxonomy term source, merged with artist_bio node data.
 *
 * @MigrateSource(
 *   id = "artist"
 * )
 */
class Artist extends Term {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    if (!parent::prepareRow($row)) {
      return FALSE;
    }
    $term_id = $row->getSourceProperty('tid');

    // Find a published artist_bio node referencing this term.
    $query = $this->select('field_data_field_artist', 'fa');
    $query->join('node', 'n', 'n.nid = fa.entity_id');
    $query->condition('n.type', 'artist_bio')
      ->condition('n.status', 1)
      ->condition('fa.field_artist_tid', $term_id);

    $artist_bio = $query->fields('n', ['nid'])
      ->execute()
      ->fetchAll();

    // Merge the bio node's field values into this row.
    if (isset($artist_bio[0])) {
      foreach (array_keys($this->getFields('node', 'artist_bio')) as $field) {
        $row->setSourceProperty($field, $this->getFieldValues('node', $field, $artist_bio[0]['nid']));
      }
    }

    return TRUE;
  }

}

Let’s break it down. First, we see if there is an artist_bio that references the artist term we are currently migrating.

      $query = $this->select('field_data_field_artist', 'fa');
      $query->join('node', 'n', 'n.nid = fa.entity_id');
      $query->condition('n.type', 'artist_bio')
        ->condition('n.status', 1)
        ->condition('fa.field_artist_tid', $term_id);

All major D7 entity sources extend the FieldableEntity class, which gives us access to some great helper functions so we don’t have to write our own queries. We utilize them here to pull the extra data for each row.

      if (isset($artist_bio[0])) {
        foreach (array_keys($this->getFields('node', 'artist_bio')) as $field) {
          $row->setSourceProperty($field, $this->getFieldValues('node', $field, $artist_bio[0]['nid']));
        }
      }

If we found an artist_bio that needs to be merged, we loop over all the field names of that artist_bio. We can get a list of all fields with the FieldableEntity::getFields method.

We then use the FieldableEntity::getFieldValues method to grab the values of a particular field from the artist_bio.

These field names and values are passed into the row object we are given. To do this, we use Row::setSourceProperty. We can use this method to add any arbitrary value (or set of values) to the row that we want. This has many potential uses, but for our purposes, the artist_bio field values are all we need.

Using the New Field Values in the Configuration File

We can now use the field names from the artist_bio node to finish up our migration configuration file. We add the following to our config/install/migrate_plus.migration.artists.yml:

  field_photos:
    plugin: iterator
    source: field_artist_bio_photos
    process:
      target_id:
        plugin: migration
        migration: files
        source: fid

  'body/value': field_artist_bio_body
  'body/format':
    plugin: default_value
    default_value: plain_text

  field_is_deceased: field_is_deceased

The full config file:

id: artists
label: Artists
source:
  plugin: d7_taxonomy_term
  bundle: artist
destination:
  plugin: entity:node
  bundle: artist
process:
  title: name

  type:
    plugin: default_value
    default_value: artist

  field_discography:
    plugin: iterator
    source: field_discography
    process:
      target_id:
        plugin: migration
        migration: albums
        source: nid

  field_photos:
    plugin: iterator
    source: field_artist_bio_photos
    process:
      target_id:
        plugin: migration
        migration: files
        source: fid

  'body/value': field_artist_bio_body
  'body/format':
    plugin: default_value
    default_value: plain_text

  field_is_deceased: field_is_deceased

Final Tip

When developing custom migrations with the Migrate Plus module, configuration is stored in the config/install directory of a module. This means it will only get reloaded if the module is uninstalled and then installed again. The config_devel module can help with this: it gives you a drush command to reload a module’s install configuration.

Jul 24 2017
Jul 24

In my previous post, I introduced the YAML Content module and described the goal and usage for it at a high level. In this post, I aim to provide a more in-depth look at how to write content for the module and how to take advantage of a couple of the more advanced options included.
 

Import Data Structure: Creating a Node

To start with, we'll have a look at creating a basic Node to explore the data structure being used. The following is an example of YAML content that could be included directly in a content file for import, assuming a matching Node type with fields exists in the database. For these examples, this may be achieved with a basic Drupal install using the Standard profile.
 

# Add a basic article page with simple values.
- entity: "node"
  type: "article"
  title: "Basic Article"
  status: 1
  # Rich text fields contain multiple keys that must be provided.
  body:
    - format: "basic_html"
      # Using a pipe we can define content across multiple lines.
      value: |
        <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed vobis
        voluptatum perceptarum recordatio vitam beatam facit, et quidem corpore
        perceptarum. Tum Quintus: Est plane, Piso, ut dicis, inquit.</p>
        <p>Primum cur ista res digna odio est, nisi quod est turpis? Duo Reges:
        constructio interrete. Rhetorice igitur, inquam, nos mavis quam
        dialectice disputare?</p>

Using this structure, any entity defined in Drupal may be created with or without field values populated. An entity is flagged for creation using the entity key where the value is the machine name of the entity type to be created. The rest of the values listed at that level are either property or field names at the entity level. These property names will vary based on the specific entity type being created, but most true properties will be assigned values directly corresponding one-to-one with the property name.

Populating fields gets a bit more complex, but the body key is an example of this. The body field is a special case in that the name of the field does not follow the standard naming convention of custom fields, typically field_*, but regardless, it passes through the field architecture and is populated in the same way.

Within the field name property, we assign an array of field item values. This array is indicated by the - prefixing the next level of values where each specific field item is indicated in this way. Depending on the type of field being populated, the properties assigned to each field item may vary. In the case of rich text fields, like the body field, there are two properties: value and format. At this field item level we assign these properties in the same way as at the entity level.
 

How to Determine a Data Structure

To understand and map the entity data structure, it is important to understand the hierarchy of classes it represents.

Within this structure, the top level of data containing the entity key maps to the Entity object being created. Most of the properties to be assigned at this level may be found within the entity_keys of the entity type definition. A look at the plugin annotation for the Node class can give some insight into this. When looking at these keys, it's important to note that the properties to use in your data files should be the ones specific to the entity type being created. That is to say, your property keys should match the values from the entity_keys array, and not the keys.
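For example, an abbreviated look at the entity_keys portion of core's Node annotation (paraphrased here, not the complete annotation) shows why a data file uses type and title rather than bundle and label:

```php
/**
 * Abbreviated from core's \Drupal\node\Entity\Node plugin annotation.
 *
 * @ContentEntityType(
 *   id = "node",
 *   entity_keys = {
 *     "id" = "nid",
 *     "bundle" = "type",
 *     "label" = "title",
 *     "uuid" = "uuid",
 *     "status" = "status",
 *   }
 * )
 */
```

In a data file, the value side of each pair (type, title, status) is what you use as a property key.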

The next level below the entity is where we've indicated a field name for assignment. Using the architecture described above, this field name added as a property navigates us to the level of a FieldItemList. Loosely, the FieldItemList can be thought of as an array of values assigned to a specific field. Even in the case of single-value fields, if it uses the Typed Data API it passes through a FieldItemList class. While this may be confusing at first, this is actually very fortunate since it allows reference to, and assignment of, all field values using the same structure. To map through this layer, each field item to be assigned to a field must be contained within an array. In YAML this is represented by prefixing each item with a -.

At the individual field item level, we're mapping more specifically to an extension of the FieldItemBase class. At this level, the properties available for assignment may become more specialized. Examples of this include rich text fields (TextItem), entity reference fields (EntityReferenceItem), and link fields (LinkItem). While the property keys needed for assignment of each of these field types may vary, it is possible to identify the keys by determining the FieldItem class corresponding to the field being assigned. Once the FieldItem class is identified, inspecting the propertyDefinitions() method will describe the properties for assignment.
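As an illustration, this abbreviated sketch of propertyDefinitions() from core's TextItemBase class shows where the value and format keys used above come from (the real method also defines a computed processed property, omitted here):

```php
// Abbreviated from \Drupal\text\Plugin\Field\FieldType\TextItemBase.
public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
  // The raw text assigned in the data file's "value" key.
  $properties['value'] = DataDefinition::create('string')
    ->setLabel(t('Text'))
    ->setRequired(TRUE);

  // The text format assigned in the data file's "format" key.
  $properties['format'] = DataDefinition::create('filter_format')
    ->setLabel(t('Text format'));

  return $properties;
}
```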
 

Advanced Value Assignments

More advanced value assignments may be used throughout content to leverage some of the utilities built into the YAML Content module to create more dynamically interconnected or enriched content.
 

Processing Functions

Processing functions may be used throughout content being imported to provide dynamic content values to be populated during the import process. At the time of this writing, the following processing callbacks are available for use within field items:

  • Reference
    • Query for an entity ID to populate as the target ID in an entity reference field.
  • File
    • Query for an existing file by file name, and upload the asset if it doesn't exist.
       

Entity References

Nested Content Creation

Referencing other content can be done in a couple of ways. The most convenient method takes advantage of the entity save system to handle nested entities during the save process. If the content being created doesn't exist yet elsewhere in the content file or doesn't need to exist as an independent entity like an individual Paragraph entity, it may be defined fully within the parent field as an item value. See the code snippet below for a basic example of this.
 

- entity: "node"
  type: "page"
  title: "Paragraph Example"
  status: 1
  # Populate an example paragraph field.
  field_paragraph_content:
    # Define a nested entity directly as a field item value.
    - entity: 'paragraph'
      type: 'rich_text'
      field_title:
        - value: "Paragraph Headline"
      field_body:
        - value: |
            <p>Lorem ipsum...</p>
          format: 'full_html'

In the snippet above, the parent entity is defined normally. Within the paragraph field, a nested entity is then defined directly as an item value of the paragraph field. This works by populating the nested field architecture of the overall node; once the parent entity is saved, the nested structure is traversed recursively, saving all child entities to determine the entity IDs to be stored in the parent's entity reference fields.

As long as entity existence checking is enabled for the import operations (it is by default), this approach should work fine as an alternative for the more complex usage of an entity reference callback. The only exceptions to this as of the time of this writing are Paragraph and Media entities which are never updated in place due to the need for more specialized logic to uniquely identify instances of them (see issue #2893055 for more detail).
 

Entity Reference Processing

In cases where referenced content needs to be more dynamically identified, the reference entity callback may be used to query existing entities and build the target_id value required for entity reference fields.
 

- entity: "node"
  type: "article"
  title: "Tagged Article"
  status: 1
  body:
    format: "full_html"
    # Using a pipe we can define content across multiple lines.
    value: |
      <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed vobis
      voluptatum perceptarum recordatio vitam beatam facit, et quidem corpore
      perceptarum. Tum Quintus: Est plane, Piso, ut dicis, inquit. Primum cur
      ista res digna odio est, nisi quod est turpis? Duo Reges: constructio
      interrete. Rhetorice igitur, inquam, nos mavis quam dialectice disputare?</p>
  # Using the tags below assumes the tags were created manually or imported earlier.
  field_tags:
    # This is done via a preprocessor.
    - '#process':
        # First we designate the processor callback to be used.
        callback: 'reference'
        # Each callback may require a set of arguments to configure its behavior.
        args:
          # Indicate the machine name of the entity type to be referenced.
          - 'taxonomy_term'
          # Provide a list of conditions to filter the content matches.
          # Each property filter maps directly to an EntityQuery condition.
          - vid: 'tags'
            name: 'Generated content'
    # Processors may be called multiple times to fill in any content requirements.
    - '#process':
        callback: 'reference'
        args:
          - 'taxonomy_term'
          - vid: 'tags'
            name: 'Imported demo content'

The code snippet above demonstrates usage of the reference processor to query existing taxonomy terms to be applied to the article being created. In the case that either taxonomy term does not exist already, it will be created containing the basic values provided to the callback query. Behind the scenes, this reference callback is building an entity query to search for existing content of the entity type designated in the first callback argument. All subsequent keyed values passed into the second key under the args array are then used to populate conditions on the entity query object before executing it. Making use of this mechanic offers a great deal of flexibility in the queries since the entity query system has been so greatly expanded in Drupal 8. By looking at the list of options available for query conditions it is clear that very complex queries may be created by providing the field condition as the argument key and the search value as the argument value.
 

File and Image References

Adequate demonstration of a site's functionality often requires files and image assets to be intermingled with content. Using the file processor, YAML Content supports the inclusion of files and images throughout content. To begin with, any images referenced from content files are expected to be located within an images/ directory beside the content/ directory containing the content files being imported. Likewise, any file or media assets being included with imported content are assumed to be located within a data_files/ directory on the same level.
 

# Files like images can even be referenced and added within content.
- entity: "node"
  type: "article"
  title: "Article with an Image"
  status: 1
  body:
    format: "full_html"
    # Using a pipe we can define content across multiple lines.
    value: |
      <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed vobis
      voluptatum perceptarum recordatio vitam beatam facit, et quidem corpore
      perceptarum. Tum Quintus: Est plane, Piso, ut dicis, inquit. Primum cur
      ista res digna odio est, nisi quod est turpis? Duo Reges: constructio
      interrete. Rhetorice igitur, inquam, nos mavis quam dialectice disputare?</p>
  field_tags:
    - '#process':
        callback: 'reference'
        args:
          - 'taxonomy_term'
          - vid: 'tags'
            name: 'Generated content'
  field_image:
    # To lookup and add files we'll need to use a different callback function.
    - '#process':
        # In this case we're looking up a file, so we'll use the `file` callback.
        callback: 'file'
        args:
          # Our first argument is, again, the bundle of the entity type.
          - 'image'
          # For this callback our additional arguments are telling what file we want.
          # By default, images are searched for within an `images` directory beside the
          # `content` directory containing our content files.
          - filename: 'demo-image.jpg'
      # Additional properties needed for a reference field may be defined at the same
      # level as the process indicator.
      alt: "Don't forget the alt text."

The example above again demonstrates a use of the reference processor to populate a taxonomy term field, but in the next field it provides an example of including an image file within an image field. Like before, the processor is defined within the individual field item it should populate. The rest of the definition is very similar in overall structure to the previous reference callback.

First, we define the specific callback to be used. In this case, we're looking up a file. Next we define the arguments required for this callback. The first of these is the bundle of the file being loaded. If this bundle is defined as image, the file will be searched for in the images/ directory. Otherwise, the file will be searched for within the data_files/ directory. The only other argument required in this case is the file name to be searched for. Given this definition, the processor will walk through the following steps before proceeding through the rest of the import process:

  1. Search for the file at images/demo-image.jpg
  2. Save the file as a managed file in Drupal
  3. Return the new managed file ID for reference in the parent entity reference field.
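Non-image files follow the same pattern but are resolved against the data_files/ directory instead. Here is a minimal sketch, assuming a hypothetical field_download file field and a whitepaper.pdf asset placed in data_files/:

```yaml
- entity: "node"
  type: "article"
  title: "Article with an Attachment"
  status: 1
  # Hypothetical file field; non-image bundles are searched for in `data_files/`.
  field_download:
    - '#process':
        callback: 'file'
        args:
          - 'file'
          - filename: 'whitepaper.pdf'
      # Extra field item properties sit at the same level as the processor.
      description: 'Download the whitepaper'
```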

Hopefully this has shed some light onto a few more advanced methods of creating content with the YAML Content module. For any other questions feel free to reach out through the issue queue or the comments section below!
 

Additional Resources
Previous Article: Introducing the YAML Content Module
YAML Content module
Module documentation
Reference card
Wikipedia: YAML

Jul 24 2017
Jul 24

As of July 2017, there are 1500+ themes registered with the Drupal project. The sheer number of choices makes the selection of a theme difficult for most newcomers to Drupal. Some Drupal themes are free while the rest are known as premium, i.e., they are available for a fee. Sometimes investing in a paid theme can actually save you money, but we'll cover that topic in another article. This one lists the top 10 free Drupal themes, each of which, in our opinion, is a great choice for a beginning site builder.

Are you a Drupal newcomer? Use our learning guide to become a guru!

Best Free Drupal Themes: Selection criteria

The main question when compiling the list of our proposed themes was how to make the comparison fair and objective. After much discussion, we decided that a theme must satisfy the following basic criteria for it to be considered in the list of top 10 Drupal themes:

 

  1. It must be free.

  2. It must run on either Drupal 7 or Drupal 8 (or better, both).

  3. It must be actively maintained and developed.

  4. It must be covered by the Drupal security advisory policy.
    Coverage under the policy does not guarantee that a theme is free of vulnerabilities. Rather, it means that the theme has been reviewed for any publicly known vulnerabilities by the Drupal security team.

  5. It must be for general purpose.
    Some Drupal themes are designed for specific industries, e.g., restaurant. For the purpose of this list, only general purpose themes are considered.

  6. It must be responsive.
    A responsive theme adjusts its layout to accommodate different screen sizes and resolutions. This is a basic requirement for today's mobile platforms.

  7. It must run out-of-the-box.

There are themes, and there are theme frameworks (also known as base themes). A theme framework is like a blank canvas with tools which a theme developer uses to build a custom theme. The top 10 list only contains Drupal themes which one can use out-of-the-box as feature-complete themes.

In the course of conducting this study, it was observed that a small number of organizations have each produced a relatively large number of themes, albeit good ones, that are only marginally different from one another. Where an organization offers multiple similar themes, we selected only representative ones for inclusion in the list below. The individual or organization responsible for each theme is identified in parentheses.

Anyway, we have kept you in suspense for too long already. Based on the above criteria, the top 10 free Drupal themes are:

  • BlueMasters (by More than Themes)

  • Corporate Clean (by More than Themes)

  • Danland (by DanPros)

  • Business (by Devsaran)

  • Nexus (by Devsaran)

  • Zircon (by WeebPal)

  • Business Responsive Theme (by Zymphonies)

  • Drupal8 Zymphonies (by Zymphonies)

  • Fontfolio (by Marios Lublinski)

  • Integrity (by knackforge)

Below, we discuss each theme in more detail.

BlueMasters

Best Drupal themes: BlueMasters

 

BlueMasters is a popular WordPress theme that has been ported to the Drupal platform by More Than Themes. We recommend this theme not just for its features, but also because it is maintained by More Than Themes, a solid, well-reputed organization in the Drupal community. The theme supports a maximum layout of 12 regions. A region is the primary layout unit in which a block can be placed; therefore, the more regions a theme supports, the more customizable it is. With this Drupal theme, you can display a slideshow on the front page and partition information into either 2 or 3 columns on a web page. In addition, you can organize and access your content via multi-level dropdown menus. BlueMasters, however, is only available on Drupal 7.

Corporate Clean

Best Drupal themes: Corporate Clean

Like BlueMasters, Corporate Clean is a theme ported to Drupal by More Than Themes. We recommend this theme because it offers a unique feature that is missing in many free Drupal themes, namely, a color scheme selector. Most free themes have a fixed color scheme which means that you cannot change the color of a button or the page background. With Corporate Clean, you can adjust the color of some screen elements. This theme supports 1-column, 2-column as well as 3-column layout. Multi-level drop-down menus and slideshows are also supported. The Corporate Clean theme only runs on Drupal 7.

Danland

Best Drupal themes: Danland

We recommend Danland because, among the Drupal themes on this top 10 list, it gives you the most flexibility to fine tune the layout of your web page. Specifically, it supports a maximum of 26 regions, the highest number on the list. The layout can have 1, 2, or 3 columns. Danland runs on Drupal 6, 7, and 8.

Business

Best Drupal themes: Business

 

In terms of major features, the Business theme is on par with other themes on the list. We recommend the theme because of the finer feature details. For example, the slideshow feature allows the display of up to 5 images; note that some free Drupal themes only allow a maximum of 3. Also, the Business theme has a color module, which is missing in most free themes. You can specify one of 6 fixed colors for web components. The Business theme is available for Drupal 7 and 8. However, the Drupal 7 version is not responsive and is currently in maintenance mode only. The Drupal 8 version, on the other hand, is being actively developed and is fully responsive.

Nexus

Best Drupal themes: Nexus

 

Nexus is arguably the most visually appealing theme on the top 10 list. The clean design together with the solid support by Devsaran, its maintainer organization, put Nexus on the list. The theme runs on both Drupal 7 and 8 with the Drupal 8 version being a pre-release version only. The layout can have a maximum of 15 regions, which is average on the top 10 list. You can specify a 1-column or 2-column design on the layout. The slideshow feature supports a maximum of 3 images.

Zircon

Best Drupal themes: Zircon

 

If your Drupal website is rich in images, then you should definitely consider using Zircon as the theme. You will be delighted by its slideshow, slider, as well as carousel features. You can run Zircon on both Drupal 7 and 8. However, the current Drupal 8 version has remained as a release candidate since November 2015. The Zircon layout supports 18 regions in 3 columns.

Business Responsive

Best Drupal themes: Business Responsive

Not all themes on the top 10 list support Drupal 8. Even for those which do, some are in pre-release status only. If you are looking for a stable Drupal 8 theme, you should consider the Business Responsive theme which has reached the 1.0 stable release status. This theme supports 17 regions in 1-column, 2-column, or 3-column layouts. It also has a slider feature, but installing the feature requires some manual steps after installing the theme. This theme supports the use of social media icons for popular platforms such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. You can install this theme on both Drupal 7 and 8.

Drupal8 Zymphonies

 

Best Drupal themes: Zymphonies

 

If you want a stable Drupal 8 theme that offers more bells and whistles than the Business Responsive theme, Drupal8 Zymphonies comes highly recommended. This theme is, fittingly, only available on Drupal 8. It shares many features with other themes on the top 10 list, such as multi-level menus and 1/2/3-column layout. It distinguishes itself by offering 22 regions for placing blocks, the second highest on the theme list. Also, you can customize the Zymphonies credit link, all supported social media links, and the title and description fields in the main banner.

Fontfolio

Top free Drupal themes: Fontfolio

 

If your website is multilingual, you should definitely consider Fontfolio because it offers easy setup for links to webpages in all supported languages. Like BlueMasters, this theme is a popular WordPress theme that has been ported to Drupal. Fontfolio can run on both Drupal 7 and 8. Note that some existing features of the Drupal 7 version are still under development for the Drupal 8 version. Fontfolio supports a maximum of only 8 regions in its layout, the fewest on the list. Yet, overall, it is a simple but elegant Drupal theme that includes a 2-column responsive design.

Integrity

Top free Drupal themes: Integrity

 

If you want a simple no-frills theme that just works out-of-the-box, Integrity may be your choice. It is a Drupal 8 only theme. Its feature set is, in general, on par with the rest of the themes. Integrity supports multi-level menus and slideshows that display up to 5 images. The layout includes a 3-column design. The theme has defined 17 regions into which Drupal blocks can be placed.

Summary & Conclusion

Drupal has a wealth of good free themes. Each of them is ideal for Drupal users who have relatively simple requirements and want to try something other than the default theme. If a free theme cannot fully satisfy your particular requirements, then you may want to use its premium alternative or even to hire a professional Drupal agency that can assist you with your needs.

 

Which theme do you like the most among our top 10 choices? Perhaps, you have your own favourite theme that is not on our list. What are the goals of your project and what kind of theme are you looking for? Share with us your thoughts in the comments section below!

Jul 24 2017
Jul 24

Quite the problem—Drupal config management on local and prod.

Drupal 8 can be quite punctilious: it will simply refuse to work if there's a mismatch between database and config objects on the filesystem.

So... how do we manage two sets of configurations—one for local, and one for production?

The Problem

I have modules like varnish installed on production that shouldn't be enabled on local, and modules that are enabled on local that shouldn't be enabled on production.

I have uninstalled varnish_purger on local. When I run drush cex on local, varnish_purger is marked for uninstall in the config and its config files are removed. I can't deploy this config to prod, or varnish_purger will be uninstalled there. But I still have to export field definitions and other settings for deploys.

Settings.php

In settings.php, you can override config variables like so:

$config['system.performance']['fast_404']['exclude_paths'] = '/\/(?:styles)\/|(?:system\/files)\/|favicon\.ico|apple-touch-icon(?:-precomposed)?\.png|manifest\.json|IEconfig\.xml|browserconfig\.xml/';

You can override settings in default/settings.php by placing the following at the bottom of settings.php and putting your overrides in a settings.local.php file, which should be gitignored.

if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}
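For instance, a settings.local.php might carry local-only overrides like these (example values, not required settings):

```php
<?php

// settings.local.php — gitignored, local-only overrides.

// Disable CSS/JS aggregation while theming locally.
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;

// Show all errors on screen during development.
$config['system.logging']['error_level'] = 'verbose';
```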

However, modules cannot be disabled by this method.

drushrc.php

You can make a small module uninstall script in drushrc.php:
$options['shell-aliases']['local-pmu'] = '!drush pmu varnish_purger varnish_purge_tags memcache purge purge_drush purge_tokens purge_ui purge_queuer_coretags -y';

drush local-pmu will now uninstall all the modules listed above. However, when a config export is done, those module uninstalls and removed configs will still end up in the export.

Plain Drush

I heard tell that you could tell drush to skip certain modules' config during export. But this option has been removed:

$ drush cex --skip-modules=varnish_purger
Unknown option: --skip-modules. See `drush help config-export` for available options.
To suppress this error, add the option --strict=0. [error]
$ drush --version
Drush Version: 8.1.11

Likewise the following syntax in drushrc.php didn't work for me:
$command_specific['config-export']['skip-modules'] = array('devel');

It seems these functions are being deprecated in favor of contrib-space solutions.

CMI tools

CMI tools is a drush plugin that allows you to define a list of ignored config files and use drush cexy and drush cimy to export/import config instead. When combined with drush --skip-modules, this seemed perfect, but I couldn't get ignored modules and config files to work, so I skipped this drush extension. Additionally, this functionality is replicated by Config Ignore below.

Config Ignore

This module lets you ignore certain configuration files. Config Ignore is perfect for allowing site editors to, say, modify the system.site configuration (which contains the site's name, slogan, and email) on your live site without creating config changes that must be exported manually.

This is great, but you cannot use it to keep specific modules enabled or disabled on prod versus local, because module install status is governed by a file that, according to the module docs, cannot be ignored: core.extension.yml.
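For reference, the ignore list is itself just configuration; a sketch of what config_ignore.settings.yml might look like when ignoring system.site:

```yaml
# config_ignore.settings.yml — entries may also use a trailing * wildcard.
ignored_config_entities:
  - system.site
```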

Config Partial Export

Config Partial Export allows you to export a tarball of recently changed files. I could see using this module. Basically, you'd set up sync directories for prod and local.
Config export with different sync directories in settings.php:

$config_directories = array(
  'prod' => '../config/prod',
  'local' => '../config/local',
);

Then you'll see this option:

$ drush cex
Choose a destination.
 [0]  :  Cancel
 [1]  :  prod   
 [2]  :  local

There are now two sets of configurations. You'd then manually manage changes with the Config Partial Export module and then dump those configs directly into the folder in question. There is something I like about the control you have in this workflow, for simpler sites.

This is untested, but let's say I enable the Coffee module on local and want that to go to production. I would first make sure the only change on deck was the one I intended. I would run drush cpex and then copy the contents of the tarball to config/prod. Then I would modify config/prod/core.extension.yml to enable the module. Then I would run drush cex local -y. Finally, after deploying code to prod, I could run drush cim prod on prod.
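The hypothetical Coffee walkthrough above might look roughly like this on the command line (an untested sketch mirroring the described steps; the tarball filename is a placeholder):

```shell
# On local: export only the recently changed config to a tarball.
drush cpex
# Unpack the changed files into the prod sync directory
# (substitute the actual tarball name drush cpex produced).
tar -xzf changed-config.tar.gz -C config/prod
# Manually edit config/prod/core.extension.yml to enable the module,
# then export the local set as well.
drush cex local -y
# After deploying code to prod, import there:
drush cim prod
```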

Config split: Installation

I ended up using Config Split. It's a more robust and automated config management tool, and looks like it could even make it into core, judging by comments I see on d.o. The tutorial I pulled the most from was by Jeff Geerling, including the comments. I suggest reading it.

Enable config_split and config_filter:

composer require drupal/config_split drupal/config_filter && drush en config_split config_filter

This module was somewhat difficult for me to get my head around at first, because of the definition of "blacklist" given: "Configuration listed here will be removed from the sync directory and saved in the split directory instead." I read that "graylisting" is somewhat unpredictable, so I won’t cover it here.

So let's use a concrete example to illuminate blacklisting. I'm creating three environments: dev, prod, and local.

If I'm on local, I want coffee enabled. But it should be disabled on prod. On prod and dev, I want varnish_purger enabled, but I want that disabled on local.

Config Split: Exporting

First step: as recommended by geerlingguy, start on prod (or whichever environment has all your extra modules enabled), enable config_split and config_filter, and create your three environments: dev, prod, and local. Then export them on prod in the same step that you export the enabling of the config_split module. Create dev, prod, and local directories in your sync directory and point each split at its folder. Leave the other settings untouched.

*Note:* Make sure to clear your cache (drush cr) after every config change.

Export the settings with drush cex.

Commit, and then pull the database down to local and dev.

If you don't, you could run into a strange chicken-and-egg problem where, even though config_split was already enabled, you get:
Configuration config_split.config_split.prod depends on the Configuration split module that will not be installed

The broader reason for modifying some settings on prod first is that you can't manage settings for modules that are not installed on your machine unless their settings have already been blacklisted. So, for example, I have to configure varnish_purger on prod and coffee on local, then commit the changes.

For instance, if I enable stage_file_proxy and have not yet run drush cex, then the config object stage_file_proxy.settings will not yet appear in the UI for management. One way to work around this is to paste the names of config objects into the text field; searching through the multiple-select list for configs can be tedious.

The reason for leaving the other settings intact is that config files cannot be removed from the sync directory until config_split is enabled. The important thing is to deploy config_split, config_filter, and empty splits with all modules enabled in one step, and in the next step deploy the actual split settings.
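Put together, the rollout order described above looks something like this (a sketch; adjust module lists and directories to your site):

```shell
# Step 1: on prod, enable the modules and export once with empty splits.
composer require drupal/config_split drupal/config_filter
drush en -y config_split config_filter
drush cex -y    # splits exist but blacklist nothing yet
# ...commit and deploy this state to every environment...

# Step 2: configure the blacklists in the UI, then export the real splits.
drush cr
drush cex -y
# ...commit and deploy again...
```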

Now, here's how you'd configure local:

After I do that, run:

drush cr
drush csex local # Note the 's'

This will copy the blacklisted files to the local directory.

Now run:

drush cex # If you're running drush < 8.1.11 you may need to run `drush csex` here.

This will implement the blacklist and delete the coffee configuration file from the main config directory.

You can skip drush csex local and just run drush cex, but you may want to take your time and understand what's happening.

Now, blacklist varnish_purger module settings on both dev and prod and export.

Config Split: Importing

When you import a configuration, you're updating the Drupal database configuration from the config file system. When you import using config_split and specify an environment, you're pulling in the global configuration as well as the config files in the specified directory. The exception, as I understand it, is when a file has been graylisted and an override has been placed in the environment you're currently on; then your environment's config should override the global config.

To use config_split, you need to do one of two things:
1. Run drush cim normally and add the following to your settings.php for each environment, this example pertaining to local:

$config['config_split.config_split.dev']['status'] = TRUE;
$config['config_split.config_split.local']['status'] = TRUE;
$config['config_split.config_split.prod']['status'] = FALSE;

When this is done properly, the active split for your environment will be picked up automatically on import.

2. Or, you can run drush csim {environment} instead of drush cim.

Either works.

Now if you wish to import a configuration, and you've got your config_split.config set, you just run drush cim and the global config plus your blacklisted settings will be imported.

Config Split: Module Management

Now, I had been scratching my head for a little while, trying to figure out how to manage the enabling and disabling of modules. I did some tests to see if importing or exporting configurations was somehow using config_split.config_split.{env} to override the core.extension file and enable/disable modules. It doesn't.

You have to add core.extension to each of your splits and blacklist it. It seems this would be a perfect use case for graylisting, but I haven't tested that.

When you do this, and if you have your environments set in settings.php, you can enable and disable modules with impunity. When you export, config files will go to the environments you've specified. If there are multiple versions of a config file, say one on local and one on dev, and you are on local, you will only modify the one on local. And this includes core.extension.yml.

But the upshot of this is that in order to promote a module enabled on local to prod, you'll have to manually cut and paste the entry from local/core.extension.yml into prod/core.extension.yml.

I hope that made more sense to you than it did to me when I took it all in. I believe I have a full workflow now that's robust enough to handle anything I can throw at it, and I hope by sharing it I will have saved you some time!

Jul 21 2017
Jul 21

Eight months ago we launched the first beta version of Commerce 2.x for Drupal 8. Since then we’ve made 304 code commits by 58 contributors, and we've seen dozens of attractive, high-performing sites go live. We entered the release candidate phase this month with the packaging of Commerce 2.0-rc1 (release notes), the final part of our long and fruitful journey to a full 2.0.

Promotions and Coupons

Promotions received the most work since the last beta. We rewrote the code, redesigned the UI, and introduced many new features to ensure our system and interface for managing discounts and coupons is ready for merchants out of the box.

The promotion form now has two columns. This lets us prioritize the primary configuration elements in the form (title, offer, conditions, coupons) while presenting optional settings in the sidebar. Both promotions and their individual coupon codes can now be usage limited thanks to the new usage API, which will also allow for per-customer limits in the future. Additionally, the compatibility settings allow promotions to specify whether they can be combined with other promotions on the same order.

We rewrote the offer type plugins to let them natively target either orders or individual order items and also developed a brand new conditions API. As pictured above, we redesigned the conditions UI to make them easier to discover and configure. (Our conditions field can also now be attached to any entity type, like payment methods, in less than 100 lines of code!) Our initial conditions allow limiting promotions by customer address and role, product and product quantity, and order total. We will continue to port conditions throughout the release candidate phase.

Allowing coupons to be created in the admin pages doesn’t mean much if customers can’t redeem them! There is now a coupon redemption form with a matching checkout pane. It fits nicely in the sidebar on the checkout form beneath the order summary and can be configured to allow a single coupon per order or multiple. Both of the resulting UIs can be themed and customized using Twig.

Payments

Thanks to our improved on-site and off-site payment gateway APIs, integrating new payment gateways can now take hours instead of days. We've seen our own team grow increasingly productive thanks to the ability to pull in SDKs via Composer, our built-in tokenization support (i.e. "card on file"), and our pre-built UIs for authorizing, voiding, and refunding payments.

Our very active community has built over 30 payment gateway modules for Commerce 2.x! The team at Commerce Guys is responsible for 6 of those gateways (PayPal, Authorize.net, Braintree, Stripe, and Square), all of which now have beta releases compatible with RC1.

In addition to maintaining a list of known payment gateways, we’ve also been reviewing them to ensure they support Ludwig and conform to best practices. Big thanks to Tavi Toporjinschi (vasike) from Commerce Guys by Actualys for ensuring all contributed payment gateway modules were patched to support the RC1 API changes.

We now also include a “Manual” payment gateway in core that can be used to implement payment methods like Cash on Delivery, Card on Delivery, Cheque, Bank Transfer, etc. Upon checkout completion, we create a pending payment, and the payment instructions are shown to the customer. Merchants mark payments as complete via an admin UI.

The payment gateway API also benefited from the promotions work. You'll notice that the payment gateway configuration form now has the same conditions UI as the promotion form, allowing payment gateways to be limited to specific customer roles, orders above a specific total, etc.

Taxes

The commerce_tax submodule has been completely rewritten for better performance and user experience. Merchants can enter prices with or without tax, and product prices can be shown with taxes included even if they were entered without tax (and vice versa). Stores can specify where they’re registered to collect taxes in addition to their home country (e.g. a U.S. store registered in the UK to collect EU VAT on digital products, a French store registered in Germany due to thresholds on selling physical products, etc.).

Taxes are calculated by tax type plugins, which can be remote (contacting a service such as Avalara) or local (storing the available tax rates in configuration or code). The “custom” tax type stores a set of tax rates and the territories where they apply. The territories can be a set of countries, states, postal codes, allowing people to specify Serbian VAT, Kentucky Sales Tax, etc.

In addition to letting you define your own tax rates, we also provide plugins with predefined rates for the European Union, Switzerland, and Canada as pictured above. With these rates also comes the logic for resolving them. The EU plugin distinguishes between physical and digital products, satisfying the requirement to charge the customer’s VAT on digital products (instead of the store VAT). It also understands Intra-Community Supply (B2B), selling to non-EU customers, and all of the territory exceptions (Lake Lugano, Büsingen, the Åland Islands, etc.). The Canadian plugin similarly knows to choose the rates from the customer’s province.

Basically, we give you a mini value added tax cloud - open source and easy to extend.

Embracing Twig

We grow more fond of Twig with each passing day. It powers our built-in order receipt emails, making them as easy to theme and customize as any other page on the website. Gone are the 1.x days of Commerce Message not supporting conditionals (“Only show this heading if there’s a shipping address”) or not having access to the right token. Instead you have a ready-to-theme template with all the functionality Twig affords you:

It also powers our order activity stream. Each type of activity has an inline Twig template that is filled with the saved log entry data. This will make it easy for developers to manage the types of activities that get logged and their presentation to better support their merchants:

Composer now Optional

We previously blogged about a new project we released called Ludwig. It gives developers who haven't yet made Composer a part of their workflow a way to manually install the PHP libraries Commerce 2.x and our payment gateway modules use. We still strongly recommend Composer, but we recognize that it comes with its own learning curve and wrote this project to let people learn when they're ready.

Next Steps

We have scheduled RC2 for early August. It will contain several important features and fixes that didn’t make it into RC1, including the ability to reuse addresses at checkout. Meanwhile, we’re focused on helping contributed modules port to the new APIs and driving toward RC1 releases of payment gateways and shipping. We'll send regular updates as we progress through the Commerce Guys Newsletter, and you can find us in the #commerce channel of the Drupal Slack or the #drupal-commerce channel in IRC to pitch in!

Jul 21 2017
Jul 21

Nobody likes iframes. That's because you can't style their innards, and they aren't responsive... or are they?!?!

The first thing to know here is the padding height hack. It lets you set the height of an element relative to its width, because while percentage height is relative to the container's height, percentage padding is relative to the container's width. So all you have to know is the ratio of height to width, and you can make a thing that scales responsively.

My first attempt was to use javascript to get the viewport height from the iframe. Turns out, no, you can't do that, even if you find some Stack Overflow answer that seems to say you can. We knew this already, didn't we?

What I did next was to make a field for the client to paste in the embed code. In iframe embed code there's almost always a width and a height. So, just scrape that from the HTML in a preprocess hook, and then set the padding accordingly in the twig file.

Here's my preprocess hook. You can learn more about the HTML parsing used in this hook in my post here.

// Requires: use Drupal\Component\Utility\Html;
function mytheme_preprocess_field(&$variables, $hook) {
  if ($variables['field_name'] == 'field_myiframe_embed_code') {
    $embed = !empty($variables['items'][0]['content']['#context']['value']) ? $variables['items'][0]['content']['#context']['value'] : '';
    if (!empty($embed)) {
      foreach (Html::load($embed)->getElementsByTagName('iframe') as $iframe) {
        $variables['src'] = $iframe->getAttribute('src'); // Sets a variable `src` accessible in the twig template.
        $width = $iframe->getAttribute('width');
        $height = $iframe->getAttribute('height');
        $variables['paddingTop'] = $height / $width * 100; // Sets a variable `paddingTop` accessible in the twig template.
      }
      if (empty($width) || empty($height) || empty($variables['src'])) {
        drupal_set_message('There\'s something wrong with your embed code. Please fix. Needs height, width, and src.', 'error');
      }
    }
    else {
      drupal_set_message('No value found for your embed code', 'error');
    }
  }
}

Here's my twig field template (field--field-myiframe-embed-code.html.twig):

<div class="myiframe__iframe-wrapper" style="padding-top: {{ paddingTop }}%;">
  <iframe src="{{ src }}" class="myiframe__iframe"></iframe>
</div>

And here's the CSS I used, more detail here.

.myiframe__iframe {
  position: absolute;
  top:0;
  left: 0;
  width: 100%;
  height: 100%;
  border: none;
}
 
.myiframe__iframe-wrapper {
  position: relative;
  height: 0;
  overflow: hidden;
}
Jul 21 2017
Jul 21

Often one finds oneself needing to parse HTML. Back in the day, we used regexes, and smoked inside. We didn't even know about caveman coders back then. Later, we'd use SimpleHtmlDom and mostly just swore when things didn't quite work as expected. Now, we use PHP's DOMDocument, and in Drupal we create one using Drupal's Html utility.

At the top of your file place:
use Drupal\Component\Utility\Html;

Now, I'm looking for an iframe embed in my HTML, and I want the src, width, and height. I've handed it a text string that looks like this:
<iframe width="900" height="800" frameborder="0" scrolling="no" src="http://myurl.com"></iframe>

Then, in my hook_preprocess_field() I do this.

foreach (Html::load($my_html)->getElementsByTagName('iframe') as $iframe) {
  $variables['src'] = $iframe->getAttribute('src');
  $width = $iframe->getAttribute('width');
  $height = $iframe->getAttribute('height');
}

Let's break this down. Html::load($my_html) returns a DOMDocument. getElementsByTagName('iframe') returns a DOMNodeList, which is iterable; iterating it gives you DOMElement objects, from which you can pull the attributes you need using getAttribute().

There. Beats regexes and patriarchy, doesn't it?

Jul 21 2017
Jul 21

Hook 42 is expanding our enterprise Drupal services to the public sector. It’s only logical that our next trek is to Drupal GovCon!

We are bringing some of our colorful San Francisco Bay Area love to DC. We will be sharing our knowledge about planning and managing migrations, as well as core site building layout technologies. The most exciting part of the conference will be meeting up with our east coast Drupal community and government friends in person.

Attend our sessions or come by our booth in the exhibit hall to say hello! Stop by for fun stickers and thought-provoking conversation. We can even set up a meeting to talk about how you are leveraging Drupal 8 for your organization.

We are also looking at hosting a Multilingual BoF and possibly an SEO BoF, so stay tuned!

Sessions:

Planning & Managing Migrations

Wednesday August 2nd at 10:00 AM | Aimee Degnan  | Balcony A

Drupal 8 is great! Yay! Now it’s time to migrate!

There are many moving parts in a migration project compared to a new website build. Ensure your migration is a success with these architectural guidelines, helpful planning tools, and lessons learned.

The Order of Operations is important in migration projects. Keeping excellent, detailed documentation of each phase of a migration is paramount to success. These migration projects can be lengthy; working efficiently will give your team the stamina to complete the project!

Topics Covered:

  • Types of migrations (Single Pass, Incremental, Hybrid).
  • Major phases of a migration project.
  • Planning efforts and documentation created for each phase.
  • Architectural considerations to migration - content, infrastructure, etc.
  • What migration support is provided “out of the box” and what is “custom development”?
  • Role-specific considerations, tools, and needs.
  • Gotchas, facepalms, and “remember tos”.

What level of knowledge should you have coming into this session?

  • Be familiar with basic Drupal terminology, such as: a node, content type, and field.
  • Understand simple project management concepts such as: resources, dependencies, tasks, and estimation.
  • Have a passion for (or fear of) juicy migration projects.

What will your session accomplish?

  • Prepare the community for Drupal 8 migrations!
  • Identify key success and failure points within migration projects.
  • Provide tools for project managers and development teams to plan and architect migrations.

What will attendees walk away having learned?

  • Understand the scope of a migration project.
  • Terminology and concepts to communicate with management and development teams.
  • Practical samples of migration planning documents.
  • How much time and money can be wasted if a migration isn't well planned and documented.
     

Harness the Power of View Modes

Monday July 31st at 2:00 PM | Aimee Degnan  | Balcony C

View Modes are the site-building glue that brings your content strategy, design, media strategy, and UX development together to actually create your web displays.

View Modes have been in Drupal for some time, but what do they really do? Why are they so powerful? With Drupal 8, View Modes are now even more relevant with the standardization of Entity and Field management across entity types.

Think beyond the Teaser and harness the power of View Modes!

Topics Covered:

  • View Modes in core:
    • Anatomy of a view mode.
    • Common applications of view modes across entity types.
    • View modes and media (media entities and file displays!).
    • What the “Default” view mode does vs. Full Content.
  • Architecting View Modes for your site:
    • Planning your View Mode + Content + UX + Component Library strategy.
    • Interacting with layout solutions. (Panels / Display Suite / Views)
    • Extending view modes in code.
  • Lessons Learned with View Modes:
    • Interactions of view modes across entity types.
    • Naming is important. What does “Teaser” really mean?!
    • But why can’t I use that view mode?!

What level of knowledge should you have?

This session is listed as "Beginner", but the concepts and information can be applied to more advanced architectures.

  • Coming as a Site-Builder? You should know how to create content types and have reordered fields on "Manage Display" at least once.
  • Are you a project manager or designer? Be familiar with basic Drupal terminologies like a node, content type, and field.
  • Are you a Drupal Developer? You know enough to join this session. :)

What will this session accomplish?

  • Share with the community that there can be more than just Full Content and Teaser.
  • Provide tools for Site Builders to create powerful displays with little or no coding.
  • Explain why View Modes are a powerful tool in your Drupal tool chest.

What will attendees walk away having learned?

  • Terminology and concepts to connect design, content, and technical execution.
  • View Modes applied to different entities may mean different things.
  • Practical knowledge to apply for their own site extension.
  • Layers of View Modes can and will interact with each other. Layering must be deliberate.

By Role:

  • Project Managers: Understand that View Mode creation and extension requires effort which you need to include in planning.
  • Content Strategy / Analysts: How view modes interact with content and functionality throughout the site.
  • Designers: The language and concepts to communicate your design vision to the development team.
  • Site Builders: Build what they are asked by the design and project management team. :)
  • Drupal Developers: Understand why all these non-coders on your team have created View Modes when you are asked to help possibly extend their displays. :D

Hope to see you there!

Jul 21 2017
Jul 21

"A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client's state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern." Dependency injection is an advanced software design pattern, and applying it will increase flexibility. Once you wrap your head around this pattern, you will be unstoppable.

A practical example of accessing services in objects using dependency injection

For the following example, let's assume the application needs A, but A depends on B, which in turn depends on C. The container resolves that chain for us:

  • Application needs A so:
  • Application gets A from the Container, so:
  • Container creates C
  • Container creates B and gives it C
  • Container creates A and gives it B
  • Application calls A
  • A calls B
  • B does something
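
The resolution order above can be sketched in plain PHP. This is an illustrative example only; the class names A, B, and C are placeholders, not Drupal core classes, and the wiring is done by hand to show what a container does behind the scenes.

```php
<?php

class C {}

class B {
  public $c;
  public function __construct(C $c) { $this->c = $c; }
}

class A {
  public $b;
  public function __construct(B $b) { $this->b = $b; }
  public function run() {
    // A calls B; B does something (here it just reports its dependency).
    return get_class($this->b->c);
  }
}

// What a container does behind the scenes:
$c = new C();    // Container creates C.
$b = new B($c);  // Container creates B and gives it C.
$a = new A($b);  // Container creates A and gives it B.
echo $a->run();  // C
```

The application only ever asks for A; the chain of B and C is an implementation detail the container takes care of.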

Types of Dependency Injection

There are different types of Dependency Injection:

  • Constructor injection
  • Method injection
  • Setter and property injection
  • PHP callable injection

Constructor Injection

The DI container supports constructor injection with the help of type hints (type hinting lets us specify the expected data type) for constructor parameters. The type hints tell the container which classes or interfaces a new object depends on. The container will try to get instances of the dependent classes or interfaces and then inject them into the new object through the constructor.
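
A minimal sketch of how a container can read constructor type hints via reflection and build dependencies automatically. The Car, Engine, and Container classes are hypothetical, invented for this illustration; a real container (like Drupal's) is far more sophisticated.

```php
<?php

class Engine {
  public function start() {
    return 'engine started';
  }
}

class Car {
  public $engine;

  // The Engine type hint tells the container what to inject.
  public function __construct(Engine $engine) {
    $this->engine = $engine;
  }
}

class Container {
  public function get($class) {
    $ctor = (new ReflectionClass($class))->getConstructor();
    if (!$ctor) {
      return new $class();
    }
    $args = [];
    foreach ($ctor->getParameters() as $param) {
      // Recursively build each type-hinted dependency.
      $args[] = $this->get($param->getType()->getName());
    }
    return new $class(...$args);
  }
}

$car = (new Container())->get('Car');
echo $car->engine->start(); // engine started
```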

Method Injection 

With constructor injection, the dependent class uses the same concrete class for its entire lifetime. If we need to pass a separate concrete class on each invocation of a method, we pass the dependency to the method itself.
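
A short illustration of method injection, with hypothetical names: each call to total() can receive a different concrete formatter.

```php
<?php

interface FormatterInterface {
  public function format($amount);
}

class UsdFormatter implements FormatterInterface {
  public function format($amount) {
    return '$' . number_format($amount, 2);
  }
}

class EurFormatter implements FormatterInterface {
  public function format($amount) {
    return number_format($amount, 2) . ' €';
  }
}

class Invoice {
  // The dependency is passed per call, not fixed at construction time.
  public function total($amount, FormatterInterface $formatter) {
    return $formatter->format($amount);
  }
}

$invoice = new Invoice();
echo $invoice->total(1234.5, new UsdFormatter()); // $1,234.50
echo $invoice->total(1234.5, new EurFormatter()); // 1,234.50 €
```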

Setter & Property Injection

We have now discussed two scenarios: with constructor injection, the dependent class uses one concrete class for its entire lifetime; with method injection, we pass the concrete class object into the action method itself. But what if the selection of the concrete class and the invocation of the method happen in separate places? In such cases we need setter or property injection.
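
A sketch of setter injection with hypothetical names: the concrete logger is chosen and injected in one place, while generate() is invoked somewhere else.

```php
<?php

interface LoggerInterface {
  public function log($message);
}

class FileLogger implements LoggerInterface {
  public function log($message) {
    return "file: $message";
  }
}

class ReportGenerator {
  private $logger;

  // The dependency is supplied through a setter, after construction.
  public function setLogger(LoggerInterface $logger) {
    $this->logger = $logger;
  }

  public function generate() {
    // The logger was selected and injected elsewhere.
    return $this->logger ? $this->logger->log('report generated') : 'no logger';
  }
}

$generator = new ReportGenerator();
$generator->setLogger(new FileLogger());
echo $generator->generate(); // file: report generated
```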

PHP Callable Injection

The container will use a registered PHP callable to build new instances of a class. Each time yii\di\Container::get() is called, the corresponding callable is invoked. The callable is responsible for resolving the dependencies and injecting them appropriately into the newly created objects.
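
A toy container demonstrating the idea (names hypothetical, not from Yii or Drupal): a callable is registered per service id, and get() invokes it to build a fresh instance, passing the container in so the callable can resolve its own dependencies.

```php
<?php

class Container {
  private $factories = [];

  public function set($id, callable $factory) {
    $this->factories[$id] = $factory;
  }

  public function get($id) {
    // Invoke the registered callable; it receives the container so it
    // can resolve whatever dependencies it needs.
    return ($this->factories[$id])($this);
  }
}

class Database {
  public $dsn;
  public function __construct($dsn) { $this->dsn = $dsn; }
}

$container = new Container();
$container->set('database', function (Container $c) {
  return new Database('mysql:host=localhost');
});

$db = $container->get('database');
echo $db->dsn; // mysql:host=localhost
```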

Dependency Injection: Advantages & Disadvantages

Advantages

  • Reduces coupling between the objects in an application.
  • Makes unit testing easier.
  • Promotes re-usability of code or objects in different applications.
  • Promotes logical abstraction of components.

Disadvantages

  • DI increases complexity, usually by increasing the number of classes since responsibilities are separated more, which is not always beneficial.
  • Code becomes coupled to the dependency injection framework.
  • It takes time to learn.
  • If misunderstood, it can lead to more harm than good.

Summary

Dependency injection is a simple concept: decouple your code so it is easier to read and test. By injecting dependencies into objects we can isolate their purpose and easily swap them with others.

The service container is basically there to manage a set of classes. It keeps track of what a certain service needs before it gets instantiated, does the instantiation for you, and all you have to do is ask the container for that service. Using it the right way will save time and frustration, while Drupal developers will even make it easier for the layman.

Jul 21 2017
Jul 21

CiviCooP, Systopia, and Palasthotel have been working together on CiviProxy and CiviMcRestFace. This blog is a round-up of what we have achieved in the last couple of days. The first thing we achieved is that we had fun and a very good working atmosphere. We put in long days and made lots of progress.

What are CiviProxy and CiviMcRestFace?

CiviProxy is a script that acts as an application firewall for CiviCRM. It can be used to put your CiviCRM in a secure network. CiviProxy is the gatekeeper to which external systems, such as your website, connect (for example, when a user signs a petition on your website and the website submits this data to your CiviCRM). CiviProxy will make sure the call comes from the right place (IP address) and only does what it is allowed to do.

CiviMcRestFace (CiviMRF) is a framework used in other systems (such as your external website) to connect to CiviCRM. The framework itself is divided into three parts: the abstract core (CMS/system independent), the core implementation (e.g. a Drupal 7 implementation), and lastly the module that does the actual submission (for example the cmrf_webform module, which provides the functionality to submit a webform to CiviCRM).

What we have achieved:

  • Completed the documentation on CiviProxy: https://docs.civicrm.org/civiproxy/en/latest
  • Got a working drupal 7 module with CiviMcRestFace:
    • Completed screens for setting up connection profiles (you can also provide the connection credentials through your own module with an API, so that you can store them somewhere outside the database)
    • Completed the screen for the call log (a call is a submission to CiviCRM through CiviMcRestFace)
    • Added functionality to queue calls and run them in the background and added functionality to retry failed calls
    • Added a basic webform integration module to submit a webform to the CiviCRM Api
    • Added a Rules integration module so that you can perform additional actions when a call succeeds or fails. A likely use case: when a call fails, you want to send the data by e-mail to the CiviCRM administrator so that he or she can enter the data manually.
    • Added an example module so you can see how you could use the cmrf_core module in your Drupal projects
    • Code: https://github.com/CiviMRF/cmrf_core/tree/7.x-dev
  • Got a start on the Drupal 8 module for CiviMcRestFace: https://github.com/CiviMRF/cmrf_core/tree/8.x-dev
Jul 20 2017
Jul 20

Last weekend I had the pleasure of attending Drupal Camp Asheville 2017 ('twas my fourth year in a row : ). I absolutely love this event and encourage you to consider putting it on your list of Drupal events to hit next year. The Asheville area is a beautiful (and delicious) place to spend the weekend, but the bigger draw for me is the people involved:

Drupal Camp Asheville is always well organized (seriously, it's among the best of the best small conferences for venue, amenities, and content) and attended by a solid blend of seasoned Drupal users / contributors and newcomers. I live only an hour away, so I get to interact with my Drupal friends from Blue Oak Interactive, New Valley Media, and Kanopi on occasion, but then on Camp weekend I also get to see a regular mix of folks from Mediacurrent, Code Journeymen, Lullabot, Palantir, FFW, CivicActions, end users like NOAA, and more.

This year we got to hear from Adam Bergstein as the keynote speaker. Unfortunately, that "we" didn't include me at first, as I managed to roll up right after Adam spoke ... but his keynote is on YouTube already thanks to Kevin Thull! I encourage you to give it a listen to hear how Adam's experience learning to work against his own "winning strategy" as a developer (that of a honey badger ; ) helped him gain empathy for his fellow team members and find purpose in collaborative problem solving to make the world a better place.

I gave a presentation on Drupal Commerce 2.x focusing on how we've improved the out-of-the-box experience since Commerce 1.x. This was fun to deliver, because we really have added quite a bit more functionality along with a better customer experience in the core of Commerce 2.x itself. These improvements continued all the way up to our first release candidate tagged earlier this month, which included new promotions, coupons, and payment capabilities.

Many folks were surprised by how far along Commerce 2.x is, but now that Bojan has decompressed from the RC1 sprint, I expect we'll start to share more about the new goodies on the Drupal Commerce blog. (If you're so inclined, you can subscribe to our newsletter to get bi-weekly news / updates as well.)

Lastly, I loved just hanging out and catching up with friends at the venue and at the afterparty. I played several rounds of a very fun competitive card game in development by Ken Rickard (follow him to find out when his Kickstarter launches!). I also enjoyed several rounds of pool with other Drupallers in the evening and closed out the night with cocktails at Imperial Life, one of my favorite cocktail bars in Asheville. I treasure these kinds of social interactions with people I otherwise only see as usernames and Twitter handles online.

Can't wait to do it again next year!

Jul 20 2017
Jul 20

As a Belgian sports fan, I will always be loyal to the Belgium National Football Team. However, I am willing to extend my allegiance to Arsenal F.C. because they recently launched their new site in Drupal 8! As one of the most successful teams in England's Premier League, Arsenal has been lacing up for over 130 years. On the new Drupal 8 site, Arsenal fans can access news, club history, ticket services, and live match results. This is also a great example of collaboration, with two Drupal companies working together: Inviqa in the UK and Phase2 in the US. If you want to see Drupal 8 on Arsenal's roster, check out https://www.arsenal.com!

Arsenal
Jul 20 2017
Jul 20

When dealing with a site migration that has hundreds of thousands of nodes with larger than usual field values, you might notice some performance issues.

In one instance recently I had to write a migration for nodes that had multiple fields of huge JSON strings and parse them. The migration itself was solid, but I kept running into memory usage warnings that would stop the migration in its tracks.

Sometime during the migration, I would see these messages:

  • Memory usage is 2.57 GB (85% of limit 3.02 GB), reclaiming memory.
    [warning]
  • Memory usage is now 2.57 GB (85% of limit 3.02 GB), not enough reclaimed, starting new batch
    [warning]
  • Processed 1007 items (1007 created, 0 updated, 0 failed, 0 ignored) - done with 'nodes_articles'

The migration would then stop importing items as if it had finished, even though there were still several hundred thousand nodes left to import. Running the import again would produce the same result.

I found a few issues on drupal.org that show others have been having similar issues:
https://www.drupal.org/node/2701335
https://www.drupal.org/node/2701121

The Drupal site was up to date and the patches provided in those issues weren't working. The ideal solution would be to fix the problem so that the migrations would resume after memory was freed, but because there wasn't enough time to dig into the cause of the issue, I opted for another solution.

Oftentimes it can be useful to create a bash script to run your migrations for you.  That way you don't have to chain drush migrate-import commands together.  So writing a bash script like this:

#!/usr/bin/env bash

echo "Importing users";
drush mi users;
echo "Importing terms";
drush mi terms;
echo "Importing articles";
drush mi nodes_articles;
echo "Importing others";
drush mi other_nodes;

...can help save keystrokes.

When I ran into the memory issues with these larger migration items I thought it might be easier to apply a solution to the bash script since there was nothing inherently wrong with the migrations themselves.

I came up with this bash method:

migration_loop()
{
	# Get the output of the drush status.
	drush_output=$(drush ms | grep $1);

	# Split output string into an array.
	output=( $drush_output );

	# Output the status items.
	for index in "${!output[@]}"
	do

    	if [ $index == "0" ]
    	then
        	echo "Migration: ${output[index]}";
    	fi

    	if [ $index == "1" ]
    	then
        	echo "Status: ${output[index]}";
    	fi

    	if [ $index == "2" ]
    	then
        	echo "Total: ${output[index]}";
    	fi

    	if [ $index == "3" ]
    	then
        	echo "Imported: ${output[index]}";
    	fi

    	if [ $index == "4" ]
    	then
        	echo "Remaining: ${output[index]}";
    	fi

	done

	# Check if all items were imported.
	if [ "${output[4]}" == "0" ]
	then
    	echo "No items left to import.";
	else
    	echo "There are ${output[4]} remaining ${output[0]} items to be imported.";
    	echo "Running command: drush mi $1";
    	echo "...";
    	# Run the migration until it stops.
    	drush mi $1;
    	# Run the check on this migration again.
    	migration_loop $1;
	fi
}

The loop is pretty simple.  It reads the output of drush migrate-status for a given migration, using grep as a filter.  It then prints out some information about the migration and determines whether any items remain to be imported.

Based on how many items the drush output reports as remaining to be imported, it will either run the migration again...

Migration: nodes_articles
Status: Idle
Total: 62294
Imported: 50672
Remaining: 11622

There are 11622 remaining nodes_articles items to be imported.
Running command: drush mi nodes_articles

or end the loop...

Migration: terms
Status: Idle
Total: 8536
Imported: 8536
Remaining: 0

No items left to import.

Here is a full example of the script:

#!/usr/bin/env bash

migration_loop()
{
	# Better readability with separation.
	echo "========================";
	# Get the output of the drush status.
	drush_output=$(drush ms | grep $1);

	# Split output string into an array.
	output=( $drush_output );

	# Output the status items.
	for index in "${!output[@]}"
	do

    	if [ $index == "0" ]
    	then
        	echo "Migration: ${output[index]}";
    	fi

    	if [ $index == "1" ]
    	then
        	echo "Status: ${output[index]}";
    	fi

    	if [ $index == "2" ]
    	then
        	echo "Total: ${output[index]}";
    	fi

    	if [ $index == "3" ]
    	then
        	echo "Imported: ${output[index]}";
    	fi

    	if [ $index == "4" ]
    	then
        	echo "Remaining: ${output[index]}";
    	fi

	done

	# Check if all items were imported.
	if [ "${output[4]}" == "0" ]
	then
    	echo "No items left to import.";
	else
    	echo "There are ${output[4]} remaining ${output[0]} items to be imported.";
    	echo "Running command: drush mi $1";
    	echo "...";
    	# Run the migration until it stops.
    	drush mi $1;
    	# Run the check on this migration again.
    	migration_loop $1;
	fi
}

migration_loop users;
migration_loop terms;
migration_loop article_nodes;
migration_loop other_nodes;

With this, you can circumvent any memory issues you may encounter with large migrations when time is limited.

Additional Resources:
Migration with Custom Values in Drupal 8 | Blog
Drupal 8: How to Reference a Views' Block Display From a Field | Blog
Rethinking Theme Structure in Drupal 8 | Blog

Jul 20 2017
Jul 20

Our Client

The Fine Arts Museums of San Francisco (FAMSF) is the largest public arts institution in the city of San Francisco and one of the largest art museums in the state of California. With an annual combined attendance of 1,442,200 people for the two museums (the Legion of Honor and the de Young), FAMSF sought a way to expand the experience for attendees beyond the reach of the physical exhibits themselves and to deepen visitors’ engagement with the art. From this goal, the idea of ‘Digital Stories’ was born.

The Challenge of ‘Digital Stories’

FAMSF had an interesting challenge:

  • They wanted to create engaging, interactive websites for each future exhibition. 
  • They wanted each exhibit website to be unique – not employing the same template over and over. 
  • They wanted these websites to serve as educational tools for their exhibits over the course of many years. 
  • They required a platform that museum staff could use to author without starting from scratch. 
  • They needed to create these websites for their two museums — the Legion of Honor and the de Young — with different branding appropriate to each museum and to their respective exhibitions. 

In short, they required a tool that allowed them to “spin up” unique, interactive educational microsites for multiple exhibits, across two museums, for several years.

FAMSF had seen various treatments of text, images, audio, and video on the web that they felt could be used as inspiration for interactive features for their content, that when combined together could provide a larger learning experience for visitors. Those treatments included an expansive use of the standards in HTML5 and CSS3, along with a series of exciting Javascript libraries that expand interactions further than what is offered through HTML and CSS.

The problem also required more than a front-end solution to create the interactions. It needed to be built on a content management system — Drupal 8 in this case — that could support their content editors, providing them with a tool where they could simply upload and arrange content to produce amazing, dynamic exhibit sites.

The Solution

Understanding the brands

In order to create an adaptable template for the museums, we needed to first understand the two brands. The Legion of Honor displays a European collection that spans ancient cultures to early modernism. The exhibits are seen as authoritative manifestos. The de Young, on the other hand, houses diverse collections including works from Africa, Oceania, Mesoamerica, and American Art. The exhibits are challenging and exploratory, and invite visitors to think about art in new and different ways. The framework for the microsites needed to be flexible enough to convey either brand effectively.

Understanding the content

The FAMSF project was unique in that it wasn’t the typical content strategy we do for websites. Because this project was more interaction and feature driven, our content strategy was focused on the different elements of the stories to be told, and how we could showcase those elements to create an expansive experience for visitors. For example, users should be able to zoom in on a painting to get a closer look, or be able to click on a question and see the answer display.

Creating the right interactive features

With so many different possible elements, it was important to narrow down the interactions and feature components that they needed. These components needed to match their content and also have the ability to be executed in a tight timeline.

For overall presentation treatment, we introduced a flexible content area where individual sections could be introduced as revealable “card sections”. Within each card section, a site administrator can first choose to add hero content for that section which could include either background images or background video, plus various options for placement and style of animated headers.

Next within the card section, a series of “Section Layout Components” were available, such as single column or two columns side-by-side that they could choose from. Within the column sections they could place modular content components that included media (video, images, audio) and text.

Menu features for the Early Monet site, one of the Digital Stories for the Legion of Honor Museum.

We used a custom implementation of several JavaScript and jQuery libraries to achieve the card-reveal effect (pagePiling.js) and animated CSS3 transitions as slides are revealed, using a suite of CSS3 animation effects, particularly for Hero sections of slides. Additionally, implementation of a JavaScript library (lazyloadxt) for lazy-loading of images was critical for the success of the desired media-rich pages in order to optimize performance. All were coded to work on modern mobile and desktop browsers, so that every experience would be rich, no matter the type of device it was displayed on.

Many interactive components went through a process of discovery and iteration achieved through team collaboration, taking into account the strategic needs of each component to increase user engagement, along with content requirements of the component, look, feel and interactivity. Components as well as the general treatment were presented as proof-of-concept, where additional client feedback was taken into account. Most interactivity on individual components was done by creating custom jQuery behaviors and CSS3 animation for each component. This often included animated transitional effects to help reveal more pieces of content as users look more closely.

Collapsible content displayed on the “Summer of Love” site, one of the Digital Stories for the de Young Museum.

Applying the FAMSF brand to design components

Although the same colors and typefaces employed in FAMSF’s main website were used, it was agreed from the beginning that the Digital Stories and the main website were going to be “cousins” within the same family as opposed to “siblings,” so they could definitely have their own unique feel. This supported the goal of designing the microsites to be an immersive and very targeted experience. This was achieved by expanding upon the existing color palette and using additional fonts within the brand’s font family.

Style tile created for the de Young: playful / challenging / contemporary / exploratory. The de Young style tile creates a sense of excitement and delight through the use of whimsical icons and graphics. Easily recognizable iconography is incorporated in order to communicate effectively with a wide audience, with the added bonus of fun details such as saturated drop shadows and stripes.

In order to make sure the FAMSF team could reliably reproduce new, unique exhibit sites without having to change any code, we had to systematize the structure of the content and application of the interactions.

The Digital Story Content type for each exhibition had modular and reusable interactive features including the following:

  • An image comparison component for comparing two or three images side by side, where revealable text can give more context for each image 
  • An audio component for uploading audio files that included custom playback buttons and a revealable transcript
  • The ability to add a highly customizable caption or credit to any instance of an image or video
  • A zoomable image component where markers can be positioned on the initial image and that marker can be clicked, revealing a more detailed image and an area for more commentary on that detail
  • A revealable “read more” section that can contain further subsections
  • An image with an overlay, to be able to reveal a new image on top of the existing image. This was used to demonstrate aspects of the composition of a painting, showing a drawing on top of the painting.
  • A video component that could support uploaded videos or embedded streaming video
  • A horizontal slider that could contain images and captions with a variety of configuration
  • A stand-alone quotation with display type and an animated transition

The Results

The resulting platform we built allowed FAMSF to launch two exhibit sites in rather quick succession, which would have been incredibly difficult if they had to build each from scratch. In a matter of weeks, FAMSF launched two quite different interactive learning experiences:

Both exhibit sites have received praise, not only internally at FAMSF, but from online reviews of the exhibits, which mention the accompanying Digital Stories online learning tool.

Since the completion of the engagement with Palantir, FAMSF has already leveraged this tool to create an additional Digital Stories site (digitalstories.famsf.org/degas), and they have plans to create at least three more before the end of the year. Because of the simplicity of using the platform, they anticipate being able to spin up 4 - 5 different exhibit sites per year.

Current success for the Digital Stories sites is being measured by individual views and actual participation rate, and the initial results are on track with FAMSF’s initial goals:

  • The Monet site has over 30,000 views
  • The Summer of Love site has just under 30,000 views
  • Visitors are typically spending 4 - 5 minutes on each page

We’re pleased to have been part of a project that helps expand visitors’ understanding of important artists and their works. FAMSF was a great partner, allowing for a true collaboration focused on both pairing the best technologies to fit the material and also providing the best learning mechanism for those engaging with the content.

Jul 20 2017
Jul 20
GSOC

I am working on adding support for The League OAuth2 library and new implementers for Social Auth and Social Post, under the mentorship of Getulio Sánchez "gvso" (Paraguay) and Daniel Harris “dahacouk” (UK).

Last week, I created the implementers for the officially supported base clients of The League OAuth2 client library. Due to the large number of implementers involved and the time required to thoroughly review each one, my GSoC mentor Getulio Sánchez advised me to start working on my first social post implementer using The League OAuth2 library. The first social post implementer I chose to create is Social Post Facebook.

Here are some of the things that I worked on during the 7th week of GSoC coding period.

Defining a Social Post Data Handler in Social Post - To keep the code efficient and readable, SocialPostDataHandler.php was defined to write and read data from the session.
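The idea of a session data handler can be pictured as a thin wrapper around `$_SESSION`; here is a minimal sketch (the class and method names are illustrative, not the module's actual API):

```php
<?php
// Illustrative sketch of a session-backed data handler, modeled on the
// SocialPostDataHandler described above. Names are hypothetical.
class SessionDataHandler {

  /** @var string Prefix used to namespace this module's keys in the session. */
  private $prefix;

  public function __construct($prefix = 'social_post_facebook_') {
    $this->prefix = $prefix;
  }

  /** Reads a value from the session, or NULL when it is not set. */
  public function get($key) {
    $full_key = $this->prefix . $key;
    return isset($_SESSION[$full_key]) ? $_SESSION[$full_key] : NULL;
  }

  /** Writes a value into the session under the prefixed key. */
  public function set($key, $value) {
    $_SESSION[$this->prefix . $key] = $value;
  }

}

// Example: persist the OAuth2 "state" between the redirect and the callback.
$_SESSION = [];  // In a real request this is populated by session_start().
$handler = new SessionDataHandler();
$handler->set('oauth2state', 'abc123');
echo $handler->get('oauth2state');  // prints abc123
```

Prefixing the keys keeps the module's session data from colliding with values stored by other modules.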

Creating the first social_post implementer - The first social_post implementer I chose to implement is social_post_facebook. We’ll be using the Facebook Provider for OAuth 2.0 Client for authentication purposes and the Facebook Graph SDK to make API calls on behalf of the user.

  • Checking the required libraries for the Social Post implementer: we’ll extend The League OAuth2 client by using the Facebook provider, so we need to make sure that the library is installed. We’ll also require the Facebook Graph SDK to make API calls to post on behalf of the user.

  • Creating the social_post_facebook settings form page: a new settings form was created to configure Social Post Facebook. It’s similar to the Social Auth Facebook form, except for some fields, such as data points, which are not required in Social Post implementers.

  • Creating an instance of the \League\OAuth2\Client\Provider\Facebook class.

  • Refactoring Social Post Twitter - Social Post Twitter was created as part of last year’s GSoC project; although the Social Post Facebook implementation differs from Social Post Twitter’s, code written before can be reused.

  • Getting an authorization URL: apart from requesting user data, we need to explicitly request permission to post on behalf of the user. This permission is requested via the ‘publish_actions’ scope.

$authUrl = $provider->getAuthorizationUrl([
    'scope' => ['email', 'publish_actions'],
]);

// Later, in the redirect callback, the authorization code is exchanged
// for an access token.
$token = $provider->getAccessToken('authorization_code', [
    'code' => $_GET['code'],
]);

These were some of the important topics related to my project that I had to work on during my seventh week. My goal for the next week is to complete this Social Post implementer and then work on other Social Auth implementers.

Jul 20 2017
Jul 20

We are in the seventh week of GSoC 17 to integrate Google Cloud Machine Learning Engine into Drupal. Before proceeding, please have a look at our previous blogs, especially the major planning week. This week started with the aspiration to create a basic version of the ml_engine module, but we only got as far as creating basic entities. The major problem we faced was a set of daunting bugs in creating the entities. I made the mistake of spending almost a week trying to solve the issue without updating the issue queue; I have recently added it to the issue page. Let me share a screencast of the progress.

[embedded content]

Our initial reference for creating the entity was the Examples module. It has a sub-module, content_entity_example, that explains how to create a content entity to store contacts. We made a similar one called Project by tweaking a few functionalities. Here is the code. It had the problem of sharing the same Tensorflow input fields across all projects: the fields we created using entity "manage fields" were reflected in all Project entities. Still, it was a good place to start playing with entities. With the design goal of using different inputs for each project in mind, we consulted the Contact and Node modules that ship with core. That gave us the insight to store the ML Engine Project and its Input as separate Drupal entities.

The Project is a config entity that handles the ML Engine settings data: it holds job, model and version names, training steps, deployment URL, etc. The Argument is a content entity that handles the command line arguments to the Tensorflow code. The Argument entity's fields can vary per project, but the Project's fields are fixed.
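To make the split concrete, a config entity like Project would be declared roughly as below; this is a hedged sketch, and the entity ID, keys, and exported property names are illustrative guesses, not the project's actual code:

```php
<?php
// Hypothetical sketch of a Project config entity annotation. All names
// below are illustrative, not taken from the ml_engine module itself.

use Drupal\Core\Config\Entity\ConfigEntityBase;

/**
 * @ConfigEntityType(
 *   id = "ml_engine_project",
 *   label = @Translation("ML Engine Project"),
 *   entity_keys = {
 *     "id" = "id",
 *     "label" = "label"
 *   },
 *   config_export = {
 *     "id",
 *     "label",
 *     "job_name",
 *     "model_name",
 *     "version_name",
 *     "training_steps",
 *     "deployment_url"
 *   }
 * )
 */
class Project extends ConfigEntityBase {
  // The fixed, exportable settings shared by every project live here; the
  // variable Tensorflow arguments live on the separate Argument content
  // entity, which gets configurable fields per project.
}
```

This mirrors the distinction in the paragraph above: config entities carry a fixed, deployable schema, while content entities can carry per-instance fields.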

With one week left before the second major evaluation, we have some major work to complete to achieve the deliverables. The first task is to finish coding the entities. Second, convert the selected Drupal Views to CSV. Third, enable each Project entity to run training, deployment, and prediction. Finally, create a plugin to format the prediction output. The second and final tasks are the hard ones. I am expecting the coming week to be hectic. We will complete the coding and submit it for mentor review.

Jul 20 2017
Jul 20

Everybody wants to rank high in web search engine results, because when people look for information, they rarely get past the first page of Google, Yahoo, or Bing. So, if you don’t show up high in search engine results, potential customers will have a hard time finding you. Or, most likely, they won’t find you at all. Luckily, there are some ways to boost your rankings in search engine results.

With Modules

There are many Drupal modules that will help you enhance your search engine optimization (SEO). In fact, we have already made a list of the best Drupal SEO modules to optimize your website. The list includes Google Analytics, SEO Checklist, Metatag, Content Optimizer, Pathauto and so on. You can't be expected to try all of them, so it's up to you to test some of them and see if the results match your expectations.

Drupal SEO Modules

With Keywords

It's crucial to define the keywords you will target, since these keywords will be at the centre of your SEO campaign. But to do so, you must first research them. Some of the Drupal SEO modules will help you keep control over your keywords, but you have to find them first. The easiest way to do so is with Google Trends, which lets you see how a specific keyword has performed over the years; with that, you will know if there is any traffic potential in that keyword. After you have selected your keywords, you can use a keyword planner to see how frequently terms are searched per month, how competitive they are for ranking, and how much they cost to advertise on.

With Images

Readers like images not only because they break the monotony of long stretches of text; people search for images as well, not just for text. And your content will be more easily discovered if your images score high, too. A little tip for achieving this: make sure that at least one of the alternative texts of the images in your article or blog post contains a focus keyword from your content. However, keep in mind that images which are not compressed and optimised for serving over slower internet connections are asking for trouble.

SEO

With speed and without dead links

Make sure your page loads fast. Search engines don't favour sites with high bounce rates due to slow loading speed, so make sure your site is not a “slow loader”. Another problem we still occasionally run into is dead links: links that point to non-existing locations, or that lead to sites that no longer work. Broken links also lower your score, so make sure you don't have any. In the same vein, your URLs should be clean, readable, and should contain the page title.

With continuous updates

SEO is a moving target, so it requires continuous improvement. Ideas for improving your Drupal site can come from your competitors as well, so make sure you monitor them. We hope we have helped you on your way to ranking better in search engine results. But keep one crucial thing in mind: first and foremost, you write content for your Drupal site's visitors, and only then do you optimise for search engines. Not the other way around!

Jul 20 2017
Jul 20

Do you want to allow your content editors to easily retrieve and embed media (images, videos, sounds, presentations) from third-party platforms? The Media Entity suite has another string to its bow with the URL Embed module. This module uses the PHP Embed library and allows you to retrieve, from a simple URL, media from third-party platforms like Twitter, YouTube, Instagram, SoundCloud, Spotify, Vimeo, Slideshare, etc., to mention only the best known.

This module allows you to embed remote media within a text field or from a dedicated link field type. Let us discover how this module works; it is quite simple.

Installing the module

You can install this module with Composer (composer require drupal/url_embed ~1.0). This method will download the module as well as its dependencies, namely the Embed module and the PHP Embed library. If you want to install it with drush, or by downloading its archive, you will have to download these dependencies yourself, and in particular place the PHP library in the /vendor directory of your site.

Configuration and use with CKEditor

Once installed, this module provides a default embed button. This button will later be available in the CKEditor options for the text formats that use it. You can customize this button, change its icon, and add as many buttons as you need.

The URL Embed button for inserting media into a body field

This button will then appear among the available CKEditor buttons.

CKEditor configuration

To enable media embedding, drag the previously created button into the active CKEditor toolbar. In the example below we configure the Full HTML text format.

We also activate the two filters provided by the URL Embed module.

URL embed filters

The first filter, Convert URLs to URL embeds, automatically converts a link pasted directly into the text body, transforming it into a URL that can be processed by the second filter.

The second filter, Display embedded URLs, transforms the link added with the dedicated media dialog box.

In order to automatically transform URLs pasted directly into the body, the first filter must be placed before the second in the filter processing order applied to the text format.

Filter order

If you have enabled the Limit allowed HTML tags and correct faulty HTML filter, then the following tag should be added to the Allowed Tags input field:

<drupal-url data-*>

As shown in the capture below.

Allowed tags

You also have the option of filtering the type of url that you want to convert automatically. For example, if you want only Twitter urls to be converted automatically, you can enter https://twitter.com in this field. Leave it empty to allow any type of urls.

Limiting converted URLs by URL type

And your text format is configured. All you have to do now is insert media using links, either with the button and its dialog box,

Inserting a remote media item

Or simply by copying / pasting the url into the body of the text.

Inserting media into a body field

And the corresponding rendering

The Twitter media embedded in the body text

Using Embed URLs with fields

The use of this module with Link fields is even easier.

After you add a Link field to a content type, you simply configure its display mode and select the Embedded URL format.

The Embedded URL display format

And you just need to enter any supported URL in the field.

Inserting a video link with URL Embed

For the following rendering

A video rendered from a field formatted with URL Embed

Embed URL and Media Entity Suite

The Media Entity suite already includes many extensions to add many media types (image, video, document, etc.) to a structured catalog, as well as remote media such as Twitter, Slideshare, Soundcloud, Spotify, Youtube, Dailymotion, etc.

URL Embed is a lighter alternative, with fewer options, for those who want to easily insert different types of remote media without setting up an internal media library system. Although still in alpha, the module is quite operational, despite some small bugs on certain types of media or URLs (some providers (Facebook, Twitter) render badly in the editor, and some URLs are rendered correctly only the first time), and it still requires some minor improvements before a stable version can be released. A module to keep on your favorites list.

Jul 19 2017
Jul 19
When Acquia-related developer content starts piling up around the Web, it's time for an Acquia Dev Scan. In this edition: Taming Expanding Databases in Drupal 8, Reservoir on DrupalEasy, and a Conference for Decoupled Drupal Developers.

Dealing with expanding cache_render tables in Drupal 8

There are a number of scenarios in Drupal 8 where you might notice your MySQL database size starts growing incredibly fast, even if you're not adding any content.
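One mitigation worth knowing about: newer Drupal 8 core releases (8.4 and later) let you cap the size of database cache bins from settings.php. A hedged config sketch, based on the setting documented in core's default.settings.php:

```php
// settings.php — cap how many rows Drupal keeps per database cache bin
// (available in Drupal core 8.4+; see default.settings.php for details).
$settings['database_cache_max_rows']['default'] = 5000;
// Bins can also be capped individually, e.g. the render cache:
$settings['database_cache_max_rows']['bins']['render'] = 5000;
```

With this in place, a garbage-collection pass trims each capped bin back down instead of letting cache_render grow without bound.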


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
