
Mar 23 2017

Recently, I had to create a slideshow for a project. Nothing unusual about that, you say. Indeed, everywhere you look you see slideshows. If there's one thing that's common to 99% of all websites today, it's a slideshow. Almost boring. That is, until you actually start to implement one. There must be 50 ways to slide your images - and I don't mean the transition effects. Picking and choosing the right module and library is almost a burden.

There are various scenarios where you would want to display a slideshow. The most common is the home page. I've used the venerable Views Slideshow module (2007) in the past for this purpose. It's simple enough to implement and is available for D7, D8 as well as Backdrop.

Slider and its carousel

In my use case, an existing corporate website which I did not build and didn't know very well, the slideshow was for a project page - like a one-page website within the website - which was to house photos, videos and texts related to a single promotional event. My first thought was to repurpose available modules and libraries if I could.

The wireframes showed a slider with an image carousel underneath in place of the usual circles - nice! Because the setup needed to work for both images and videos, I decided to create a prototype to test things out.

Views Slideshow was installed and used for the home page banner, so I started experimenting with that, but I wasn't totally satisfied with my options. The same went for Galleria (2008): I couldn't get it to work with YouTube videos.

In another recent Bootstrap-based project I had searched for (the search never ends, does it?), found and chosen Flex Slider because I needed to create a series of horizontal book carousels for a Swiss publishing house. Adopting the FlexSlider 2 library for that project saved me a lot of time because I got most of what I needed out of the box, including beautiful responsive carousels.

Book carousel

But for my corporate client, I needed to combine slider and carousel. So I headed out on the Web and searched for best practices and then screencasts that would help me fast track my prototype.

Bingo! I found an amazing screencast series by Drupal Legoland that showed exactly how to create both the slider and carousel with the Flex Slider module.

The trick is that you actually build 2 carousels and make them work together. The end result is beautiful and works like a charm. I can't say enough about the screencast series itself - it's a great example of how screencasts should be done - and to top it off, it's free! Kudos to the author!

So, if there are 50 ways to slide your images, I now feel like there's only one way to learn how to do it. First, check out the end result on the client's website (in the Photos tab) and then be sure to watch the Drupal Legoland screencast series here.

Cheers!

Mar 23 2017

Overview

JSON Web Tokens (JWT) are an open industry standard (RFC 7519) for representing a set of information securely between two parties. JWTs are commonly used for authentication to routes, services, and resources, and are digitally signed, which enables secure transmission of information that can be verified and trusted. Seen as a more modern approach to authentication, JWTs serve as a robust alternative to traditional authentication models - eliminating the need to pass sessions or credentials repeatedly to the server.

In this post, I outline the benefits of JWTs and their advantages over session-based authentication. I also share a step-by-step guide to setting up JWT authentication in Drupal 8 for authenticating requests to protected core REST resources.

The JWT Structure

A JWT contains 3 parts:

  • Header - typically contains the type of the token (e.g., “JWT”) and the hashing algorithm being used (e.g., “HS256”, “HS512”). This information is Base64Url-encoded.
  • Payload - contains the “claims”, which are statements with information about an entity (such as the user) and other types of metadata. This information is Base64Url-encoded.
  • Signature - used to check the authenticity of the JWT. It is generated by using the hashing algorithm specified in the header to hash the “header”, “payload”, and a secret key. The signature is crucial in verifying that the sender of the JWT is legitimate and that the message has not been tampered with along the way during transmission. If the message has been tampered with, the signature will not match because it was generated from the original payload data and will be invalid - failing the authentication.

Example of a JWT:
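For illustration, here is a minimal PHP sketch that assembles the three Base64Url-encoded parts into a token. The secret and the claims are made-up demo values; a real project would use a library such as firebase/php-jwt rather than rolling its own.

```php
<?php
// Illustrative sketch: assemble the three JWT parts by hand.
// The secret and claims below are made-up demo values.

function base64url_encode(string $data): string {
  return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
}

$secret  = 'demo-secret-key';
$header  = ['typ' => 'JWT', 'alg' => 'HS512'];
$payload = ['uid' => 1, 'exp' => time() + 3600];

$signing_input = base64url_encode(json_encode($header)) . '.'
               . base64url_encode(json_encode($payload));
$signature = base64url_encode(hash_hmac('sha512', $signing_input, $secret, TRUE));

// The token is simply header.payload.signature
echo $signing_input . '.' . $signature, PHP_EOL;
```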


Main Advantages

  • Compact - the small overhead allows for the quick transmission of information. Because of this, a JWT can be quickly sent in a URL, a POST parameter, or an HTTP header.
  • Self-contained - all required information about the user or client is contained in the JWT’s “payload” - which reduces database queries.
  • Works with CORS - token-based authentication allows for calls to any server by transmitting user information through the HTTP header.

JWT vs Session-based authentication

  • JWTs do not require session data to be kept on the server to perform authentication. For applications running on multiple servers, this alleviates the need for sharing session data across the servers.
  • JWTs carry their own expiry date (in the “payload”) and do not require the garbage collection that is needed when expiring sessions.
  • JWTs allow for true RESTful services as the communication between the parties is stateless - requiring a valid token to be included in each request.
  • The JWT can easily be sent with each request and contains all the information about the user/client - eliminating the need to reference the database to get the information.

Authentication Workflow

  1. Client logs in (or requests a JWT directly from the provider).
  2. A digitally-signed JWT is created with the secret key.
  3. A JWT is returned that contains information about the client. This JWT should be stored client-side, e.g. in localStorage.
  4. On each request, the JWT should be sent in the “Authorization” header (where <token> is the JWT):
Authorization: Bearer <token>
  5. The JWT is verified and validated. If the JWT has expired, a new one should be requested.
  6. If validated, the response is returned to the client.
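To make the signing and verification steps concrete, here is a self-contained PHP sketch of creating and then validating an HS512 token. The helper names and the demo secret are mine, not the JWT module's; a real application would use a library such as firebase/php-jwt.

```php
<?php
// Sketch of signing a JWT with a secret key, then verifying the
// signature and expiry on a later request. Illustrative only.

function b64url(string $d): string {
  return rtrim(strtr(base64_encode($d), '+/', '-_'), '=');
}

function jwt_sign(array $payload, string $secret): string {
  $segments = [
    b64url(json_encode(['typ' => 'JWT', 'alg' => 'HS512'])),
    b64url(json_encode($payload)),
  ];
  $input = implode('.', $segments);
  return $input . '.' . b64url(hash_hmac('sha512', $input, $secret, TRUE));
}

function jwt_verify(string $jwt, string $secret): bool {
  $parts = explode('.', $jwt);
  if (count($parts) !== 3) {
    return FALSE;
  }
  // Recompute the signature over header.payload and compare.
  $expected = b64url(hash_hmac('sha512', $parts[0] . '.' . $parts[1], $secret, TRUE));
  if (!hash_equals($expected, $parts[2])) {
    return FALSE;  // tampered with, or signed with a different key
  }
  // Reject expired tokens; the client should request a new one.
  $payload = json_decode(base64_decode(strtr($parts[1], '-_', '+/')), TRUE);
  return !isset($payload['exp']) || $payload['exp'] > time();
}

$token = jwt_sign(['uid' => 1, 'exp' => time() + 3600], 'demo-secret');
var_dump(jwt_verify($token, 'demo-secret'));   // bool(true)
var_dump(jwt_verify($token, 'wrong-secret'));  // bool(false)
```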

Setting Up in Drupal 8

We will go through the process of setting up JWT in Drupal 8 to authenticate requests to some core REST resources.

Install and enable the JWT module

The JWT module provides an authentication provider, backed by JWTs, that we can enable for our REST endpoints. It has a dependency on the Key module, which should also be enabled. The firebase/php-jwt library is included for encoding and decoding the JWTs.

The module can easily be installed with composer:

composer require drupal/jwt

Enable the module:

drush en jwt -y

We also want to enable the “JWT Auth Consumer” and “JWT Auth Issuer” modules that come with the JWT module (these modules will allow Drupal to issue and consume JWTs):

drush en jwt_auth_consumer jwt_auth_issuer -y

Create a secret key

To generate and validate JWTs, a secret key is needed. Go to /admin/config/system/keys/add (or go to the “Configuration” page and click “Keys”). On the form:

  • “Key name” - the name of the key.
  • “Key type” - select “JWT HMAC Key”. We will be using HMAC.
  • “JWT Algorithm” - select “HMAC using SHA-512 (HS512)”. Any of the other algorithms will also work.
  • “Key value” - provide a key with a length that satisfies the hashing algorithm chosen in “JWT Algorithm”. In our case, we need to provide a key that is a minimum of 1024 bits.
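The key value itself is just a long random string. One possible way to generate a 1024-bit (128-byte) value on the command line - assuming openssl is available - is:

```shell
# 128 random bytes = 1024 bits, base64-encoded so it can be pasted into the form
openssl rand -base64 128 | tr -d '\n'
```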

Example:

Save the key.

Use the secret key with JWT

Now we need to tell the JWT module to use the key we just created as the secret. Go to /admin/config/system/jwt (or go to the “Configuration” page and click “JWT Authentication”). Under “Algorithm”, select the algorithm that was set for your secret key. In our case, it will be “HMAC using SHA-512 (HS512)”. Under “Secret”, choose the secret key you created. Example:

Save the configuration.

Install and enable the REST and REST UI modules

In this example, we will be applying JWT authentication to a core REST resource, so we need to make sure the REST module and our resource are enabled:

Install the REST UI module:

composer require drupal/restui

Enable the modules:

drush en rest -y
drush en restui -y

Configure our REST resource

Go to /admin/config/services/rest and enable the “Content” resource:

Click “Edit” on the “Content” resource.
Enable the “GET” method, check the “json” option under “Accepted request formats”, and “jwt_auth” under “Authentication providers”:

Save the configuration.

As of Drupal 8.2.0, accessing entities via REST no longer requires REST-specific permissions. As a result, whether or not a user has access to an entity via REST depends on their permissions to access that entity. For the sake of our example, let’s prevent anonymous users from accessing any published content, which also restricts them from accessing those entities via REST:

Usage

Creating a piece of test content

We will create and publish a test “article” node whose JSON output we will access via REST. You can create any node you’d like. Once created, the JSON output for that node can be viewed by accessing /node/1?_format=json.

Accessing our protected resource

Now that our REST resource has been configured to use JWT Auth and anonymous users are restricted from accessing entities via REST, it’s time to test out the JWT authentication. 

Open your favorite REST client like Postman and enter the URL for the node’s JSON output (i.e., /node/1?_format=json). Using the GET method, hit “Send”. The response will be empty because we are trying to access this resource as an anonymous user (and we prevented anonymous users from accessing published nodes for this example): 

Since our REST resource is protected with JWT Authentication, we need to pass in a JWT to authenticate us. Logged in as an “administrator” user, visit /jwt/token to retrieve a JWT. You should see the JWT displayed when you visit that route:

Copy this entire token and add an “Authorization” header in your REST client with a value of “Bearer <token>” where <token> is the JWT:

Hit “Send” again and voilà - a response is returned:
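If you prefer the command line over a GUI REST client, the same request can be sketched with curl. The URL and token below are placeholders for your own site and JWT:

```shell
TOKEN="PASTE-YOUR-JWT-HERE"
curl -H "Authorization: Bearer ${TOKEN}" \
     "https://example.com/node/1?_format=json"
```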

How did this work?

Let’s discuss what happened here. The JWT module provides 3 events - VALIDATE, VALID, and GENERATE - which are dispatched by the event dispatcher. Custom validations and logic can be incorporated by creating your own event subscribers that subscribe to these events (more information about the events can be found in JwtAuthEvents.php).

The VALID event fires after the token has been validated (in other words, the JWT signature is verified as legitimate). The user is then loaded from the “uid” that is specified in the JWT “payload” (see loadUser() in JwtAuthConsumerSubscriber.php). Note that if no “uid” is in the payload, the user is “anonymous”. The JWT Authentication provider (JwtAuth.php) has an authenticate() method that validates this user and if the validation succeeds, that user will be able to access the REST resource. Because the JWT passed into the request’s header contains the “uid” of the “administrator” user -- and provided that administrators are able to access this resource -- the response is returned successfully.

Notes

We can also use the JWT Debugger to analyze any JWTs. If we paste our JWT into the debugger and put in our secret key, we can see a confirmation that our signature has been verified. We can also see that the JWT “payload” contains the administrator user’s uid and the expiration timestamp:

Mar 23 2017

It's that time of year again when everyone starts getting excited about DrupalCon.  People are getting geared up to attend sessions, meet up with team members and clients, and, let's not forget, load up on as much swag as possible.  But an important piece which often gets overlooked is the Summits that happen the Monday before the conference begins.  These events are happening again in Baltimore, and the Media and Publishing Summit is one you should consider attending.

Media has always been a weakness in Drupal.  Dries, Drupal’s founder and fearless leader, has long mentioned that he saw better media handling being critical to Drupal’s continued success and survival in the future.  Because of this, Dries announced at DrupalCon New Orleans that there would be an official Media Initiative to improve media handling in core.  At DrupalCon Baltimore, the Media and Publishing Summit will highlight what's new in Drupal for media and where the future is headed.

Here are the top agenda items of the day:

Update on the Media Initiative

Much of the work that is going to move into Drupal core already exists in the contrib space.  Modules like media_entity, entity_browser, and entity_embed allow for a much better way to handle your media in Drupal.  Creating, searching and attaching videos and files has become much easier and more intuitive in Drupal 8 than it was in past versions of Drupal. Dries will be giving an update on where these modules stand, what functionality is moving into core as a result of the Media Initiative, and what work still needs to be done.

Panel discussion

The Media and Publishing Summit will also feature a panel discussion that will include industry leaders discussing how they are using Drupal to enhance the digital experiences they provide.  They’ll discuss benefits, problems they’ve run into and the types of things they will need Drupal to be able to do.  You will be able to participate in the conversation which will allow you to use their experience to help improve your own site or brand.

Case studies

Specific case studies will give us all insight into how some of the biggest brands in the Media space have leveraged Drupal to create powerful media experiences for their users.

Future of digital media

Chuck Fishman, Director of Industry Marketing and Development at Acquia, will share trends in the Media industry and will discuss where these trends may go and what we may see in the industry in the future.  We will discuss how we may be able to position Drupal to be able to take advantage of these trends and remain relevant in the media space.

Round table discussions

We will also break out into smaller groups to discuss more targeted subjects.  We will have leaders from different parts of the industry head these discussions.  Sports and Entertainment, Print and Publishing, and Broadcast/Cable will be some of the sectors represented.  This will give you the ability to discuss issues and ideas relevant to your specific sector of the industry with your professional peers.

This will be an engaging day filled with great information from industry-leading professionals, Drupal core media developers, and hopefully you!  Be sure to sign up for the Media and Publishing Summit and grab your tickets now!

Date: Monday, April 24th
Time: 11:30am-5:00pm (lunch and afternoon coffee included)
Cost: $199 in advance | $250 on-site

Register Now

Mar 23 2017

A couple of the advantages of using nodes for these pages are:

  • Fieldable: nodes can have fields, so you can create really flexible pages, especially with modules like Paragraphs and Panelizer.
  • URL aliases: nodes can have URL aliases defined by the content editor, so they don’t have to wait for development to update paths.
  • Revisionable: nodes are revisionable which means that any change can be logged, viewed and reverted when needed.
  • Translatable: nodes can be translated, which allows you to serve different content for the same page per language. It’s really common to want different content per language.
  • Metatags: you can use the Metatag module on nodes to let the content editor edit the content of the metatags.

We created a little trick for this at .VDMi/ to simply generate those pages from a Drush command. The command checks if the pages already exist and will create them if they don’t. Because you want to be able to look up these pages from code, to link to the search page for example, every page has a custom machine name configured.

Creating the pages

So first we start with our custom Drush command:
my_module/my_module.drush.inc:

<?php

/**
 * Implements hook_drush_command().
 */
function my_module_drush_command() {
  $items = array();
  $items['default-pages'] = [
    'description' => 'Installs the default pages and registers them in configuration.',
    'arguments' => [],
    'drupal dependencies' => ['my_module'],
    'aliases' => [],
  ];
  return $items;
}

/**
 * Callback function for default pages.
 */
function drush_my_module_default_pages() {
  drush_print('Ensuring default pages:');

  /** @var Drupal\Component\EventDispatcher\ContainerAwareEventDispatcher $dispatcher */
  $dispatcher = \Drupal::service('event_dispatcher');
  $dispatcher->dispatch('my_module.default_pages');
}

In the Drush file, we define our custom Drush command “default-pages”. When we execute the command “drush default-pages” it will execute the function “drush_my_module_default_pages”. This function dispatches an event to the Drupal event system. Other modules can respond to this event, so that’s what we will do from our other module called my_front_module:
my_front_module/my_front_module.services.yml:

services:
  my_front_module.default_pages_subscriber:
    class: '\Drupal\my_front_module\EventSubscriber\DefaultPagesSubscriber'
    tags:
      - { name: 'event_subscriber' }

This YAML file defines a service with the event_subscriber tag, which registers it with the event dispatcher so it can respond to events.
The next file will be the event subscriber implementation:
my_front_module/src/EventSubscriber/DefaultPagesSubscriber.php:

<?php
/**
 * @file
 * Contains \Drupal\my_front_module\EventSubscriber\DefaultPagesSubscriber.
 */

namespace Drupal\my_front_module\EventSubscriber;

use Drupal\my_module\EventSubscriber\DefaultPagesBase;

/**
 * Event Subscriber DefaultPageSubscriber.
 */
class DefaultPagesSubscriber extends DefaultPagesBase {

  /**
   * {@inheritdoc}
   */
  protected function getDefaultPages() {
    $frontpage_base = [
      'type' => 'page',
      'path' => 'home',
      'title' => 'Home',
    ];

    return [
      'frontpage_nl' => $frontpage_base + [
        'field_description' => 'Dit is de voorpagina!',
      ],
      'frontpage_en' => $frontpage_base + [
        'field_description' => 'This is the frontpage!',
      ],
      'frontpage_de' => $frontpage_base + [
        'field_description' => 'Dies ist die Titelseite!',
      ],
    ];
  }
}

In this code we define the frontpage for 3 languages: Dutch, English and German. This class doesn’t do anything else on its own. It relies on the “DefaultPagesBase” class to actually create the pages:
my_module/src/EventSubscriber/DefaultPagesBase.php:

<?php
/**
 * @file
 * Contains \Drupal\my_module\EventSubscriber\DefaultPagesBase.
 */

namespace Drupal\my_module\EventSubscriber;

use Symfony\Component\EventDispatcher\Event;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Drupal\node\Entity\Node;
use Drupal\Core\Language\LanguageInterface;

/**
 * Event Subscriber DefaultPageSubscriber.
 */
abstract class DefaultPagesBase implements EventSubscriberInterface {

  public static $stateKey = 'my_module.default_pages';

  public function onCreateDefaultPages(Event $event) {
    $default_pages = $this->getDefaultPages();

    if (count($default_pages)) {
      foreach ($default_pages as $id => $default_page) {
        if ($this->defaultPageExists($id)) {
          drush_print($id . ': Page already exists!');
          continue;
        }

        if ($this->createDefaultPage($id, $default_page)) {
          drush_print($id . ': Page created!');
        }
        else {
          drush_set_error($id . ': Error while creating page!');
        }
      }
    }
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events['my_module.default_pages'][] = ['onCreateDefaultPages'];
    return $events;
  }

  /**
   * @return array
   */
  abstract protected function getDefaultPages();

  /**
   * @param $id
   * @return bool
   */
  private function defaultPageExists($id) {
    $default_pages = \Drupal::state()->get(self::$stateKey, array());
    $entity_id = (isset($default_pages[$id]) ? $default_pages[$id] : NULL);

    if ($entity_id && $node = Node::load($entity_id)) {
      return TRUE;
    }

    return FALSE;
  }

  /**
   * @param $id
   * @param $page
   * @return bool
   */
  private function createDefaultPage($id, $page) {

    if (!isset($page['langcode'])) {
      $page['langcode'] = LanguageInterface::LANGCODE_NOT_SPECIFIED;
    }

    $path = FALSE;
    if (isset($page['path'])) {
      $path = $page['path'];
      unset($page['path']);
    }

    try {
      $node = Node::create($page);
      $node->save();

      if ($path) {
        /* @var \Drupal\Core\Path\AliasStorageInterface $path_storage */
        $path_storage = \Drupal::service('path.alias_storage');
        $saved_path = $path_storage->save("/node/" . $node->id(), '/' . $path, $page['langcode']);
      }

      $default_pages = \Drupal::state()->get(self::$stateKey, array());
      $default_pages[$id] = $node->id();
      \Drupal::state()->set(self::$stateKey, $default_pages);

      return TRUE;
    }
    catch (\Exception $e) {
      drush_set_error('Error while creating default page:');
      drush_set_error($e->getMessage());
    }

    return FALSE;
  }
}

The code above is what actually responds to the event and creates the nodes. It loads the default pages from the current class, loops through them, and checks one by one whether each page already exists; if it doesn’t, it creates it. It uses the Drupal State API to keep track of the pages that have been created and their node IDs. We use the State API and not the configuration API because we don’t want these values to transfer between environments, as content also does not. We can use this State API to find a default page again later:

<?php
use Drupal\node\Entity\Node;

$current_language = \Drupal::languageManager()->getCurrentLanguage();
$default_pages = \Drupal::state()->get('my_module.default_pages', array());
$node_id = (isset($default_pages['frontpage_' . $current_language->getId()]) ? $default_pages['frontpage_' . $current_language->getId()] : NULL);

if (!$node_id) {
  return NULL;
}

$node = Node::load($node_id);

Making sure the editors can’t delete the default pages

Let’s be fair, we're all human, and we all make mistakes. Sometimes content editors do stuff they shouldn’t. And as these default pages are content, mistakes can happen to them, like accidental content removal. The following code makes sure content editors can't delete default pages.
my_module/my_module.module:

<?php

use Drupal\Core\Access\AccessResult;

/**
 * Implements hook_node_access().
 *
 * Makes sure default pages nodes aren't deleted.
 */
function my_module_node_access(\Drupal\node\NodeInterface $node, $op, \Drupal\Core\Session\AccountInterface $account) {
  if ($op == 'delete') {
    $default_pages = \Drupal::state()->get('my_module.default_pages', array());
    $default_pages_nids = array_flip($default_pages);
    if (isset($default_pages_nids[$node->id()])) {
      // Return forbidden when this is a default page.
      return AccessResult::forbidden();
    }
  }

  // No opinion by default.
  return AccessResult::neutral();
}


As you can see, the delete button is gone now.

Note: do not give people the “Bypass content access control” permission, as it bypasses this hook.

Using different nodes as frontpage for different languages

Sadly, Drupal can’t be configured to use a different front page URL per language (yet). Because of this, we needed to create a workaround. We did this by creating a route at the URL /front which finds the correct node for the current language from the default pages.
my_front_module/my_front_module.routing.yml:

my_front_module.front:
  path: '/front'
  defaults:
    _controller: '\Drupal\my_front_module\Controller\FrontController::content'
    _title_callback: '\Drupal\my_front_module\Controller\FrontController::title'
  requirements:
    _permission: 'access content'

my_front_module/src/Controller/FrontController.php:

<?php

namespace Drupal\my_front_module\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\node\Entity\Node;

class FrontController extends ControllerBase {

  /**
   * @return \Drupal\Core\Language\LanguageInterface
   */
  private static function currentLanguage() {
    $language_manager = \Drupal::languageManager();

    return $language_manager->getCurrentLanguage();
  }

  /**
   * @return \Drupal\Core\Entity\EntityInterface
   */
  public static function getFrontPageForCurrentLanguage() {
    $current_language = self::currentLanguage();

    $default_pages = \Drupal::state()->get('my_module.default_pages', array());
    $node_id = (isset($default_pages['frontpage_' . $current_language->getId()]) ? $default_pages['frontpage_' . $current_language->getId()] : NULL);

    if (!$node_id) {
      return NULL;
    }

    $node = Node::load($node_id);

    if (!$node) {
      return NULL;
    }

    return $node;
  }

  public function title() {
    $node = $this->getFrontPageForCurrentLanguage();
    if (!$node) {
      return '';
    }

    return $node->label();
  }

  public function content() {
    $node = $this->getFrontPageForCurrentLanguage();
    if (!$node) {
      return array(
        '#type' => 'markup',
        '#markup' => t('Frontpage not found!'),
      );
    }

    $view_builder = \Drupal::entityTypeManager()->getViewBuilder('node');
    $build = $view_builder->view($node, 'default');

    return $build;
  }
}

The code above looks up the front page for the current language in the default pages state. When it finds it, it renders it in the default view mode. When you configure /front as the front page for your site, you will see that it loads the correct node per language.
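To actually point the site's front page at this route, you can set it in the system.site configuration. One way to do this is with Drush (a sketch; adjust to your own deployment workflow):

```shell
drush config-set system.site page.front /front -y
```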

Making sure this trick works with the metatag module

Since Metatag loads the entity from the route, it’s not able to find the metatags for the /front page. To fix this, you will need to implement a Metatag hook:
my_front_module/my_front_module.module:

<?php

/**
 * Return node of current language so metatag attaches metatags.
 *
 * @param \Drupal\Core\Routing\CurrentRouteMatch $route
 * @return \Drupal\Core\Entity\EntityInterface|null
 */
function my_front_module_metatag_route_entity($route) {
  if ($route && $route->getRouteName() == 'my_front_module.front') {
    return \Drupal\my_front_module\Controller\FrontController::getFrontPageForCurrentLanguage();
  }
}

This hook gets called when the Metatag module tries to find the metatags for the current page. The implementation checks if the route name matches the front route name and then uses the controller class to look up the node for the current language, so the Metatag module can extract the metatags for that node.

All the code is available in the following gist: https://gist.github.com/jerbob92/51128ab3106ff49533625a237fe8bb2e (GitHub gists don't support directories, so / is replaced by __)

Mar 23 2017

Introduction

Web APIs are not just useful when making headless sites in Drupal: large Drupal sites often hold valuable information that could also be useful outside the site's context. Media companies might want to expose historical media content, community sites could show data about their community activities, e-commerce sites tend to open an API for their affiliates and partners.

While it is possible to use Drupal 7 and Drupal 8 as an API backend, a lot of the functionality that defines a mature API service does not come out of the box. In this post we will explain which key concepts you have to keep in mind when designing an API service, why they are important, and how Apigee Edge can make it easier to successfully build a full-featured API web service in Drupal.

Designing APIs: the API first strategy

In a large part of the software development industry, API first thinking is replacing the user-interface-first design approach. API first design is about planning and documenting your API before it is implemented in code. If you set up your backend service this way, you can use it with different clients regardless of how they were implemented. An API first strategy allows you to diversify user interfaces: UI developers can work without knowing how your backend service works.

Building good backend services is not easy: there are plenty of pitfalls along the road, and most of them only reveal themselves during development. Your responsibilities as a service provider grow with the number of clients:

  • maintaining the security of your services (especially if you are providing paid services),
  • handling compatibility problems between client apps and different app versions,
  • ensuring that your services are able to handle unexpected loads.

You can’t handle all of these tasks without monitoring the services. Especially for monetization, monitoring is crucial.

Features to keep in mind for building good API backend services

Security

Security is one of the most important trust signals of a mature API. A multi-layered protection system should be able to hide your non-public services from the public, handle the authorization processes, and protect the original resources from attackers.

Compatibility

Compatibility issues are the nightmares of service providers: versioning your APIs is your first step to harmony.

Scalability

Successful services have to handle an enormous number of requests every second and your services have to scale with the number of your new clients. Sometimes moving your backend to better hardware does not help, because the root of the problem is in the initial architectural decisions or implementations.

Monitoring

You need exact analytics about the usage of your API: it is indispensable for monetization purposes, plus you could use analytics data to improve your service and to understand your users' behavior.

Documentation

Good documentation is an essential part of the API service, as this is the first line of support for developers trying to understand and learn how to use the API. Developer portals often have different kinds and levels of supporting material from getting started pages to various guides, case studies, playbooks, and tutorials.

Monetization

You will need an authorization and monitoring system to efficiently track and bill customers for using your services. Exposing different resources of your APIs individually or grouped, and setting up usage limits based on these “API products” can be a time consuming task.

Companies specialized in API management solutions

You can choose from many API management technologies to build an API service, but each technology stack has its own limitations. Some companies specialize in helping you solve (a part of) the problems that might occur. Here is a non-exhaustive list of such companies as an example (company descriptions are from Crunchbase):

  • 3scale’s API management platform empowers API providers to easily package, distribute, manage and monetize APIs.
  • Apiphany provides API management and delivery solutions that enable organizations to leverage the mobile, social and app economy.
  • Layer 7 Technologies provides security and management products for API-driven integrations spanning the extended hybrid enterprise.
  • Mashery is a TIBCO company providing API management services that enable companies to leverage web services as a distribution channel.
  • StrikeIron offers a cloud-based data quality suite offering web-based infrastructure to deliver business data to internet-connected systems.
  • Apigee is the leading provider of API technology and services for enterprises and developers.

The rest of this article will focus on Apigee (recently acquired by Google). Disclaimer: Pronovix is an Apigee partner, so we are somewhat biased. However, even if we weren’t partners, we believe they are probably the best API management service provider for Drupal projects. They are not only a market leader in the space, they have also invested in a Drupal integration: Apigee Edge.

Apigee Edge: a Drupal integration for API services

Built in Java, Apigee Edge can replace or enhance complicated parts of your services. API proxies protect your services from direct customer access (they guard the backend code) and add the six key features mentioned above to your APIs. Apigee Edge manages these features in a specific way.

Policies

Apigee Edge enables you to control the behavior of your APIs (without writing a single line of code) via policies. A policy is similar to a module that implements a specific, limited function that is part of the proxy request/response flow. You can add various types of management features to an API by using policies.

Traffic management policies

Traffic management policies let you cache responses and set up traffic quotas and concurrent rate limits on your API proxies.
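As an illustration only (the policy name and limits below are hypothetical, not from the article), an Apigee Edge Quota policy is a small XML fragment along these lines:

```xml
<!-- Hypothetical Quota policy: allow at most 100 calls per hour. -->
<Quota name="Quota-PerHour">
  <Allow count="100"/>
  <Interval>1</Interval>
  <TimeUnit>hour</TimeUnit>
</Quota>
```

Attaching such a policy to a proxy endpoint enforces the limit without touching the backend code.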

Mediation policies

Mediation policies let you do custom validation and send custom messages and alerts back to your clients, independently of your backend services. Moreover, you do not need to implement separate XML serialization in your services to accept requests or send responses in XML, because the JSON to XML and XML to JSON policies can do automatic conversions between these formats.
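For example, a sketch of an XML to JSON policy that converts a backend’s XML response body to JSON could look like this (the policy name is hypothetical):

```xml
<!-- Hypothetical XMLToJSON policy: convert the XML response body to JSON. -->
<XMLToJSON name="XMLToJSON-Response">
  <Source>response</Source>
  <OutputVariable>response</OutputVariable>
  <Options/>
</XMLToJSON>
```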

Security policies

Security policies give access control management to your APIs with different types of authorization methods and protection features.

Extension policies

If you can’t find an existing policy for a special task, you can implement your own policy in Java, JavaScript or Python with the help of the extension policies, which also include policies for external logging and statistics collection.
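A sketch of such an extension policy, wrapping a hypothetical JavaScript file that runs in the proxy flow:

```xml
<!-- Hypothetical Javascript policy: run custom code from log-request.js. -->
<Javascript name="JS-LogRequest" timeLimit="200">
  <ResourceURL>jsc://log-request.js</ResourceURL>
</Javascript>
```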

Developer portals

A great developer experience is crucial for API adoption. Apigee Edge has a developer portal solution built in Drupal 7, with API documentation, forums and blog posts out of the box. An API developer portal done well is a power tool for driving the adoption of your API and building a strong community. Apigee's dev portals can be hosted either in the cloud or on-premises with Apigee Edge for Private Cloud.

We hope this introduction gave you some insight into building high-performance API web services.

Disclaimer: When we specialised Pronovix in API documentation and developer portals we started a partnership with Apigee, and we do extensive work customising the Apigee developer portal.

Interested in our research on developer portals? Subscribe to our developer portals mailing list and get notified when we publish similar content relevant to API teams.


Mar 23 2017

In my last blog post, I set out to learn more about D8's cache API. What I finally wrote about in that blog post made up only a small part of the journey taken.

Along the way I used:

  • drush cutie to build a stand-alone D8 environment
  • a custom module to build a block
  • Twig templates to output the markup
  • dependency injection for the service needed to obtain the current user's ID

It was the last one, dependency injection, that is the inspiration for this post. I have read about it, I feel like I know it when I see it, but when Adam pointed out that my sample code wasn't using dependency injection properly, I set out to right my wrong.

Briefly, Why Dependency Injection?

Using dependency injection properly:

  • Ensures decoupled functionality, which is more reusable.
  • Eases unit testing.
  • Is the preferred method for accessing and using services in Drupal 8.

So, Where Was I?

In the aforementioned D8 Cache API blog post, I was building a block Plugin and grabbing the current user's ID with the global Drupal class, based on an example from D.O.

$uid = \Drupal::currentUser()->id();

With a single line of code placed right where I needed it, I could grab the current user's ID, and it worked like a charm. However, as Adam noted, that is not the preferred way to do things.

"Many of the current tutorials on Drupal 8 demonstrate loading services statically through the \Drupal class. Developers who are used to Drupal 7 may find it faster to write that way, but using dependency injection is the technically preferred approach." - From Acquia's Lesson 8.3 - Dependency injection

Oops, guilty as charged! That global Drupal class is meant to be a bridge between old, procedural code and D8's injected, object-oriented ethos. It’s located in core/lib/Drupal.php and begins with this comment:

  /**
   * Static Service Container wrapper.
   *
   * Generally, code in Drupal should accept its dependencies via either
   * constructor injection or setter method injection. However, there are cases,
   * particularly in legacy procedural code, where that is infeasible. This
   * class acts as a unified global accessor to arbitrary services within the
   * system in order to ease the transition from procedural code to injected OO
   * code.

Further down you find the Drupal::service() method, a global method that can return any defined service. Note the comment: "Use this method if the desired service is not one of those with a dedicated accessor method below. If it is listed below, those methods are preferred as they can return useful type hints."

  /**
   * Retrieves a service from the container.
   *
   * Use this method if the desired service is not one of those with a dedicated
   * accessor method below. If it is listed below, those methods are preferred
   * as they can return useful type hints.
   *
   * @param string $id
   *   The ID of the service to retrieve.
   *
   * @return mixed
   *   The specified service.
   */
  public static function service($id) {
    return static::getContainer()->get($id);
  }


To determine what is meant by “...those with a dedicated accessor method below.”, refer to the methods that mention returning services, like "Returns the time service." on Drupal::time() or "Returns the form builder service." on Drupal::formBuilder(). These dedicated accessor methods are preferred because they can return useful type hints; for instance, Drupal::time() is documented with @return \Drupal\Component\Datetime\TimeInterface.
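To make the difference concrete, here is a small sketch (not project code, and it needs a booted Drupal site to run) contrasting the generic and the dedicated accessor for the current_user service:

```php
<?php
// Generic accessor: works for any service ID, but the documented
// return type is mixed, so your IDE can't help you.
$account = \Drupal::service('current_user');

// Dedicated accessor: same service, but documented to return
// \Drupal\Core\Session\AccountProxyInterface, a useful type hint.
$account = \Drupal::currentUser();
$uid = $account->id();
```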

Okay, so that’s some explanation of the global Drupal class, but I wanted to use proper dependency injection! I figured I would find a quick dependency injection example, alter it for my needs and finish off the blog post. Boy, was I in for a surprise.

The Journey

I found examples and read articles, but nothing seemed to work. Adam even sent me some sample code from another project, but it didn't help me. I focused on the wrong things or couldn't see the forest for the trees. Based on the examples, I couldn't figure out how to use dependency injection AND get the current user's ID! What was I doing wrong?

Maybe I was passing the wrong type of service into my __construct method?

public function __construct(WrongService $wrong_service) {

Maybe I was missing a crucial use statement?

use Drupal\Core\Session\RightService;

I always felt one step away from nirvana, but boy, was I taking a lot of wrong steps along the way. The structure of my code matched the examples, but the services I tried never returned the user ID I needed.

Troubleshooting in the Dark

My troubleshooting focused on all the use statements. I thought I hadn't pulled the right one(s) into my code - use ... this, use ... that. I always got the error ...expecting this, got NULL. I would Google the error message, find some other examples, try them, same error or different error. Which use statement was I missing?! I was working with block Plugin code. Wait, did the dependency injection have to be in the Controller and not the Plugin? Why not just in the Plugin? When did the room get dark? How long have I been standing in this dark room? What time is it?

Finally, I took a deep breath and focused in on the fact that I was building a Plugin. I Googled "dependency injection in plugins". Eureka! The very first result was titled Lesson 11.4 - Dependency injection and plugins (link below) and the results blurb had these crucial sentences:

"Plugins are the most complex component to add dependency injection to. Many plugins don't require dependency injection, making it sometimes challenging to find examples to copy." - From Acquia's Lesson 11.4 - Dependency injection and plugins

I reiterate: ...making it sometimes challenging to find examples to copy.

"The key to making plugins use dependency injection is to implement the ContainerFactoryPluginInterface. When plugins are created, the code first checks if the plugin implements this interface. If it does, it uses the create() and __construct() pattern, if not, it uses just the __construct() pattern." - From Acquia's Lesson 11.4 - Dependency injection and plugins

“In order to have your new plugin manager use the create pattern, the only thing your plugins need is to implement the ContainerFactoryPluginInterface.” - From https://www.lullabot.com/articles/injecting-services-in-your-d8-plugins

(Speaking of the plugin manager and its create pattern, you can also take a gander at FormatterPluginManager::createInstance)

So, I was incorrectly using this standard Block definition:

class HeyTacoBlock extends BlockBase {

When I should have also been adding implements ContainerFactoryPluginInterface as follows:

class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

And this must be why I get paid the big bucks; not because I know everything, but because I submit myself to the sublime torture of failing at the simplest of things!

In the vein of keeping it simple, when you are dealing with injecting services in plugin classes, you need to implement an additional interface - ContainerFactoryPluginInterface - and if you look at that interface, you see the create() method has extra parameters.

The Plugin create() Method With Extra Parameters from ContainerFactoryPluginInterface

public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition)

Compare the Above With a Typical Controller’s create() Method

public static function create(ContainerInterface $container)

You Thought I was Done?

Just to drill the point home and help it sink in, here’s a full example of dependency injection in a Block Plugin. Note the use statements, the implements ContainerFactoryPluginInterface and the four parameters in the create() method. At the end of the day, this was all done to get the user’s account information into the __construct() method, using the $account variable, like so: $this->account = $account;. After that, I can grab the user ID anywhere in my class with a simple $user_id = $this->account->id();

<?php

namespace Drupal\heytaco\Plugin\Block;

use Drupal\Core\Session\AccountProxyInterface;
use Drupal\Core\Block\BlockBase;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Provides a Hey Taco Results Block
 *
 * @Block(
 *   id = "heytaco_block",
 *   admin_label = @Translation("HeyTaco! Leaderboard"),
 * )
 */
class HeyTacoBlock extends BlockBase implements ContainerFactoryPluginInterface {

  /**
   * @var $account \Drupal\Core\Session\AccountProxyInterface
   */
  protected $account;

  /**
   * @param \Symfony\Component\DependencyInjection\ContainerInterface $container
   * @param array $configuration
   * @param string $plugin_id
   * @param mixed $plugin_definition
   *
   * @return static
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $container->get('current_user')
    );
  }

  /**
   * @param array $configuration
   * @param string $plugin_id
   * @param mixed $plugin_definition
   * @param \Drupal\Core\Session\AccountProxyInterface $account
   */
  public function __construct(array $configuration, $plugin_id, $plugin_definition, AccountProxyInterface $account) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->account = $account;
  }

}

Compare all of the above with this example of dependency injection in a Controller (as opposed to a Plugin), where only AccountInterface is injected directly into the __construct() method. The reason it seems simpler is that ControllerBase already implements ContainerInjectionInterface for you, and there are fewer inherited arguments to pass through create().

<?php

namespace Drupal\heytaco\Controller;

use Drupal\Core\Session\AccountInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\Core\Controller\ControllerBase;

/**
 * Controller routines for HeyTaco! block.
 */
class HeyTacoController extends ControllerBase {

  protected $account;

  public function __construct(AccountInterface $account) {
    $this->account = $account;
  }

  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('current_user')
    );
  }

}

To Summarize

I wrote this blog post for two reasons:

  1. To get more Drupal 8 Plugin dependency injection content out there to help the next person find what he/she's Googling for.

  2. To vent the frustration of being a Drupal expert and feeling like a dolt.

So remember, when you're dealing with dependency injection in Drupal 8 plugins, there's an additional interface to implement and some extra parameters in the create() method!

Mar 23 2017

Time for a little confession: I didn't intend to showcase DrupalVM as a DIY Drupal hosting solution when I conceived this series. Jeff Geerling, DrupalVM's creator, hinted at using DrupalVM as a viable solution for small to medium sites in the first post of the series. It was an idea worth exploring, and the result is this post.

DrupalVM is designed to spin up full-stack Drupal setups on local VMs for testing and development. It solves a major pain point for developers: production parity, a.k.a. the It Works on My Machine™ problem. In my early development days, I would fix a bug, only to find that it broke in production because, well, the PHP version differed from the one on my local machine, or I would spend hours figuring out why a piece of code wasn't working in production because the caching settings were different from my local setup. The answer is to have your dev setup match your production setup as closely as possible, down to a T. Yeah, I hear you: what if another project I'm working on uses a different set of settings, or, even worse, an older version of PHP? You're out of luck, and it's time to get rid of your LAMP/MAMP setup and use DrupalVM.

The biggest strength of DrupalVM is its configurability: it has enough settings to choke a whale, and more. DrupalVM is written using a configuration management tool called Ansible, which lets users specify the finer aspects of their Drupal site using YAML, a human-friendly declarative language. The Ansible convention is to declare a set of variables in YAML, which are parsed and a set of tasks executed (e.g. install MySQL, install Drush, update Composer). Such collections of tasks are called playbooks in Ansible lingo. DrupalVM, in technical terms, is a highly configurable Ansible playbook to set up and deploy Drupal onto any infrastructure. "Any infrastructure" means this could be a Vagrant machine (the default use case for DrupalVM), a DigitalOcean/Linode VPS, or your own server; the only prerequisite is that the target machine has SSH access. This is where Ansible scores over other configuration management systems like Chef and Puppet: Ansible has an "agentless architecture". In other words, the target machines need not have any agent or tool installed for Ansible scripts to work.
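To give a flavor of what a playbook looks like, here is a hypothetical minimal example (not DrupalVM's actual playbook): it installs a web server on every host in a "drupalvm" inventory group.

```yaml
# Hypothetical minimal playbook: variables at the top, tasks below.
- hosts: drupalvm
  become: true
  vars:
    webserver_package: apache2
  tasks:
    - name: Install the web server
      apt:
        name: "{{ webserver_package }}"
        state: present
```

DrupalVM's real playbook is the same idea, scaled up: hundreds of variables driving dozens of roles and task files.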

Installing DrupalVM involves cloning the official repository; the only requirement is to have Ansible installed on your system. The repository contains a default.config.yml, which is pretty much the only file you need to go through and play around with to set up your site. There are actually two more files you need to edit, which is trivial and I'll cover it in a bit. The documentation has instructions for making DrupalVM work on a Vagrant machine; I'll walk through the hosting/production deployment part alone.

The DrupalVM repo contains a main playbook.yml inside the provisioning directory. This reads configuration from a config.yml written by you under a config_dir folder. The playbook assembles various roles (modular collections of Ansible tasks) and other task files dynamically by reading the values in the config file. For example, it will add the apache role and install Apache if Apache is configured as the webserver of choice, and so on. That's the gist of how DrupalVM works.

To demonstrate the DIY hosting capabilities of DrupalVM, I've written a set of customized sample config.yml files for a couple of sites. This is not the only way to do things, but it is a recommended practice for maintaining your production infrastructure configuration. Clone the repo used to illustrate this example.

$ git clone git@github.com:badri/DIY-hosting-drupalvm.git deployment

The repo is organized on a per-site basis. Each folder (named after the site's TLD) contains a customized config.yml holding only the overridden values; the defaults are picked up from default.config.yml in the DrupalVM repo. This is pretty much the blueprint of your site. There is also a per-site inventory file, an Ansible convention specifying which machines to run all our tasks and playbooks on. This can be a list of IPs or hostnames, organized into groups, and follows an INI-ish format.

[all:vars]
ansible_python_interpreter=/usr/bin/python3

[drupalvm]
staging.lakshminp.com ansible_ssh_user=root

I specify a global variable called ansible_python_interpreter, which tells Ansible which Python executable to use on the remote machine (Ansible is written in Python, BTW). This is done explicitly because all our remote machines run Ubuntu 16.04, where the default Python version is 3.5.2, whereas Ansible works with Python 2.x by default. Hopefully, this will go away sometime soon and there won't be any need for this piece of configuration. Read up more about Ansible inventories if you're curious.

The other file common across all sites is ansible.cfg. As the name suggests, it contains configuration needed while running Ansible playbooks.

[defaults]
host_key_checking=false

[ssh_connection]
ssh_args = -o ForwardAgent=yes -o StrictHostKeyChecking=no

Arguably, the most important piece here is the ForwardAgent=yes part. When running one of the playbooks, I specify a Git repo to clone my site from. This will most likely be a private repository, which means you would need a pair of SSH keys on the target machine to authorize the git clone command. That can get messy quickly, as you would have to create and move around new key pairs every time you deploy to a new machine. ForwardAgent forwards the SSH agent of the machine where you are running the playbooks, which means no more clumsy key management.

Onto the scripts themselves. Let's take staging.lakshminp.com/config.yml and dissect it. First, I specify the domain where we will deploy the site. NOTE that before you run Ansible, you should have a bare Ubuntu 16.04 server running, with the domain pointing to it. If you are using DigitalOcean, it should look something like this:

drupal_domain: "staging.lakshminp.com"

[Image: Rebuild DigitalOcean image]

[Image: DigitalOcean DNS configuration]

Then comes the apache_vhosts configuration. Though this is specified in the same manner as in default.config.yml, I'm pruning the other entries in the vhost configuration and keeping only the Drupal stuff, hence the overwrite. This is followed by the Drupal and PHP versions.

apache_vhosts:
  - servername: "{{ drupal_domain }}"
    documentroot: "{{ drupal_core_path }}"
    extra_parameters: |
          ProxyPassMatch ^/(.*\.php(/.*)?)$ "fcgi://127.0.0.1:9000{{ drupal_core_path }}"

drupal_major_version: 7
php_version: "5.6"

DrupalVM can install a lot of add-on packages, like Varnish, Solr, Redis, Drupal Console, etc. For this site, we just stick with Drush; can't do Drupal without it :)

installed_extras:
  - drush

We then specify the various credentials used in the stack, like the Drupal admin password and the DB username/password. NOTE that this is NOT how you specify credentials in Ansible; I'll cover the proper way to do it in another blog post. We are just exposing them here for pedagogical purposes. Please don't specify your actual credentials and check in this code; you WILL be fired from your job!

drupal_account_pass: admin
drupal_db_password: drupal
mysql_root_password: root

There are a couple of not-so-obvious gems in DrupalVM called the pre and post provision tasks.

pre_provision_tasks_dir: '{{config_dir }}/pre.yml'
post_provision_tasks_dir: '{{config_dir }}/post.yml'

These can be specified as either shell scripts or Ansible tasks; I prefer the latter. The 'pre' tasks are run before any other task in DrupalVM, and the 'post' tasks are run after all the required packages are configured and installed. For this site, I want to set it up from a git repo and install from a DB snapshot, so I set drupal_install_site: false and add a bunch of custom variables.

git_repo: 'git@github.com:badri1/lakshminp.git'
git_branch: 'master'
db_snapshot: 'https://s3.amazonaws.com/drupaldbexports/db-03-20-2017.sql.gz'
prod_url: "https://lakshminp.com"

This is pretty much what I do in the post-provision file.

- name: "{{ drupal_domain }} | Install Drupal with drush."
  command: >
    {{ drush_path }} site-install {{ drupal_install_profile | default('standard') }} -y
    --site-name="{{ drupal_site_name }}"
    --account-name={{ drupal_account_name }}
    --account-pass={{ drupal_account_pass }}
    --db-url={{ drupal_db_backend }}://{{ drupal_db_user }}:{{ drupal_db_password }}@localhost/{{ drupal_db_name }}
    {{ drupal_site_install_extra_args | default([]) | join(" ") }}
    -r {{ drupal_core_path }}
  when: db_snapshot is undefined and not drupal_install_site

- include: "{{config_dir }}/install-from-db.yml"
  when: db_snapshot is defined
  static: no

The drush site-install won't execute for this site, as I've defined a DB snapshot; the install-from-db.yml file is included instead. The convenience of Ansible is that you can modularize a set of related tasks and include them as the context demands, as in the example above.

The pre-provision file just installs Git and clones the repo.

---
- name: "{{ drupal_domain }} | Install dependencies"
  apt: pkg={{ item }} state=installed
  with_items:
    - python3-mysqldb
    - git

- name: "{{ drupal_domain }} | Clone repo"
  git: repo={{ git_repo }}
       version={{ git_branch }}
       dest={{ drupal_core_path }}
       accept_hostkey=yes

The python3-mysqldb package is another Ansible-Python 3 nuance. The git clone will work because of the ForwardAgent trick we talked about earlier. Let's give our new setup a spin by running the playbook. Make sure you are running this inside the DrupalVM repo you cloned.

$ env ANSIBLE_CONFIG=~/deployment/ansible.cfg ansible-playbook -i ~/deployment/staging.lakshminp.com/inventory  provisioning/playbook.yml  --extra-vars="config_dir=~/deployment/staging.lakshminp.com"

Here, ~/deployment is the directory into which you cloned the DIY-hosting-drupalvm repository. We specify the non-standard location of ansible.cfg first, then indicate the location of the inventory file using the -i flag, and lastly tell Ansible where config_dir resides. Go grab a coffee while Ansible grinds away at your site; the first run might take a good 15 minutes.

The second site installs plain vanilla Drupal 8 on a PHP 7 stack served by Nginx.

$ env ANSIBLE_CONFIG=~/deployment/ansible.cfg ansible-playbook -i ~/deployment/example-site1.com/inventory  provisioning/playbook.yml  --extra-vars="config_dir=~/deployment/example-site1.com"

Any steps specific to this site can be added in its own post-provision script. The post-provision script is the way to extend DrupalVM's functionality without hacking its core. A simple example would be applying Drupal module updates. We could write a set of tasks to:

  • Put the site in maintenance mode using drush.
  • Take a backup of the site.
  • Switch the code to the latest/appropriate commit, tag or branch.
  • Apply updates using drush.
  • Ensure that the update is successful. You could define your own set of tests for this, or capture the output of the drush updb command and validate accordingly.
  • Run the eternal drush cache-clear command.
  • Switch off maintenance mode using drush.

Ansible has the concept of tagging tasks, as in,

- name: "{{ drupal_domain }} | Set the site in maintenance mode."
  command: >
    {{ drush_path }} vset maintenance_mode 1
    -r {{ drupal_core_path }}
  tags:
    - updates

We could tag all the above tasks with the "updates" tag and run only those tasks in Ansible. So, to update example-site1.com, effectively:

$ env ANSIBLE_CONFIG=~/deployment/ansible.cfg ansible-playbook -i ~/deployment/example-site1.com/inventory  provisioning/playbook.yml  --extra-vars="config_dir=~/deployment/example-site1.com" --tags="updates"

DrupalVM, though lacking a neat dashboard, helps you manage all your Drupal infrastructure using a single codebase without leaving the comfort of your command line or editor. You can also extend all the ops you do on your sites by writing new tasks and "ansibilizing" them, all while maintaining development-production parity. You should give DrupalVM a spin if every one of your sites has config as unique as a snowflake and you don't want to invest in complicated tools or manpower. It's just Ansible and YAML files. Can't get simpler than that!

Mar 23 2017

It's not over yet: there are still Druplicons that need to be presented. After already exploring the fields of Humans and Superhumans, Fruits and Vegetables, Animals, Outdoor Activities and National Identities, it's now time to look at the field of emotions and see which emotions are shown by Drupal Logos.

After expecting to find many Druplicons in the area of national identities, we came up with the idea of exploring something more challenging. After some thought, we decided it was time to look at the area of emotions. After all, Druplicon was designed with a mischievous smile, so it hardly ever looks serious (and maybe shows something less pleasant as well). Despite that, it can show a lot of other emotions. Here are some, according to the Human-Machine Interaction Network on Emotion (HUMAINE), which classifies 48 emotions.

Sad Druplicon (SADCamp 2011)

Sad Druplicon


Hurt Druplicon

Hurt Drupal Logo

Happy Druplicon (GLAD Camp 2014)

Happy Druplicon

Relaxed Druplicon

Relaxed Drupal Logo

Angry Druplicon (BADCamp)

Angry Drupal Logo

We are aware that we already used this one as a pirate, but hey, if Druplicon can have more functions, why not present them all?

Annoying Druplicon

Annoying Drupal Logo

Irritated Druplicon

Irritated Drupal Logo

You responded extraordinarily last time. When we asked you to find any of the missing Drupal Logos representing national identities, Chimezie Chuta, Dragan Eror and Sine informed us on Twitter about the Nigerian, Serbian and Polish Druplicons. Moreover, Dark Dim and Chimezie Chuta even found missing Drupal Logos from the fields of Outdoor Activities and Humans. Thank you all!

By now, we think you already know the rules. Find any Druplicon showing an emotion that we did not cover here and post it to our Twitter account. If you do, you'll be mentioned in our next blog post about Drupal Logos.

Mar 23 2017

I have had many differences with the Drupal Association in the past, starting with the many clashes we had with their erstwhile leadership when we were organising DrupalCon Copenhagen 2010, so I’ll admit I wasn’t their biggest fan before the latest events.

Yesterday evening, I learned that the DA leadership, led by Dries Buytaert himself, have taken it upon themselves to declare one of Drupal’s most well-known (and well-loved) community members, Larry Garfield, “persona non grata”, because they disapprove of things he does in his private life. You can read his own explanation, but suffice it to say that he likes to engage in BDSM-related practises with consenting adults.

He has kept this a secret, since many do not approve of or understand such things. I will not claim to understand them myself, but I am of the firm opinion that as long as people don't involve unwilling or unwitting participants, their private lives are none of my (or anyone else’s) business.

To say that condemning someone for their sexual orientation is against the stated values of the Drupal community would be the understatement of the year. For years, the community has been striving to become more inclusive, implementing a strict Code of Conduct that states:

Everyone can make a valuable contribution to Drupal. We may not always agree, but disagreement is no excuse for poor behaviour and poor manners.

and

We will not tolerate bullying or harassment of any member of the Drupal community.

If spreading rumors about someone’s private affairs isn’t bullying and harassment, I don’t know what is. But in this instance, the DA has chosen to punish the victim, rather than the bully.

The bully

If Larry’s words are to be believed (and we have no reason not to, especially since his accusers are not denying them), this campaign against him has involved a good amount of cyber-stalking, breach of privacy and trust, harassment, and spreading of malicious rumors. But the person committing said deeds has not been sanctioned.

I shall not name said person here, but I shall say that I despise his actions, and (as the ancient curse goes) I wish that his hindquarters may itch, and his arms be too short to scratch.

The thought police response

Dries replies, in a post ironically titled “Living our values”:

A few weeks ago, I privately asked Larry Garfield, a prominent Drupal contributor, to leave the Drupal project. I did this because it came to my attention that he holds views that are in opposition with the values of the Drupal project.

and furthermore

The Gorean philosophy promoted by Larry is based on the principle that women are evolutionarily predisposed to serve men and that the natural order is for men to dominate and lead.

Not only is the latter a gross misrepresentation of Larry’s views, it is also completely false that Larry has been promoting said philosophy in the context of the Drupal community. And what he does privately, elsewhere, is not for the Drupal community to adjudicate. On the contrary, he has taken care to keep said philosophy completely separate from his work in the Drupal community.

So he’s being punished, not for something he’s done, but for the words he’s shared privately, with people who share his world view.

Dries and the Drupal Association have thus decided to act as the thought police. They will determine which opinions you are not allowed to have, and regardless of your actions, if they come to know of your naughty thoughts, by any and all means, you too will be banned.

The message is clear. If you have any kind of unsavoury thought, better hope Dries and his watchful arbiters do not learn of it, because if they do, you’ll be branded as outcast for your heretical thoughts.

Maybe it’s time to step down, Dries?

You have presided over a great many things as Drupal Association chairman, Dries. Financial mismanagement of several Drupal events that cost the DA dearly. The subsequent cover-up of said mismanagement. Many smaller controversies.

A great many things have gone wrong, and it's been my impression that the DA could do with some more oversight. Maybe it’s time we found a chairman who’s less busy running a big corporation, not so concerned with regulating people's thoughts, and a little more concerned with providing proper oversight for the important organisation that the DA is, and for a truly open and welcoming community that's open to anyone, no matter what they do and think in private.

Mar 23 2017

The Drupal community is committed to welcoming and accepting all people. That includes a commitment not to discriminate against anyone based on their heritage or culture, their sexual orientation, their gender identity, and more. Diversity is a strength, and as such we work hard to foster a culture of open-mindedness toward differences.

A few weeks ago, I privately asked Larry Garfield, a prominent Drupal contributor, to leave the Drupal project. I did this because it came to my attention that he holds views that are in opposition with the values of the Drupal project.

I had hoped to avoid discussing this decision publicly out of respect for Larry's private life, but now that Larry has written about it on his blog and it is being discussed publicly, I believe I have no choice but to respond on behalf of the Drupal project.

It is not for me to share any of the confidential information that I've received, so I won't point out the omissions in Larry's blog post. However, I can tell you that those who have reviewed Larry's writing, including me, suffered from varying degrees of shock and concern.

In the end, I fundamentally believe that all people are created equally. This belief has shaped the values that the Drupal project has held since its early days. I cannot in good faith support someone who actively promotes a philosophy that is contrary to this. The Gorean philosophy promoted by Larry is based on the principle that women are evolutionarily predisposed to serve men and that the natural order is for men to dominate and lead.

While the decision was unpleasant, the choice was clear. I remain steadfast in my obligation to protect the shared values of the Drupal project. This is unpleasant because I appreciate Larry's many contributions to Drupal, because this risks setting a complicated precedent, and because it involves a friend's personal life. The matter is further complicated by the fact that this information was shared by others in a manner I don't find acceptable either.

It's not for me to judge the choices anyone makes in their private life or what beliefs they subscribe to. I certainly don't take offense to the role-playing activities of Larry's alternative lifestyle. However, when a highly-visible community member's private views become public, controversial, and disruptive for the project, I must consider the impact that his words and actions have on others and the project itself. In this case, Larry has entwined his private and professional online identities in such a way that it blurs the lines with the Drupal project. Ultimately, I can't get past the fundamental misalignment of values.

First, collectively, we work hard to ensure that Drupal has a culture of diversity and inclusion. Our goal is not just to have a variety of different people within our community, but to foster an environment of connection, participation and respect. We have a lot of work to do on this and we can't afford to ignore discrepancies between the espoused views of those in leadership roles and the values of our culture. It's my opinion that any association with Larry's belief system is inconsistent with our project's goals.

Second, I believe someone's belief system inherently influences their actions, in both explicit and subtle ways, and I'm unwilling to take this risk going forward.

Third, Larry's continued representation of the Drupal project could harm the reputation of the project and cause harm to the Drupal ecosystem. Any further participation in a leadership role implies our community is complicit with and/or endorses these views, which we do not.

It is my responsibility and obligation to act in the best interest of the project at large and to uphold our values. Decisions like this are unpleasant and disruptive, but important. It is moments like this that test our commitment to our values. We must stand up and act in ways that demonstrate these values. For these reasons, I'm asking Larry to resign from the Drupal project.

(Comments on this post are allowed but for obvious reasons will be moderated.)

Mar 23 2017

Preface

We recently had the opportunity to work on a Symfony app for one of our Higher Ed clients, for whom we had previously built a Drupal distribution. Drupal 8’s move to Symfony has enabled us to expand our service offering, and we have found more opportunities to build apps directly with Symfony when a CMS is not needed. This post is not about Drupal, but we’re cross-posting it to Drupal Planet to demonstrate the value of getting off the island. Enjoy!

Writing custom authentication schemes in Symfony used to be on the complicated side. But with the introduction of the Guard authentication component, it has gotten a lot easier.

One of our recent projects required us to interface with Shibboleth to authenticate users into the application. The application was written in Symfony 2 and used this bundle to authenticate with Shibboleth sessions. However, since we were rewriting everything in Symfony 3, which the bundle is not compatible with, we had to look for a different solution. Fortunately for us, the built-in Guard authentication component turned out to be sufficient, allowing us to drop a bundle dependency and write only one class. Really neat!

How Shibboleth authentication works

One way Shibboleth provisions a request with an authenticated entity is by setting a “remote user” environment variable that the web server and/or the applications behind it can read.

There is obviously more to Shibboleth than that; it has to do a fair amount of work to perform the actual authentication process. We defer all the heavy lifting to the mod_shib Apache2 module, and rely on the availability of the REMOTE_USER environment variable to identify the user.

That is pretty much all we really need to know; now we can start writing our custom Shibboleth authentication guard:

<?php

namespace AppBundle\Security\Http;

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Generator\UrlGeneratorInterface;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Guard\AbstractGuardAuthenticator;
use Symfony\Component\Security\Http\Logout\LogoutSuccessHandlerInterface;

class ShibbolethAuthenticator extends AbstractGuardAuthenticator implements LogoutSuccessHandlerInterface
{
    /**
     * @var string
     */
    private $idpUrl;

    /**
     * @var string|null
     */
    private $remoteUserVar;

    /**
     * @var UrlGeneratorInterface
     */
    private $urlGenerator;

    public function __construct(UrlGeneratorInterface $urlGenerator, $idpUrl, $remoteUserVar = null)
    {
        $this->idpUrl = $idpUrl;
        $this->remoteUserVar = $remoteUserVar ?: 'HTTP_EPPN';
        $this->urlGenerator = $urlGenerator;
    }

    protected function getRedirectUrl()
    {
        return $this->urlGenerator->generate('shib_login');
    }

    /**
     * @param Request $request The request that resulted in an AuthenticationException
     * @param AuthenticationException $authException The exception that started the authentication process
     *
     * @return Response
     */
    public function start(Request $request, AuthenticationException $authException = null)
    {
        $redirectTo = $this->getRedirectUrl();
        if (in_array('application/json', $request->getAcceptableContentTypes())) {
            return new JsonResponse(array(
                'status' => 'error',
                'message' => 'You are not authenticated.',
                'redirect' => $redirectTo,
            ), Response::HTTP_FORBIDDEN);
        } else {
            return new RedirectResponse($redirectTo);
        }
    }

    /**
     * @param Request $request
     *
     * @return mixed|null
     */
    public function getCredentials(Request $request)
    {
        if (!$request->server->has($this->remoteUserVar)) {
            return;
        }

        $id = $request->server->get($this->remoteUserVar);

        if ($id) {
            return array('eppn' => $id);
        } else {
            return null;
        }
    }

    /**
     *
     * @param mixed $credentials
     * @param UserProviderInterface $userProvider
     *
     * @throws AuthenticationException
     *
     * @return UserInterface|null
     */
    public function getUser($credentials, UserProviderInterface $userProvider)
    {
        return $userProvider->loadUserByUsername($credentials['eppn']);
    }

    /**
     * @param mixed $credentials
     * @param UserInterface $user
     *
     * @return bool
     *
     * @throws AuthenticationException
     */
    public function checkCredentials($credentials, UserInterface $user)
    {
        return true;
    }

    /**
     * @param Request $request
     * @param AuthenticationException $exception
     *
     * @return Response|null
     */
    public function onAuthenticationFailure(Request $request, AuthenticationException $exception)
    {
        $redirectTo = $this->getRedirectUrl();
        if (in_array('application/json', $request->getAcceptableContentTypes())) {
            return new JsonResponse(array(
                'status' => 'error',
                'message' => 'Authentication failed.',
                'redirect' => $redirectTo,
            ), Response::HTTP_FORBIDDEN);
        } else {
            return new RedirectResponse($redirectTo);
        }
    }

    /**
     * @param Request $request
     * @param TokenInterface $token
     * @param string $providerKey The provider (i.e. firewall) key
     *
     * @return Response|null
     */
    public function onAuthenticationSuccess(Request $request, TokenInterface $token, $providerKey)
    {
        return null;
    }

    /**
     * @return bool
     */
    public function supportsRememberMe()
    {
        return false;
    }

    /**
     * @param Request $request
     *
     * @return Response never null
     */
    public function onLogoutSuccess(Request $request)
    {
        $redirectTo = $this->urlGenerator->generate('shib_logout', array(
            'return'  => $this->idpUrl . '/profile/Logout'
        ));
        return new RedirectResponse($redirectTo);
    }
}

Let’s break it down:

  1. class ShibbolethAuthenticator extends AbstractGuardAuthenticator ... - We’ll extend the built-in abstract class to take care of the non-Shibboleth-specific plumbing required.

  2. __construct(...) - As you would guess, we are passing in everything the authentication guard needs to work: the Shibboleth iDP URL, the remote user variable to check, and the URL generator service, which we need later.

  3. getRedirectUrl() - This is just a convenience method which returns the Shibboleth login URL.

  4. start(...) - This is where everything begins; this method is responsible for producing a response that drives the user to authenticate. Here, we either 1.) redirect the user to the Shibboleth login page; or 2.) if the client expects application/json content back, produce a JSON response telling consumers that the request is forbidden. In that case, the payload conveniently informs consumers where to go to start authenticating via the redirect property. Our front-end application knows how to handle this.

  5. getCredentials(...) - This method is responsible for extracting authentication credentials from the HTTP request, e.g. a username and password, a JWT in the Authorization header, etc. Here, we are interested in the remote user environment variable that mod_shib might have set for us. It is important to check that the environment variable is actually not empty, because mod_shib will still set it, but leave it empty, for unauthenticated sessions.

  6. getUser(...) - Here we receive the credentials that getCredentials(...) returned and construct a user object from them. The user provider configured for the firewall, whatever it may be, is also passed into this method.

  7. checkCredentials(...) - Following the getUser(...) call, the security component will call this method to actually verify whether or not the authentication attempt is valid. For example, in form logins, this is where you would typically check the supplied password against the encrypted credentials in the data store. However, we only need to return true unconditionally, since we trust Shibboleth to filter out invalid credentials and only let valid sessions through to the application. In short, we are already expecting a pre-authenticated request.

  8. onAuthenticationFailure(...) - This method is called whenever our authenticator reports invalid credentials. This shouldn’t really happen in the context of a pre-authenticated request, as we entrust the process entirely to Shibboleth, but we’ll fill this in with something reasonable anyway. Here we are simply replicating what start(...) does.

  9. onAuthenticationSuccess(...) - This method gets called when the credentials check out, which is all the time. We don’t have to do anything but let the request go through. Theoretically, this is where we could bootstrap the token with certain roles depending on other Shibboleth headers present in the Request object, but we don’t need to do that in our application.

  10. supportsRememberMe(...) - We don’t care about supporting “remember me” functionality, so no, thank you!

  11. onLogoutSuccess(...) - This is technically not part of the Guard authentication component, but of logout handling. You can see that our ShibbolethAuthenticator class also implements LogoutSuccessHandlerInterface, which allows us to register it as a listener to the logout process. This method is responsible for clearing out Shibboleth authentication data after Symfony has cleared the user token from the system. To do this, we just redirect the user to the proper Shibboleth logout URL, seeding the return parameter with the nice logout page on the Shibboleth iDP instance.

Configuring the router: shib_login and shib_logout routes

We’ll update app/config/routing.yml:

# app/config/routing.yml

shib_login:
  path: /Shibboleth.sso/Login

shib_logout:
  path: /Shibboleth.sso/Logout

You may be asking yourself why we even bother creating known routes for these when we could just as easily hard-code the values in our guard authenticator.

Great question! The answer is that we want to be able to point these to an internal login form for local development, where authenticating against Shibboleth has no value and may not even be possible. This allows us to override the shib_login path to /login within routing_dev.yml so that the application will redirect us to the proper login URL in our dev environment.
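As an illustrative sketch of that override (the controller reference below is an assumption; point it at whatever action renders your local login form):

```yaml
# app/config/routing_dev.yml
# Sketch only: overrides the shib_login route for local development.
# The _controller value is hypothetical.
shib_login:
  path: /login
  defaults:
    _controller: AppBundle:Security:login
```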

We really can’t point shib_logout to /logout, though, as that would result in an infinite redirect loop. Instead, we override it in routing_dev.yml to go to a very simple controller action that replicates the external behavior of Shibboleth’s logout URL:

<?php

... 

  public function mockShibbolethLogoutAction(Request $request)
  {
      $return = $request->get('return');

      if (!$return) {
          return new Response("`return` query parameter is required.", Response::HTTP_BAD_REQUEST);
      }

      return $this->redirect($return);
  }
}

Configuring the firewall

This is the last piece of the puzzle; putting all these things together.

########################################################
# 1.  We register our guard authenticator as a service #
########################################################

# app/config/services.yml

services:
  app.shibboleth_authenticator:
    class: AppBundle\Security\Http\ShibbolethAuthenticator
    arguments:
      - "@router"
      - "%shibboleth_idp_url%"
      - "%shibboleth_remote_user_var%"

...

##########################################################################
# 2. We configure Symfony to read security_dev.yml for dev environments. #
##########################################################################

# app/config/config_prod.yml

imports:
  - { resource: config.yml }
  - { resource: security.yml }

...

# app/config/config_dev.yml
imports:
  - { resource: config.yml }
  - { resource: security_dev.yml } # Dev-specific firewall configuration

...


#####################################################################################
# 3. We configure the app to use the `guard` component and our custom authenticator #
#####################################################################################

# app/config/security.yml

security:
  firewalls:
    main:
      stateless: true
      guard:
        authenticators:
          - app.shibboleth_authenticator

      logout:
          path: /logout
          success_handler: app.shibboleth_authenticator

...

#####################################################
# 4. Configure dev environments to use `form_login` #
#####################################################

# app/config/security_dev.yml
security:
  firewalls:
    main:
      stateless: false
      form_login:
        login_path: shib_login
        check_path: shib_login
        target_path_parameter: return

The star here is really just what’s in the security.yml file, specifically the guard section; that’s how simple it is to support custom authentication via the Guard authentication component! It’s just a matter of pointing it to the service, and it will be hooked up for us.

The logout configuration tells the application to use the /logout path to initiate the logout process, which will eventually call our service to clean up after ourselves.

You’ll also notice that we have a security_dev.yml file that config_dev.yml imports. This isn’t how the Symfony 3 framework ships, but it allows us to override the firewall configuration specifically for dev environments. Here, we add the form_login authentication scheme to support logging in via an in-memory user provider (not shown). The authentication guard will redirect us to the in-app login form instead of the Shibboleth iDP during development.
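For completeness, a minimal sketch of what such an in-memory provider might look like (the provider name, username, and password here are invented):

```yaml
# app/config/security_dev.yml -- illustrative only
security:
  # Plaintext encoder so the in-memory passwords below work as-is.
  encoders:
    Symfony\Component\Security\Core\User\User: plaintext

  providers:
    in_memory:
      memory:
        users:
          devuser:
            password: devpass
            roles: 'ROLE_USER'
```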

Also note the stateless configuration difference between prod and dev: we want to keep the firewall in production environments stateless, which just means that our guard authenticator is consulted on every request. This ensures users are actually logged out of the application whenever they are logged out of the Shibboleth iDP, i.e. when they quit the web browser, etc. However, we need the firewall to be stateful during development, otherwise form_login authentication will not work as expected.

Conclusion

I hope I was able to illustrate how versatile the Guard authentication component in Symfony is. What used to require multiple classes to be written and wired together now only requires a single class to implement, and it’s very trivial to configure. The Symfony community has really done a great job at improving the Developer Experience (DX).

Pre-authenticating requests via environment variables isn’t unique to mod_shib; other authentication modules use it as well, like mod_auth_kerb, mod_auth_gssapi, and mod_auth_cas. It’s such a well-adopted scheme that Symfony ships with a remote_user authentication listener as of 2.6, which makes it very easy to integrate with them. Check it out if your needs are simpler, i.e. no custom authentication-starter/redirect logic, etc.
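If that simpler scheme fits your needs, the firewall configuration is roughly the following (a sketch based on the Symfony security configuration; the user key names the server variable the listener reads the username from):

```yaml
# app/config/security.yml -- remote_user sketch
security:
  firewalls:
    main:
      remote_user:
        user: REMOTE_USER  # server variable holding the authenticated username
```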

Mar 22 2017

Windows 10 is the only Windows release Acquia's BLT officially supports. But there are still many people who use Windows 7 and 8, and most of these people don't have control over what version of Windows they use.

Windows 7 - Drupal VM and BLT Setup Guide

Drupal VM has supported Windows 7, 8, and 10 since I started building it a few years ago (at that time I was still running Windows 7). With a little finesse, you can actually get an entire modern BLT-based Drupal 8 project running on Windows 7 or 8, as long as you do all the right things, as this blog post will demonstrate.

Note that this setup is not recommended—you should try as hard as you can to either upgrade to Windows 10, or switch to Linux or macOS for your development workstation, as setup and debugging are much easier on a more modern OS. However, if you're a sucker for pain, have at it! The process below is akin to the Apollo 13 Command Module startup sequence:

It required the crew—in particular the command module pilot, Swigert—to perform the entire power-up procedure in the blind. If he made a mistake, by the time the instrumentation was turned on and the error was detected, it could be too late to fix. But, as a good flight controller should, Aaron was confident his sequence was the right thing to do.

Following the instructions below, you'll be akin to Swigert: there are a number of things you have to get working correctly (in the right sequence) before BLT and Drupal VM can work together on Windows 7 or Windows 8. And once you have your environment set up, you should do everything besides editing source files and running Git commands inside the VM (and, if you want, you can actually do everything inside the VM). For more on why this is the case, please read my earlier post: Developing with VirtualBox and Vagrant on Windows.

Here's a video overview of the entire process (see the detailed instructions below the video):

[embedded content]

Upgrade PowerShell

Windows 7 ships with a very old version of PowerShell (2.0) which is incompatible with Vagrant, and causes vagrant up to hang. To work around this problem, you will need to upgrade to PowerShell 4.0:

  1. Visit the How to Install Windows PowerShell 4.0 guide.
  2. Download the .msu file appropriate for your system (most likely Windows6.1-KB2819745-x64-MultiPkg.msu).
  3. Open the downloaded installer.
  4. Run through the install wizard.
  5. Restart your computer when the installation completes.
  6. Open Powershell and enter the command $PSVersionTable.PSVersion to verify you're running major version 4 or later.

Install XAMPP (for PHP)

XAMPP will be used for its PHP installation, but it won't be used for actually running the site; it's just an easy way to get PHP installed and accessible on your Windows computer.

  1. Download XAMPP (PHP 5.6.x version).
  2. Run the XAMPP installer.
  3. XAMPP might warn that UAC is enabled; ignore this warning, you don't need to bypass UAC to just run PHP.
  4. On the 'Select Components' screen, only choose "Apache", "PHP", and "Fake Sendmail" (you don't need to install any of the other components).
  5. Install in the C:\xampp directory.
  6. Uncheck the Bitnami checkbox.
  7. When prompted, allow access to the Apache HTTP server included with XAMPP.
  8. Uncheck the 'Start the control panel when finished' checkbox and Finish the installation.
  9. Verify that PHP is installed correctly:
    1. Open Powershell.
    2. Run the command: C:\xampp\php\php.exe -v

Note: If you have PHP installed and working through some other mechanism, that's okay too. The key is that we need a PHP executable that can be run from the CLI.

Set up Cmder

  1. Download Cmder - the 'full' installation.
  2. Expand the zip file archive.
  3. Open the cmder directory and right-click on the Cmder executable, then choose 'Run as administrator'.
    • If you are prompted to allow access to Cmder utilities, grant that access.
  4. Create an alias to PHP: alias php=C:\xampp\php\php.exe $*
  5. Verify that the PHP alias is working correctly: php -v (should return the installed PHP version).

Cmder is preferred over Cygwin because its terminal emulator works slightly better than mintty, which is included with Cygwin. Cygwin can also be made to work, as long as you install the following packages during Cygwin setup: openssh, curl, unzip, git.

Configure Git (inside Cmder)

There are three commands you should run (at a minimum) to configure Git so it can work correctly with the repository:

  1. git config --global user.name "John Doe" (use your own name)
  2. git config --global user.email [email protected] (use an email address associated with your GitHub account)
  3. git config --global core.autocrlf true (to ensure correct line endings)

Install Composer (inside Cmder)

  1. Make sure you're in your home directory: cd C:\Users\[yourusername].
  2. Run the following commands to download and install Composer:
    1. Download Composer: php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    2. Build the Composer executable: php composer-setup.php
    3. Delete the setup file: rm -f composer-setup.php
  3. Test that Composer is working by running php composer.phar --version.

Create a Private/Public SSH Key Pair to authenticate to GitHub and Acquia Cloud

Generate an SSH key

All the following commands will be run inside of Cmder.

  1. Inside of Cmder, make sure you're in your home directory: C:\Users\[yourusername]
  2. Run the command: ssh-keygen -t rsa -b 4096 -C "[email protected]"
    • When prompted for where to save the key, press enter/return (to choose the default path).
    • When prompted to enter a passphrase, press enter/return (to leave it empty).
    • When prompted to enter the same passphrase, press enter/return (to leave it empty).
  3. To get the value of the new public key, run the command: cat .ssh/id_rsa.pub
    • Highlight the text that is output (starts with ssh-rsa and ends with your email address)
    • Copy the text of the public key

Note: The public key (id_rsa.pub) can be freely shared and added to services where you need to connect. NEVER share the private key (id_rsa), as that is a secret that identifies you personally to the services to which you're connecting.

Add the SSH key to your GitHub account

  1. Log into GitHub
  2. Go to your account settings (click on your profile image in the top right, then choose 'Settings').
  3. Click on "SSH and GPG keys" in the 'Personal settings' area.
  4. Click "New SSH key" to add your new SSH key.
  5. Add a title (like "Work Windows 7")
  6. Paste your key in the "Key" field.
  7. Click "Add SSH Key" to save the key.

Add the SSH key to your Acquia Cloud account

  1. Log into Acquia Cloud
  2. Go to your Profile (click on your profile image in the top right, then choose 'Edit Profile').
  3. Click on "Credentials"
  4. Under "SSH Keys", click "Add SSH Key"
  5. Add a nickname (like "Work_Windows_7")
  6. Paste your key in the "Public key" field.
  7. Click "Add Key" to save the key.

Set up the BLT-based Drupal project

Fork the BLT project into your own GitHub account

  1. Log into GitHub.
  2. Visit your project's GitHub repository page.
  3. Click the "Fork" button at the top right to create a clone of the project in your own account.
  4. After GitHub has created the Fork, you'll be taken to your Fork's project page.

Clone the BLT project to your computer

  1. Click the "Clone or download" button on your Fork's project page.
  2. Copy the URL in the 'Clone with SSH' popup that appears.
  3. Go back to Cmder, and cd into whatever directory you want to store the project on your computer.
  4. Enter the command: git clone [URL that you copied in step 2 above]
    • If you receive a prompt asking if you want to connect to github.com, type yes and press enter/return.
    • Wait for Git to clone the project to your local computer.

Note: At this time, you may also want to add the 'canonical' GitHub repository as an upstream remote repo so you can easily synchronize your codebase with the repository you forked earlier. That way it's easy for you to make sure you're always working on the latest code. This also makes it easier to do things like open Pull Requests on GitHub from the command line using Hub.

Install BLT project dependencies

  1. Change directory into the project directory (cd projectname).
  2. Ensure you're on the master branch by checking the repository's status: git status
  3. Move the composer.phar file into the project directory: mv ..\composer.phar .\composer.phar
  4. Run php composer.phar install --prefer-source to ensure everything needed for the project is installed.
    • Wait for Composer to finish installing all dependencies. This could take 10-20 minutes the first time it's run.
    • If the installation times out, run php composer.phar install --prefer-source again to pick up where it left off.
    • The installation may warn that patches can't be applied; ignore this warning (we'll reinstall dependencies later).
  5. Move the composer.phar file back out of the project directory: mv composer.phar ..\composer.phar (in case you ever need it on your host computer again).

Note: You need to use the --prefer-source option when installing to prevent the ZipArchive::extractTo(): Full extraction path exceed MAXPATHLEN (260) error. See this Stack Overflow answer for details.

Set up the Virtual Machine (Drupal VM)

  1. Download and install Vagrant.
  2. Download and install VirtualBox.
  3. Restart your computer after installation.
  4. Open Cmder, and cd into the project folder (where you cloned the project).
  5. Install recommended Vagrant plugins: vagrant plugin install vagrant-hostsupdater vagrant-vbguest vagrant-cachier
    • Note that if you get an error about a space in your username, you will need to add a folder and VAGRANT_HOME environment variable (see this post for more info).
  6. Create a config file for your VM instance: touch box/local.config.yml
  7. Inside the config file, add the following contents (you can use vi or some other Unix-line-ending-compatible editor to modify the file):

    vagrant_synced_folder_default_type: ""
    vagrant_synced_folders:
      - local_path: .
        destination: /var/www/projectname
        type: ""
    post_provision_scripts: []
    
    
  8. Run vagrant up.

    • This will download a Linux box image, install all the prerequisites, then configure it for the Drupal site.
    • This could take 10-20 minutes (or longer), depending on your PC's speed and Internet connection.
    • If the command stalls out and seems to not do anything for more than a few minutes, you may need to restart your computer, then run vagrant destroy -f to destroy the VM, then run vagrant up again to start fresh.

Note: If you encounter any errors during provisioning, kick off another provisioning run by running vagrant provision again. You may also want to reload the VM to make sure all the configuration is correct after a failed provision—to do that, run vagrant reload.

Log in and start working inside the VM

  1. First make sure that ssh-agent is running and has your SSH key (created earlier) loaded: start-ssh-agent
  2. Run vagrant ssh to log into the VM (your SSH key will be used inside the VM via ssh-agent).
  3. Change directories into the project directory: cd /var/www/projectname
  4. Delete the Composer vendor directory: rm -rf vendor
  5. Also delete downloaded dependencies—in Windows Explorer, remove the contents of the following directories:
    • docroot/core
    • docroot/libraries
    • docroot/modules/contrib
    • docroot/themes/contrib
    • docroot/profiles/contrib
  6. Run composer install so all the packages will be reinstalled and linked correctly.
  7. Manually run the BLT post-provision shell script to configure BLT inside the VM: ./vendor/acquia/blt/scripts/drupal-vm/post-provision.sh
  8. Log out (exit), then log back in (vagrant ssh) and cd back into the project directory.
  9. Use BLT to pull down the latest version of the database and finish building your local environment: blt local:refresh

After a few minutes, the site should be running locally, accessible at http://local.projectname.com/

Note: ssh-agent functionality is outside the scope of this post. Basically, it allows you to use your SSH keys on your host machine in other places (including inside Drupal VM) without worrying about manually copying anything. If you want ssh-agent to automatically start whenever you open an instance of Cmder, please see Cmder's documentation: SSH Agent in Cmder.

Mar 22 2017
Mar 22

Between the rush of product updates we're putting out lately, a moment of reflection...

Like many other Drupal shops and theme/product developers, I've been taking it easy with major investment in D8. But times are changing: Google searches for Drupal 8 now outnumber searches for Drupal 7. This is by no means a guarantee that D8 is a clear winner, but to me it is a sign of progress, and it inspires enough confidence to push ahead with our Drupal 8 product upgrades. SooperThemes is on schedule to release our Drupal themes and modules on Drupal 8 soon, and I'm sure it will be great for us and our customers.

2017 will be an interesting year for Drupal, a year in which Drupal 8 will really show whether it can be as popular as its predecessor. The lines in the chart might be crossing, but Drupal 8 still has some way to go before it is as popular as 7. Understanding that Drupal 8 is more geared towards developers, one might say it never will be, but I think it's important for the open web that Drupal stays competitive in the low-end market. Start-ups like Tesla and SpaceX have demonstrated how Drupal can grow along with your business all the way to IPO and beyond.

Is your business ready for Drupal 8?

Personally I think I will need a month or two before I can say I'm totally comfortable with shifting development focus to Drupal 8. Most of my existing customers are on Drupal 7, and my Drupal 7 expertise and products will not be irrelevant any time soon. One thing that is holding me back is uncertainty about media library features in Drupal 8; I hope the D8media team will be successful with their awesome work to put this critical feature set in core.

If you are a Drupal developer, themer, or business owner, how do you feel about Drupal 8? Are you getting more business for Drupal 8 than Drupal 7? How is your experience with training yourself or your staff to work with Drupal 8 and its more object-oriented code?

Let me know in the comments if you have anything to share about what Drupal 8 means to you!

Mar 22 2017
Mar 22

As the name itself suggests, hook_mail_alter() is used to alter an email created with drupal_mail() in D7 / MailManagerInterface->mail() in D8. hook_mail_alter() allows modification of email messages, including adding and/or changing message text, message fields, and message headers.

Email sent by means other than drupal_mail() will not invoke hook_mail_alter(). All core modules use drupal_mail(), and using drupal_mail() is always recommended, though it is not mandatory.
 

Syntax: hook_mail_alter(&$message)

Parameters

$message: An array containing the message data. Keys in this array include:

  • 'id': The ID of the message.
  • 'to': The address or addresses the message will be sent to.
  • 'from': The address the message will be marked as being from, which is either a custom address or the site-wide default email address.
  • 'subject': The subject of the email to be sent. The subject should not contain any newline characters.
  • 'body': An array of strings containing the message text. The message body is created by concatenating the individual array strings into a single text string.
  • 'headers': Associative array containing mail headers, such as From, Sender, MIME-Version, Content-Type, etc.
  • 'params': An array of optional parameters supplied by the caller of drupal_mail() that is used to build the message before hook_mail_alter() is invoked.
  • 'language': The language object used to build the message before hook_mail_alter() is invoked.
  • 'send': Set to FALSE to abort sending this email message.

Why am I discussing hook_mail_alter()?

Recently I was working on a Drupal 8 project where the client wanted formatted HTML mail for the contact form. Whenever an anonymous user fills out and submits the contact form, it triggers an automated mail to the admin user. If you ever get a chance, just look at that oddly formatted email. Contact form mail is triggered from the core Contact module in D8.

To alter the email format in Drupal 8, we decided to write a custom module that implements hook_mail_alter() to modify the outgoing email message sent through drupal_mail().


Let’s start with code:

To implement hook_mail_alter(), you can either write your own custom module or put the hook in any existing custom module.

Sample Source code:

/**
 * Implements hook_mail_alter().
 */
function mymodule_mail_alter(&$message) {
  if (isset($message['id']) && $message['id'] == 'contact_page_mail') {
    /** @var \Drupal\contact\Entity\Message $contact_message */
    $contact_message = $message['params']['contact_message'];
    // Get sender's name.
    $sender_name = $contact_message->getSenderName();
    // Get sender's mail.
    $sender_mail = $contact_message->getSenderMail();
    // Get subject.
    $subject = $contact_message->getSubject();
    // Get message.
    $message_body = $contact_message->getMessage();
    // Get the value of "field_request" field.
    $request_value = $contact_message->get('field_request')->getValue();
  }
}

In the source code above, you can see that we added a conditional statement so that the changes impact only the specific mail we want to alter.

$message['params']['contact_message'] holds the message entity, which contains all the values from the contact form (contact_message is the entity type). We can also fetch custom field values using the get() method.
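To make the altered message render as HTML, the message headers also need an HTML Content-Type. Because hook_mail_alter() receives the message as a plain array, the idea can be sketched standalone; the body markup and the sample array below are illustrative, not the project's actual code:

```php
<?php

/**
 * Implements hook_mail_alter() (sketch only).
 */
function mymodule_mail_alter(array &$message) {
  if (isset($message['id']) && $message['id'] == 'contact_page_mail') {
    // Tell mail clients to render the body as HTML instead of plain text.
    $message['headers']['Content-Type'] = 'text/html; charset=UTF-8';
    // Rebuild the body with simple HTML markup (illustrative only).
    $message['body'] = [
      '<h2>New contact request</h2><p>' . $message['params']['message'] . '</p>',
    ];
  }
}

// Simulate the array Drupal would pass to the hook.
$message = [
  'id' => 'contact_page_mail',
  'headers' => [],
  'params' => ['message' => 'Hello from the contact form'],
  'body' => ['Hello from the contact form'],
];
mymodule_mail_alter($message);
print $message['headers']['Content-Type']; // text/html; charset=UTF-8
```

Note that Drupal's default PHP mail backend converts outgoing messages to plain text, so in practice a mail module such as Swift Mailer or Mail System is typically needed for the HTML to actually reach the recipient as HTML.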

Source code: https://github.com/xaiwant/D8MailAlter

Conclusion: hook_mail_alter() is the solution for customizing the body of mail sent through drupal_mail(). In this blog I have shared my approach to sending custom HTML-formatted mail triggered by the core Drupal 8 contact form.
 

Mar 22 2017
Mar 22

Presentation slides can be found here: https://legaudinier.github.io/Ready-Git-Set-Go/#

I fully embraced the motto “go big or go home” when I started to think about my first solo community presentation for Stanford Drupal Camp 2017. I wanted to force myself to learn a subject well enough that I could explain it to others. I like a challenge, so I set my eyes on understanding the fundamentals of Git.

Originally I wanted to name my talk “Git Destroys Lives and Tears Apart Families,” but the deeper I dove into Git, the more I realized that understanding the nuts and bolts of Git was not that hard or scary.

The light bulb above my head turned on when I realized that a good portion of what I do with Git only exists on my local computer. After this realization, I broke my presentation into four digestible chunks.

First, I explained a local Git Project/Repository, including the steps to move a code change from the Working Directory, to the Staging Area and finally to the Git Directory.

Stanford Camp 2017 - Git Project Workflow

Second, I described how teams can use Git by utilizing remote repositories.

Stanford Camp 2017 - Git Remotes

Third, I explained what makes Git stand out from other version control systems: Git’s fast and powerful branching feature.

Stanford Camp 2017 - Git Branching

Lastly, I explored some common Git situations: conflicts, merging, rebasing, cherry-picking, and stashing.

Stanford Camp 2017 - Three Way Merge Commit

I purposely didn’t give many shortcuts, because I think it’s better to force someone to go through a slightly longer process in order to fully understand how the process works. I also asked my listeners to be patient, because Git is complicated. I sometimes used words before they were defined.

Overall, my presentation merely skimmed the surface of Git. Perhaps I will give a talk in the future that delves deeper into Git? Let’s see!

I used reveal.js for this presentation and I highly recommend it! Learn more about reveal.js here: https://github.com/hakimel/reveal.js/

Don’t get me wrong, Git can get excessively and unnecessarily complicated really fast. Just take a look at the biggest and weirdest commits in Linux Git history. I am not naive about the monster that Git can be if you feed it after midnight. However, for a lot of situations, Git really isn’t that hard to work with and understand.

Mar 22 2017
jam
Mar 22
My trusty microphone, camera, and I recorded a few great conversations at DrupalCon in Mumbai that have never been released until now. Today, a conversation with Rakesh James, who credits Dries for giving him a way to live and support his family with Drupal. Rakesh is an extraordinary and generous person; he's personally paid that forward, giving others in India the chance to change their lives, too, by teaching hundreds of people Drupal and giving them a shot at a career. He's also a top 30 contributor to Drupal 8 core.
Mar 22 2017
Mar 22

Overview

Drupal 8 core provides solid REST capabilities out-of-the-box, which is great for integrating with a web service or allowing a third-party application to consume content. However, the REST output provided by Drupal core has a fixed structure that may not match what the consuming application expects.

In come normalizers, which help us alter the REST response to our liking. For this example, we will be looking at altering the JSON response for node entities.
 

Getting Started

First, let’s install and enable the latest stable version of the “REST UI” module:

composer require drupal/restui
drush en restui -y

Go to the REST UI page (/admin/config/services/rest) and enable the “Content” resource:

You should see the resource enabled:


Enable “GET”, check “json” as the format, and “cookie” as the authentication provider: 

Enable Drupal core’s “rest” module. This will also enable the “Serialization” module as it is a dependency:

Create a test node and fill in one or all of the fields. We will be requesting and altering the structure of the core node REST resource for this node output.

After you’ve created your node, append ?_format=json to the end of your node’s URL (so it looks something like /node/1?_format=json) and access that page. You should see a JSON dump with the field names and the values for the node entity similar to:

This is great, but what if we wanted this output to be structured differently? We can normalize this output!


Creating the Normalizers

Create a custom module that will contain our custom normalizers. The module structure should look like:

custom/
└── example_normalizer
    ├── example_normalizer.info.yml
    ├── example_normalizer.module
    ├── example_normalizer.services.yml
    └── src
        └── Normalizer
            ├── ArticleNodeEntityNormalizer.php
            ├── CustomTypedDataNormalizer.php
            └── NodeEntityNormalizer.php

Each normalizer must extend NormalizerBase and implement NormalizerInterface. At a minimum, the normalizer must define:

  • protected $supportedInterfaceOrClass - the interface or class that the normalizer supports.

  • public function normalize($object, $format = null, array $context = array()) {} - performs the actual “normalizing” of an object into a set of arrays/scalars.

Let’s write a normalizer to remove those nested field “value” keys:

CustomTypedDataNormalizer.php

<?php

namespace Drupal\example_normalizer\Normalizer;

use Drupal\serialization\Normalizer\NormalizerBase;

/**
 * Converts typed data objects to arrays.
 */
class CustomTypedDataNormalizer extends NormalizerBase {

  /**
   * The interface or class that this Normalizer supports.
   *
   * @var string
   */
  protected $supportedInterfaceOrClass = 'Drupal\Core\TypedData\TypedDataInterface';

  /**
   * {@inheritdoc}
   */
  public function normalize($object, $format = NULL, array $context = array()) {
    $value = $object->getValue();
    if (isset($value[0]) && isset($value[0]['value'])) {
      $value = $value[0]['value'];
    }
    return $value;
  }

}

We set our $supportedInterfaceOrClass protected property to Drupal\Core\TypedData\TypedDataInterface so we can make some low-level modifications to the values of the entity. This means that this normalizer supports any object that is an instance of Drupal\Core\TypedData\TypedDataInterface. In the normalize() method, we check if the value contains the [0]['value'] elements, and if so, just return the plain value stored there. This effectively removes the “value” keys from the output.

example_normalizer.services.yml
We need to allow Drupal to detect this normalizer, so we register it in our module's *.services.yml file and tag the service with “normalizer”:

services:
  example_normalizer.typed_data:
    class: Drupal\example_normalizer\Normalizer\CustomTypedDataNormalizer
    tags:
      - { name: normalizer, priority: 2 }

One important thing to notice here is the “priority” value. By default, the “serialization” module provides a normalizer for typed data:

  serializer.normalizer.typed_data:
    class: Drupal\serialization\Normalizer\TypedDataNormalizer
    tags:
      - { name: normalizer }

In order to have our custom normalizer get picked up first, we need to set a priority higher than the one that already exists that supports the same interface/class. When the serializer requests the normalize operation, it will process each normalizer sequentially until it finds one that applies. This is the “Chain-of-Responsibility” (COR) pattern used by Drupal 8 where each service processes the objects it supports and the rest are passed to the next processing service in the chain.
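The chain can be illustrated with a few lines of plain PHP. This is a toy sketch of the pattern, not Drupal's actual Serializer classes; every name here is invented for illustration:

```php
<?php

// Toy sketch of the chain-of-responsibility dispatch described above.
interface SimpleNormalizer {
  public function supports($data);
  public function normalize($data);
}

// Higher-priority normalizer: flattens [['value' => ...]] structures.
class ValueKeyNormalizer implements SimpleNormalizer {
  public function supports($data) {
    return is_array($data) && isset($data[0]['value']);
  }
  public function normalize($data) {
    return $data[0]['value'];
  }
}

// Fallback normalizer: claims everything and passes it through unchanged.
class PassthroughNormalizer implements SimpleNormalizer {
  public function supports($data) {
    return TRUE;
  }
  public function normalize($data) {
    return $data;
  }
}

// The "serializer" walks the chain (already sorted by priority, highest
// first) and delegates to the first normalizer that supports the data.
function serialize_value($data, array $normalizers) {
  foreach ($normalizers as $normalizer) {
    if ($normalizer->supports($data)) {
      return $normalizer->normalize($data);
    }
  }
  return NULL;
}

$chain = [new ValueKeyNormalizer(), new PassthroughNormalizer()];
print serialize_value([['value' => 'Hello']], $chain); // Hello
```

Raising a normalizer's priority is the equivalent of moving it earlier in $chain: it gets first refusal on every piece of data.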

Make sure to clear the cache so the new normalizer service is detected. 

If we go to our JSON output for the node again, we can see that the output has changed a bit. We no longer see the nested “value” keys being displayed (and it looks much cleaner as well):

Great! Now, what if we want to add some custom values to our output? Let’s say we want the link to the node and an ISO 8601-formatted “changed” timestamp.

Altering node entity JSON output

We can create a normalizer that will make these modifications. Add an entry in the *.services.yml file:

  example_normalizer.node_entity:
    class: Drupal\example_normalizer\Normalizer\NodeEntityNormalizer
    arguments: ['@entity.manager']
    tags:
      - { name: normalizer, priority: 8 }

And our normalizer:

NodeEntityNormalizer.php

<?php

namespace Drupal\example_normalizer\Normalizer;

use Drupal\serialization\Normalizer\ContentEntityNormalizer;
use Drupal\Core\Datetime\DrupalDateTime;

/**
 * Converts the Drupal entity object structures to a normalized array.
 */
class NodeEntityNormalizer extends ContentEntityNormalizer {

  /**
   * The interface or class that this Normalizer supports.
   *
   * @var string
   */
  protected $supportedInterfaceOrClass = 'Drupal\node\NodeInterface';

  /**
   * {@inheritdoc}
   */
  public function normalize($entity, $format = NULL, array $context = array()) {
    $attributes = parent::normalize($entity, $format, $context);
    // Convert the 'changed' timestamp to ISO 8601 format.
    $changed_timestamp = $entity->getChangedTime();
    $changed_date = DrupalDateTime::createFromTimestamp($changed_timestamp);
    $attributes['changed_iso8601'] = $changed_date->format('c');
    // The link to the node entity.
    $attributes['link'] = $entity->toUrl()->toString();
    // Re-sort the array after our new additions.
    ksort($attributes);
    // Return the $attributes with our new values.
    return $attributes;
  }

}


Similar to before, we have to define our supported interface or class. For this normalizer, we only want to support node entities, so we set it to Drupal\node\NodeInterface (which means any object that implements Drupal\node\NodeInterface).

Our custom node normalizer extends the Drupal\serialization\Normalizer\ContentEntityNormalizer class that is provided by the “serialization” module. The only thing we want to do is append two new values, “link” and “changed_iso8601”, to the output that is already provided.

To format our timestamp, we get the timestamp from the entity object, create a DrupalDateTime object, and then format it using the PHP date format “c” character for ISO 8601. This value will be assigned to our new key on the $attributes array that holds all the values.
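The “c” format character is plain PHP date formatting rather than anything Drupal-specific. The same ISO 8601 conversion, shown standalone with an arbitrary timestamp (gmdate() is used here only to keep the output timezone-independent; DrupalDateTime would use the site's configured timezone instead):

```php
<?php

// An arbitrary Unix timestamp, formatted as ISO 8601 with the 'c' format.
$changed_timestamp = 1490188800;
print gmdate('c', $changed_timestamp); // 2017-03-22T13:20:00+00:00
```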

We will create the “link” value by using the toUrl() and toString() methods to get the URL which gets assigned to the “link” key of the $attributes array. 

After clearing cache and visiting the node output again, we do indeed see our new additions:

Altering a specific node entity type JSON output

So, we were able to alter the output for all nodes, but there may be cases where we would only want to alter the JSON output of specific node types. Fortunately, there isn’t much more to do than what we already learned. We will create a normalizer that contains a custom “changed” timestamp format that will only apply to “article” nodes.

Add another entry to our *.services.yml file:

  example_normalizer.article_node_entity:
    class: Drupal\example_normalizer\Normalizer\ArticleNodeEntityNormalizer
    arguments: ['@entity.manager']
    tags:
      - { name: normalizer, priority: 9 }


ArticleNodeEntityNormalizer.php

<?php

namespace Drupal\example_normalizer\Normalizer;

use Drupal\serialization\Normalizer\ContentEntityNormalizer;
use Drupal\Core\Datetime\DrupalDateTime;
use Drupal\node\NodeInterface;

/**
 * Converts the Drupal entity object structures to a normalized array.
 */
class ArticleNodeEntityNormalizer extends ContentEntityNormalizer {

  /**
   * The interface or class that this Normalizer supports.
   *
   * @var string
   */
  protected $supportedInterfaceOrClass = 'Drupal\node\NodeInterface';

  /**
   * {@inheritdoc}
   */
  public function supportsNormalization($data, $format = NULL) {
    // If we aren't dealing with an object or the format is not supported,
    // return now.
    if (!is_object($data) || !$this->checkFormat($format)) {
      return FALSE;
    }
    // This custom normalizer should be supported for "Article" nodes.
    if ($data instanceof NodeInterface && $data->getType() == 'article') {
      return TRUE;
    }
    // Otherwise, this normalizer does not support the $data object.
    return FALSE;
  }

  /**
   * {@inheritdoc}
   */
  public function normalize($entity, $format = NULL, array $context = array()) {
    $attributes = parent::normalize($entity, $format, $context);
    // Convert the 'changed' timestamp to ISO 8601 format.
    $changed_timestamp = $entity->getChangedTime();
    $changed_date = DrupalDateTime::createFromTimestamp($changed_timestamp);
    $attributes['article_changed_format'] = $changed_date->format('m/d/Y');
    // Re-sort the array after our new addition.
    ksort($attributes);
    // Return the $attributes with our new value.
    return $attributes;
  }

}


In this normalizer, you will notice that we're defining a new method: public function supportsNormalization($data, $format = NULL) {}. It allows for more granular checks on objects that are instances of the class/interface defined in $supportedInterfaceOrClass. In our supportsNormalization(), we check if the type of the node is “article” using the getType() method. If so, it returns TRUE, indicating that this normalizer supports the node. Otherwise, it returns FALSE, and following the chain-of-responsibility pattern, the serializer processes the next normalizer in sequence until it finds one that applies to the object.

Let’s take a look at the JSON output of a test “article” node. We do indeed see our custom attribute “article_changed_format” from our custom “article” normalizer:

When we look at the REST response of another content type (like a “page”), we do not see this custom attribute because it is not an “article” and the normalizer did not apply to it. It does, however, pick up the next normalizer in sequence, which happens to be the custom node normalizer we created earlier:

Notes

Mar 22 2017
Mar 22

Tomorrow I'll be giving a workshop about Drupal 8 media. As part of it we'll build a "media" site from scratch. We will start with the standard Drupal installation, add modules and configuration, and see how far we can get.

If you are planning to attend the workshop and want to be fully productive, I'd ask you to take some time to prepare your development environment. We will need a Drupal 8 checkout with the following modules:

Besides that we'll also need Dropzone and Slick libraries, which you can install based on the docs provided in the README files of the respective modules ([1], [2]).

You can download all dependencies manually or use the project template that I provided for you. Simply clone the repository and run composer install in the project root.

Mar 21 2017
Mar 21

The first meeting of the Drupal User Group Bodensee (Lake Constance) in 2017 will be about migrating to Drupal 8.

The new modules allow you to import content into Drupal 8 from various sources like Drupal 6 and 7, other content management systems like WordPress, or from files. In contrast to former Drupal versions, they don't alter the source but rather copy the data to its new destination.

It is a plugin-based system, that allows importing of data from various formats. You can easily extend it by writing your own plugins.

I will show you what it takes to run a migration and how to create your own migrations with configuration files and plugins.

When & Where

Monday, 27 March 2017, 5.30 pm

Tojio GmbH
Turmstr. 20
D-78464 Konstanz

Free entrance

Mar 21 2017
Mar 21

We need you!

Want to give back to the Drupal Community without writing a line of code? Volunteer to help out at MidCamp 2017.  We’re looking for people to help with all kinds of tasks including: 

Setup/Teardown

  • For setup, we need help making sure registration is ready to roll, and getting T-shirts ready to move.

  • For teardown, we need to undo all the setup including packing up all the rooms, the registration desk, cleaning signage, and making it look like we were never there.

Registration and Ticketing

Room Monitors

  • Pick your sessions and count heads, make sure the speakers have what they need to survive, and help with the in-room A/V

If you’re interested in volunteering or would like to find out more, please contact us.

Volunteer!

Mar 21 2017
Mar 21

Core contributors are currently working on a solution for #2766957 Forward revisions + translation UI can result in forked draft revisions. This issue can affect users of Workbench Moderation (that is, users of Lightning), too.

The problem presents itself when:

  • The site uses Lightning Workflow
  • Content Translation is enabled with at least one additional language defined (let's say English and Spanish) 
  • A piece of content exists where:
    • There is a published English and a published Spanish version of the content.
    • Both the English and Spanish version have unpublished edits (AKA forward revisions).
  • An editor publishes the forward revision for either the English or Spanish version (let's say English).

The result is the existing published Spanish version becomes unpublished - even though the editor took no action on that version at all. This is because the system is marking the unpublished Spanish version as the default revision.

A workaround exists in the Content Translation Workflow module. If you are still using Drupal core 8.2.x (which, as of this writing, Lightning is) you will also need a core patch that adds a getLoadedRevisionId() method to ContentEntityBase.

Workaround Summary

  1. Apply this core patch.
  2. Add the Content Translation Workflow module to your codebase and enable it.

For more information and demonstration of the bug and the fix, see the video below.

[embedded content]

Note: This is an alpha module with known issues and, by definition, is not covered by the Drupal Security policy and may have security vulnerabilities publicly disclosed.

Note: The Content Translation Workflow module works around the original issue by creating an additional revision based on the current default revision. This preserves existing forward revisions and their content, but effectively makes them past (rather than forward) revisions.

Bonus: The author of Content Translation Workflow, dawehner, has also created a companion module Content Translation Revision which adds a nice UI to translate individual revisions.

Mar 21 2017
Mar 21

Competitive analysis is an exercise whose importance transcends the borders of many industries, including healthcare. By taking a look at how your site compares to your competitors, you can ultimately make changes that allow you to better serve your patients' specific needs.

In recognition of Women’s History Month, we are focusing on women’s health, specifically heart disease, the number one cause of death for women in the United States. We are also honing in on DrupalCon-host city Baltimore, which has launched several initiatives to combat cardiovascular disease. The goal is to take a look at how two health systems in Charm City categorize and present information about cardiovascular disease on their public-facing websites.

Let’s imagine you have been tasked by the American Heart Association (AHA) to compare and evaluate websites of local health systems in the field of cardiology on how well they serve women patients who suffer from cardiovascular disease. Where do we begin? What competitors will we look at? What dimensions or features/site attributes are we comparing? What key tasks are important to patients and caregivers? How does search impact the site visitor journey to each competitor website?

By the time you finish reading this post, you will have the know-how to do a competitive analysis for a health-system or hospital website with a focus on particular health specialties and demographics. You will be able to see how your website measures against the competition at the specialty level and also in meeting the needs of specific patient and caregiver audiences.

What is competitive analysis?

As we discussed in Competitive Analysis on a Budget, competitive analysis is a user experience research technique that can help you see how your site compares with competitor websites in terms of content, design, and functionality. It can also lead to better decision-making when selecting new design and technical features for your site (e.g. search filter terms or search listing display). In this post, we’ll focus on the navigation and internal menu labels as our dimensions.

A Tale of Two Hospitals

Johns Hopkins Medicine and the University of Maryland Medical Center are two large university hospitals local to Baltimore that have centers dedicated to women and heart disease. The two centers are considered direct competitors because both offer the same service and function in the same way.

Fast Facts for Context

  • Women’s heart disease symptoms are complex and often differ from men’s symptoms.
  • Women suffering from heart disease may not experience any symptoms at all.
  • In 2015, the Baltimore City Health Department released a report that cited cardiovascular disease as the leading cause of death in the city.
  • According to the 2015 Maryland Vital Statistics Annual Report, approximately 1 in 4 deaths in the Baltimore Metro Area were related to heart disease.
  • National and statewide statistics confirm cardiovascular disease is the leading cause of death for men and women.

It all begins with search

Search plays a key role in how patients and caregivers, especially women, find information about health conditions and treatment. In 2013, Pew Research’s Health Online Report noted that “women [were] more likely than men to go online to figure out a possible diagnosis.” The report also noted that “77% of online health seekers say they began at a search engine such as Google, Bing, or Yahoo.”

Specific search queries will likely bring this group of site visitors to a specific page, rather than to the homepage. This means the information architecture of health system internal pages plays a key role in providing patients and caregivers with information and resources about medical conditions and services. Competitive analysis can help us understand if and how these pages are meeting patient and caregiver needs.

Keywords are key

Keyword selection drastically impacts the results that are returned during a patient and caregiver search query. To demonstrate this, let’s start with a basic keyword search to evaluate how sites are optimizing search for topics like women and heart disease. As shown below, keywords can transform the information-seeking experience for women.

Figure 1: Google search with “women heart disease baltimore md” as keywords

The first figure shows the search query results for “women heart disease baltimore md.” Johns Hopkins Women’s Cardiovascular Health Center and University of Maryland Medical Center Women’s Heart Program landing pages are both listed in the search results (Figures 2 and 3).

Figure 2: Johns Hopkins Women’s Cardiovascular Health Center landing page
Figure 3: University of Maryland Medical Center Women’s Heart Health Program landing page
Figure 4: Google search with “heart disease hospital baltimore md”

Search significantly impacts patient and caregiver access to health and hospital information. Google provides results based on previous search behavior, so results may vary by browser and search history, among other factors. We tried these terms using a private session and when logged into Google and saw little to no variance.

As shown in Figure 4, using different keywords in the search query yields different search results. “Heart disease hospital baltimore md” returns Johns Hopkins Heart & Vascular Institute as one of the top search results, but University of Maryland Medical Center’s Heart and Vascular Center is not returned as a top result, whether logged into Google Chrome or during a private session.

This is important to note because the University of Maryland Medical Center may want to look into methods to improve search engine optimization. There are different ways to address the absence of your website or landing page, product or service at the top of the site visitor’s search results listing.

Menu hierarchy and landing pages - when alphabetization complicates user experience

If women with heart disease choose keywords like “heart disease hospital baltimore md” and do not indicate their gender in their query, they are brought to the Heart & Vascular Health landing pages for each respective health system. Both landing pages use alphabetization to organize centers and programs. Because the centers or programs dedicated to women and heart disease begin with “W,” they are situated at the bottom of the internal navigation.

This may pose a challenge to patients and caregivers entering the site from search queries that omit the word “women” (i.e. heart disease hospital baltimore md). These search query examples are not meant to represent the most common queries for people looking for information about heart disease in Baltimore; rather they demonstrate how different search queries can yield different results for people seeking this information.

Figure 5: Johns Hopkins Heart & Vascular Institute landing page
Figure 6: University of Maryland Medical Center Heart and Vascular Center

Internal Menu Labeling and Nesting

Now that we see how search impacts visitor pathways to the health system sites, let’s take a closer look at how Johns Hopkins Medicine and the University of Maryland Medical Center differ in presenting information in the internal menus for the centers and programs dedicated to women’s heart disease and heart health.

Figure 7: Johns Hopkins Heart & Vascular Institute landing page navigation

Multiple internal navigations within the Johns Hopkins Heart & Vascular Institute landing page and the current placement of the Women’s Cardiovascular Health Center at the bottom of the navigation hierarchy might make it challenging for patients looking for this particular center. Since centers provide services for patients, the placement of “centers of excellence” under “clinical services” may complicate site visitors’ understanding of resources and the relationship between services and centers. These types of naming conventions should be examined more closely.

Figure 8: Johns Hopkins Heart & Vascular Institute landing page internal navigations
Figure 9: University of Maryland Medical Center Women’s Heart Health Program landing page navigations

Like its competitor, the University of Maryland Medical Center has multiple internal navigations, which may also be cumbersome to users. Patients and caregivers have too many options, which may make it difficult for them to understand what to do on the page. It may also make it challenging for them to complete key tasks (e.g. researching risk factors, finding a physician, scheduling an appointment).

The University of Maryland Medical Center’s “Centers and Services” label might resonate better with site visitors because they can find both centers and services under it; Johns Hopkins Medicine’s placement of Centers of Excellence under Clinical Services could be confusing. Patients typically go to a center to receive clinical services; they don’t often go to a clinical service to find a center.

The University of Maryland Medical Center Heart & Vascular Center’s use of “Services” for one of its navigations might not be intuitive to site visitors. “Services” plays the role of a catch-all for conditions (e.g. aortic disease), topics (e.g. women’s heart health) and treatment options (e.g. heart and lung transplant), which may make it challenging for visitors to find what they are looking for on this page.

More specifically, a patient or caregiver looking for women’s heart health may not necessarily expect to find a program under “Services.” These items could be surfaced more quickly and more efficiently organized within Centers and Services so that the pathways to Women’s Heart Health are more intuitive to patients and their caregivers.

We’ll know if this is the case after we test these health system site pages with real visitors.

Figure 10: Competitive analysis matrix

In sum

So how do you design a website for women who may have asymptomatic heart disease? How do you integrate the needs of potential patients who experience neck and back pain as a symptom of their heart disease? We can gain a better understanding of cases like these by mapping the user journeys of patients who exhibit non-traditional symptoms of heart disease, and of their caregivers, and by conducting competitive usability tests of these sites.

So what next?

Now that we’ve provided a cursory analysis and heuristic evaluation of the internal navigations of two health system sites, we’ll perform user tests on the websites to validate some of the hypotheses we discuss in this blog post and compare the content and design of the two health system sites. Keep an eye out for that post in a couple of weeks!

We want to make your project a success.

Let's Chat.
Mar 21 2017
Mar 21

This is the second in a series of articles, in which I'd like to share the most common pitfalls we've seen, so that you can avoid making the same mistakes when building your sites!

myDropWizard offers support and maintenance for Drupal sites that we didn't build initially. We've learned the hard way which site building mistakes have the greatest potential for creating issues later.

And we've seen a lot of sites! Besides our clients, we also do a FREE in-depth site audit as the first step when talking to a potential client, so we've seen loads of additional sites that didn't become customers.

In the last article, we looked at security updates, badly installed module code and issues with patching modules, as well as specific strategies for addressing each of those problems. In this article, we'll look at how to do the most common Drupal customizations without patching!

NOTE: even though they might take a slightly different form depending on the version, most of these same pitfalls apply equally to Drupal 6, 7 and 8! It turns out that bad practices are quite compatible with multiple Drupal versions ;-)

4. Patching when it's not necessary

In the last article, we talked about the potential problems with depending on patches and the extra work required to organize them to avoid those problems.

If you want to avoid doing all that extra work, it's best to avoid patching contrib modules and themes whenever possible!

However, most of the patched modules we see on the sites we audit are patched to customize things that could easily be customized WITHOUT patching the module!

Here are the most common patches we see:

  • CSS changes
  • HTML template changes
  • Form alterations

All of those things can be done in a custom theme or module - patching is totally unnecessary!

Since 99% of Drupal sites include a custom theme, we highly recommend putting those types of customizations in the theme. You could also put them in a custom module; however, we find that people who don't consider themselves "developers" are more comfortable working in the theme than in a custom module.

Here's how...

4a. CSS changes

If you only want to make a minor change, you can just add some new CSS rules to your theme's CSS styles which override the CSS from the module. The theme's CSS will always come after the module's CSS, so it'll override it (assuming it's not "less specific" than the module CSS).
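
To illustrate, suppose a module ships a rule like ".module-teaser { color: #333; }" (a hypothetical selector). Because the theme's stylesheet loads later, a rule of equal or greater specificity in your theme wins:

/* In the theme's CSS file: overrides the module's hypothetical rule,
   since theme CSS comes after module CSS in the cascade. */
.module-teaser {
  color: #0074bd;
  padding: 20px;
}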

But if you want to totally change the CSS from the module, you can copy the CSS file into your theme, "register" it and your theme's version will totally replace the version from the module.

In order to "register" a CSS file in Drupal 6 or 7, you simply add a line like "stylesheets[all][] = MODULE.css" to your theme's .info file and clear caches. See this documentation page for a more detailed explanation.

Replacing a CSS file in Drupal 8

Unfortunately, the process for completely replacing a CSS file in Drupal 8 is ... overly complicated. I sincerely hope that this is simplified in later versions or that tools appear to help make this easier. Going from adding a single line to this complex process is a clear loss for Drupal 8, where usually working in Drupal 8 is all wins. :-/

This documentation page covers it in detail, but I'm going to attempt to briefly explain here in order to get you started!

CSS files are registered in "libraries" in the MODULE.libraries.yml file. So, first we need to find the library entry for the CSS file we want to replace. If we wanted to replace "node.preview.css", we'd look in "node.libraries.yml" in the "node" module and see:

drupal.node.preview: # The library name
  version: VERSION
  css:
    theme: # The type of the CSS file - take note of this!
      css/node.preview.css: {} # Here you can see the CSS file we want!
  js:
    node.preview.js: {}
  dependencies:
    - core/jquery
    - core/jquery.once
    - core/drupal
    - core/drupal.form

So, we need to edit our THEME.info.yml and add the following:

libraries-override:
  node/drupal.node.preview: # MODULE/LIBRARY_NAME
    css:
      theme: # this has to come from the MODULE.libraries.yml
        # The original CSS file name mapped to the file in the theme
        css/node.preview.css: css/node.preview.css

Notice that some information from the MODULE.libraries.yml needs to be transferred into the THEME.info.yml (which is what makes this such a pain):

  • The module and library name
  • The CSS file type - 'theme' in this case
  • The path to the original CSS file in the module (mapped to the path to the new CSS file in the theme)

Then you just clear the Drupal caches and Drupal will use your CSS file instead of the module CSS file everywhere it was originally used!

4b. HTML template changes

All of the HTML that Drupal generates comes from templates in Drupal 8 (*.html.twig files), and most of it in Drupal 6 and 7 too (*.tpl.php files).

To customize one of these templates, simply copy it from the module into your theme, modify it, and clear the Drupal cache. The next time you load the page, Drupal will use your version instead!

4c. Form alterations

In Drupal, you create forms in PHP by making special arrays that represent the form elements. While a module will create the original form array, there is a mechanism for other modules and the theme to alter forms before they are shown to the user!

Unfortunately, doing this requires understanding a little PHP and referring to lots of docs on the Form API. However, if you were able to patch a module to alter the form, it's only a little more work to alter one in your theme!

Of all the topics discussed so far this is the most "developer-y" and I don't want to overwhelm you, so we're going to cover the "cheaters" process for making form alterations! ;-)

  1. Find the form ID of the form you want to alter. You can view the form in the web browser and then look at the HTML source for a hidden "form_id" input. For example, the login form's id is "user_login" (screenshot: the user login form and the source code showing the form ID).
  2. Install the Devel module. Never leave this installed on live sites! It's not a security vulnerability, but an attacker could potentially use it to escalate their access after exploiting another vulnerability.
  3. Edit (or create) template.php (D6 & 7) or THEME.theme (D8). If you're creating a new file, be sure to put '<?php' as the first line!
  4. Add a function named like THEME_form_FORM_ID_alter. So, if your theme is "mytheme" and you're altering "user_login", then it'd be:
    <?php
    function mytheme_form_user_login_alter(&$form, &$form_state) {
      dsm($form);
    }
    ?>
    
  5. Clear caches and reload the form. You'll see a cool, interactive array explorer which allows you to drill down into the form array and see what's in there. You can use this to find the thing you want to change (screenshot: exploring the form array of the login form via the Devel module).
  6. Add PHP code to make our alteration. For example, let's say we want to change the "Username" field to "Nick name":
    <?php
    function mytheme_form_user_login_alter(&$form, &$form_state) {
      $form['name']['#title'] = t('Nick name');
    }
    ?>
    
  7. Reload the page and see your change (screenshot: an alteration to the user login form without any patches).

Note: In the code above, "Nick name" is wrapped in the t() function -- this enables translation. If your site doesn't serve pages in multiple languages, you don't need to use it, but it's considered best practice. Who knows if you'll need to support multiple languages later?

Note 2: The code above WILL work with Drupal 6, 7 & 8, but it technically doesn't follow coding standards for Drupal 8, which should declare the function like this instead:

<?php
use \Drupal\Core\Form\FormStateInterface;
 
function mytheme_form_user_login_alter(&$form, FormStateInterface $form_state) {
  $form['name']['#title'] = t('Nick name');
}
?>

In this article, we covered the 4th pitfall among the 7 we're going to cover in this 3-part series. If you haven't already, please check out the first article!

Have you encountered Drupal sites with unnecessary patching? Are there any other important examples that you've encountered when building a Drupal site or reviewing a site built by someone else? Please write a comment below!

Or, if you'd like us to audit your Drupal site, please contact us about a FREE site audit!

Mar 21 2017
Mar 21

We once offered you a selection of free responsive Drupal themes, as well as advanced tutorials on creating themes and subthemes. Today, our focus will be very specific: we will discuss Drupal themes for construction websites.

Construction industry and Drupal web development have a lot in common. Drupal websites, like good buildings, can be very solid, beautifully designed, and convenient in every way. For them, architecture also matters! And they can also be built brick by brick — using Drupal modules and themes.

There are special Drupal themes perfectly suited for construction businesses, so you are welcome to check out our collection. We have included Drupal 7 and Drupal 8 themes with plenty of useful features. They help construction companies easily and elegantly showcase their offers by providing ready page templates, customizable to anyone’s liking.

These themes are based on the modern, fast and lightweight Bootstrap framework. Some have Drupal Commerce or Ubercart support for online shopping.

The themes that we have gathered are all responsive, so their design will neatly adapt to any device screen. Many of them are also Retina-ready, which basically means that the designs look great on high-definition phones or tablets, with no pixelation even when zooming in.

Well, examples are worth a thousand words, so let’s see right now what has been built by drupalers for builders!

Some great responsive Drupal themes for builder and construction websites

Yellow Hats

Yellow hats are something you are likely to see on the demo of every theme in the building category. So let a modern theme with exactly that name be the first on our list! It is extremely flexible and customizable, responsive, Retina-ready, built on Bootstrap 3.x, and offers an impressive list of features: Parallax effect, HTML and CSS3 validation, Google Maps, 11+ homepages, 250+ HTML pages, an online shop thanks to Ubercart support, a wealth of headers, footers, blocks and more.

Great Drupal themes for construction websites

Housebuild

You could also give a try to Housebuild, a responsive and Retina-supportive Drupal theme, good for construction, renovation, electric works, and other business websites. This theme is clean and professional, easy to use and to customize. It features CSS3 animations, Google Maps integration, Bootstrap 3.x support, sliders, unlimited colors, and more.

Great Drupal themes for construction websites

Builder

The Builder Drupal theme is close to the above described Housebuild theme. It is also responsive, supports Retina displays, and offers a similar set of useful features, as well as Instagram integration and an online shop thanks to Drupal Commerce support.

Great Drupal themes for construction websites

Construction

Here is an easily customizable Drupal theme, great for websites of construction and related companies. It has a responsive layout, a special drag-and-drop tool to create layouts (Superhero framework), and a wealth of customization opportunities. The “Who we are”, services, shop (Drupal Commerce), news and other blocks are at your disposal.

Great Drupal themes for construction websites

Darna

The Darna Drupal 8 theme is meant for construction, architecture, plumbing and renovation companies, as well as other businesses. It is responsive and ready for Retina displays. While easy to use, Darna can also boast an impressive list of features like reusable code, W3C validation, pixel-perfect design, Google Maps, Bootstrap 3.x, multiple header and homepage options, shop pages (Ubercart), and more.

Great Drupal themes for construction websites

Gates

You could give a try to this multipurpose responsive Drupal 8 theme with support for Retina displays, used for construction as well as other types of websites. It lets you present a portfolio, a shop thanks to Ubercart support, news, services, etc. The Gates theme is based on Bootstrap 3.3.5, has over 1000 icons, plenty of layout options, unlimited color schemes and much more.

Great Drupal themes for construction websites

Habitus

The Habitus theme for Drupal 8 means 14 clean and modern responsive HTML pages. It is based on a 12-column Bootstrap grid, has variations for homepage and “coming soon” page, a portfolio, a blog, a gallery, a 404 page and much more to meet the needs of your construction company or other business.

Great Drupal themes for construction websites

Construct

Here is another cool example for construction, renovation, electricity, isolation, maintenance and business companies. It is a responsive, Retina-ready and SEO optimized Drupal 7 theme on the basis of Bootstrap 3. The Construct theme has 11 custom blocks, valid HTML5 & CSS3, Twitter Feed, Google maps, etc.

Great Drupal themes for construction websites

Constractor One

Meet another nice responsive theme for Drupal 8, which will be very useful for construction and renovation companies thanks to its limitless customization options. Constractor One offers drag-and-drop layouts, more than 580 icons, Parallax effect, HTML5 and CSS3, and support for video, RTL (right-to-left text direction) and more.

Great Drupal themes for construction websites

Structure

You could also take a look at the modern responsive and Retina-supportive Drupal theme called Structure. Definitely, structure is its strong point, because it offers +15 homepage and +3 header styles, Mega Slider, all pages required for building companies (about, prices, services, etc.), Drupal Commerce support and so on.

Great Drupal themes for construction websites

These are just some examples of excellent Drupal themes for construction websites. If these or other ready themes do not fully cover your needs, our developers will build a unique one for you. Builders will always understand each other, whether they create houses or websites and themes. Let’s build awesome things together!

Mar 21 2017
Mar 21
Following on from my previous blog posts around how Drupal and open-source are growing in China, we must start looking at how the overall ecosystem can be nurtured to turn one of the most populous countries in the world on to Drupal.
Mar 21 2017
Mar 21

We are now very deep into our »Druplicon marathon«. After presenting you with Drupal Logos in Human and Superhuman forms, Drupal Logos as Fruits and Vegetables, Druplicons in the shapes of Animals and Drupal Logos taking part in outdoor activities, it's now time to look at Drupal Logos representing national identities.

A sense of belonging to one nation can be very strong, and national identities are therefore also present in various Druplicons. The latter mostly represent active or inactive Drupal Groups from specific countries. These groups connect Drupalistas from those countries, so in most cases the Druplicons contain the colours of the countries' flags. Not in all cases, of course, but mostly they do. Furthermore, national identities are also represented by some Drupal Camps. Here's what we have found.

Druplicon Bolivia

Drupal Logo Bolivia

Druplicon France

Druplicon France

Druplicon Italy

Druplicon italy

Druplicon Denmark

Drupal Logo Denmark

Druplicon Tunisia (Drupal Camp Tunis 2015)

Druplicon Tunisia

Druplicon Senegal (Drupal Camp Dakar 2011)

Druplicon Senegal

Druplicon Austria (Drupal Austria Roadshow)

Drupal Logo Austria

Druplicon Belgium

Drupal Logo Belgium

Druplicon Uganda

Druplicon Uganda

When we finish a blog post about Druplicons, we are in practically all cases sure that we have hardly missed any. Still, we don't doubt that you will manage to find missing Druplicons in this specific area, so we encourage you to look for them. By now, you have proven yourselves by finding many missing Drupal Logos.

Well, this time, despite our research, we are almost certain that we have missed some Druplicons representing national identities. So, if you find any missing Drupal Logos that represent national identities, post them on our Twitter account and you'll be mentioned in our next blog post about Druplicons. Sadly, last time no one found any of the Druplicons taking part in outdoor activities. Better luck this time!

Mar 21 2017
Mar 21

This is the third part of the blog series about Drupal 8 security features. We have already covered the Drupal community’s general approach to security, cross-site scripting and SQL injection. I strongly recommend reading the first and second parts if you haven’t done so yet.

Cross site request forgery (XSRF)

Cross site request forgery is a vulnerability that allows attackers to transmit unauthorized commands from a 3rd party site to a site that trusts the given user. Its results can be similar to XSS, but it works in a slightly different way.

Let’s assume there existed a URL that allowed users to delete a Drupal node without confirmation. In this case an attacker could build a web page that tries to trick someone with admin permissions on the attacked site into clicking a link to that URL. This is where Drupal’s confirmation page comes into the equation.

Delete confirmation page.

Looks familiar?

Yes, this page does not exist to annoy you and your clients since “it introduces a UX regression due to an extra click”. It is actually there to protect you from the XSRF attacks. Thank you Drupal!

Another variant of an XSRF attack goes through a form. Let’s assume an attacker could trick an admin user into making an unauthorized POST request to the permissions form in Drupal. This could have disastrous results.

In order to prevent that, Drupal’s Form API generates a unique form token (using a SHA256 HMAC and a secret key) each time a form is loaded or updated. If the token is missing or has changed, Drupal won’t accept the submission of that form. This ensures that the user who submitted the form actually loaded it too.
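
The idea behind such tokens can be sketched in a few lines of plain PHP (a simplified illustration only - Drupal's real implementation lives in its CsrfTokenGenerator service and differs in detail):

<?php
// Simplified sketch: tie a token to the form and the user's session.
function sketch_form_token($form_id, $session_id, $secret) {
  return hash_hmac('sha256', $form_id . ':' . $session_id, $secret);
}

// hash_equals() compares in constant time to prevent timing attacks.
function sketch_token_valid($token, $form_id, $session_id, $secret) {
  return hash_equals(sketch_form_token($form_id, $session_id, $secret), $token);
}
?>

Since a 3rd party page can see neither the victim's token nor the secret key, it cannot compute a valid token, so its forged submission fails the check.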

Drupal’s general recommendation is to never use custom HTML forms and to use the Form API instead.

Example

Someone tries to make a POST request on the admin user’s behalf. But since that person doesn’t know the valid value of the form token, they try to submit without it or with some arbitrary value.

Someone tampered with the form token.

Drupal detects that and refuses to accept the submitted data:

Drupal refuses to accept unsecure data.

That’s it for today. Next time we’ll see how Drupal sanitizes user-submitted data in forms. Follow us on Twitter to stay tuned!

Do you need a security review of your Drupal site or module? Get in touch!

Mar 21 2017
Mar 21

Have you ever needed to persist some irregular data relating to a user account? Things like user preferences or settings that are kinda configuration but not really? Storing them as configuration means having to export them, and that’s not an option. Storing them in State is an option, but not a good one, as you’d have to maintain a State value map for each of your users - and who wants to deal with that…

In Drupal 7, if you remember way back when, we had this data column in the user table which meant that we could add to the $user->data array whatever we wanted and it would get serialised and saved in that column (if we saved the user object). So what is the equivalent of this in Drupal 8?

I’m happy to say we have the exact same thing, but of course much more flexible and properly handled. So not exactly the same. But close. We have the user.data service provided by the User module and the users_data table for storage. So how does this work?

First of all, it’s a service. So whenever we need to work with it, we have to get an instance like so:

/** @var \Drupal\user\UserDataInterface $userData */
$userData = \Drupal::service('user.data');

Of course, you should inject it wherever possible.
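
A minimal sketch of constructor injection could look like this (the class and its wiring in a services.yml file are hypothetical):

<?php
use Drupal\user\UserDataInterface;

class MyModuleSettings {

  /** @var \Drupal\user\UserDataInterface */
  protected $userData;

  // The user.data service is passed in by the service container,
  // as declared in this hypothetical module's services.yml file.
  public function __construct(UserDataInterface $user_data) {
    $this->userData = $user_data;
  }

}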

The resulting object (by default) will be UserData which implements UserDataInterface. And using this object we can store as many pieces of data or information for a given user as we want. Let’s explore this a bit to learn more.

The interface has 3 methods for handling data: get(), set(), delete(). But the power comes from the method arguments. This is how we can store some data for user 1:

$userData->set('my_module', 1, 'my_preference', 'this is my preference');

So as you can see, we have 4 arguments:

  • The module name we want this piece of data to be findable by
  • The user ID
  • The name of the piece of data
  • The value of the piece of data

This is very flexible. First, we can have module-specific data - no more colliding with other modules over stored user preferences. Stay in your lane. Second, we can have multiple pieces of data per user, per module. And third, the value is automatically serialised for us, so we are not restricted to simple strings.

Retrieving data can be done like so:

$data = $userData->get('my_module', 1, 'my_preference');

This will return exactly 'this is my preference' (in our case). Deserialisation also happens automatically if your data got serialised on input.
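
Because serialisation is automatic, values are not limited to strings. For instance (the module name and keys here are made up):

// Store a structured preference array for user 1.
$userData->set('my_module', 1, 'notifications', [
  'email' => TRUE,
  'frequency' => 'weekly',
]);

// Comes back as the same array, deserialised automatically.
$notifications = $userData->get('my_module', 1, 'notifications');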

Deleting data is just as easy:

$userData->delete('my_module', 1, 'my_preference');

Moreover, most of the arguments of the get() and delete() methods are optional. Meaning you can load/delete multiple pieces of data at once. Check out UserDataInterface to see how omitting some arguments can return/delete sets of records rather than individual ones.

And that is pretty much it. And in true Drupal 8 form, there’s nobody stopping you from overriding the service and using your own UserDataInterface implementation that has that little extra something you are missing. So no, you probably don’t have to create that custom table after all.

Mar 21 2017
Mar 21

We have been running containers in production for more than a year now and want to share some of the lessons learnt, by open sourcing our container suite.

Why?

So why another container suite? Haven't you already written one (https://github.com/previousnext/docker-containers)? We have been running containers since Docker 0.7; at that time we were only running them as dev/test environments which attempted to mimic what we ran on production VMs, and the easiest way forward was to write containers which felt familiar to us.

With our adoption of Kubernetes over the last year we were able to rethink the way that we ran containers, with the main goal of thinking about our containers as processes and not VMs.

Today we are open sourcing the containers that run on our internal platform, Skipper.

We are open sourcing these containers because we want to share some of the lessons learnt during this process.

Strong Foundations

I wanted to talk about this first because I believe it is the most important.

At every opportunity we get, we leverage the "official" Docker Hub base images.

This means a lot less work for us when a new version of PHP comes out, and we only add what we want on top.

We also get the benefit of Docker Hub's security scanning of these base images.

Health Checks

Health checking is very important for running production applications, and even more important when running containers.

These checks allow container frameworks to call an HTTP endpoint, get a response code and decide whether your application should be restarted or removed from the load balancer.
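
In Kubernetes, for example, such a check might be wired up as a liveness probe along these lines (a hypothetical fragment, not our actual manifest):

# Hypothetical Kubernetes liveness probe hitting a Drupal health check.
livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3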

Our containers ship with 2 Drupal health checks.

These health checks can only be accessed from behind the load balancer (no public requests) and ask the application the following:

  • Can I connect to the database?
  • Can I bootstrap Drupal?
  • Can I write to the filesystem?

If a developer wants to provide their own set of health checks, they include them when we deploy a project (bake a new image). These checks are added to the "/var/www/healthz" folder and are accessible via the same URL path, e.g. "/healthz/custom.php".
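
A custom check itself only needs to answer with the right HTTP status code. A rough sketch (hypothetical code, not our actual checks) might look like:

<?php
// Hypothetical /healthz/custom.php: 200 when healthy, 503 otherwise,
// so the probe can restart the container or pull it from the balancer.
try {
  $pdo = new PDO(getenv('DB_DSN'), getenv('DB_USER'), getenv('DB_PASS'));
  $pdo->query('SELECT 1');

  if (!is_writable('/var/www/files')) {
    throw new RuntimeException('Filesystem is not writable.');
  }

  http_response_code(200);
  print 'OK';
}
catch (Throwable $e) {
  http_response_code(503);
  print 'UNHEALTHY';
}
?>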

Resizing

Typically, when a container gets run it inherits the static configuration which was baked into it at build time. If you provide that container with more memory, it goes to waste because you have not reconfigured your processes to consume all that new memory. Sure, you can lock down your containers and document that they run with X amount of memory and CPU, but by doing that you are also restricting developers and their requirements.

To solve this we wrote a tool called: Tuner

Tuner is a simple app which does the following:

  • Takes in 3 environment variables TUNER_MAX, TUNER_PROC and TUNER_MULTIPLIER (for over allocation of resources)
  • Returns a configuration to stdout, which you can write wherever you like

eg.

$ export TUNER_MAX=1024
$ export TUNER_PROC=64
$ export TUNER_MULTIPLIER=3
$ tuner --conf=apache > /etc/apache2/mods-enabled/tuner.conf

This allows container frameworks to pass in information about the size of the container, and use tuner to write the required configuration.

To use this with our containers, it's as simple as setting those 3 environment variables at runtime:

$ docker run -d -e TUNER_MAX=1024 -e TUNER_PROC=64 previousnext/7.x-dev

Developer Containers

All our "dev" containers extend the corresponding production container to add extra tools, languages and scripts.

eg. php7.x-dev inherits from php7.x

This means what we run locally is very close to what runs in production, while keeping our production containers as slim as possible.

Conclusion

We have come a long way since writing our first set of containers; at the time they looked a lot more like VMs (they got the job done, but not very elegantly).

This container set is PreviousNext's way of putting our best foot forward: showing the community how we run containers, showing off a couple of our ideas and getting feedback.

Mar 20 2017
Mar 20

Drupal Modules: The One Percent — Alert to Administrator (video tutorial)

[embedded content]

Episode 24

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll take a look at Alert to Administrator, a module which displays an alert every time a user logs in as an administrator.

Mar 20 2017
Mar 20

As we approach the release of Drupal 8.3.0 and start working on 8.4.x, I want to take a look at technical debt.

Here is a quick overview of the past 18 months of technical debt as expressed in major and critical bugs (other technical debt isn't represented by these numbers):

  • 800 major bugs fixed
  • 200 critical bugs fixed
  • 21 critical bugs currently open
  • Over 600 major bugs currently open

Sometimes code is buggy in itself: no-one spots the bug in review, test coverage is incomplete, and the bugs get uncovered after commit.

However, sometimes there is code which all works independently, but the interactions of a complex system like Drupal result in critical bugs arising seemingly out of the ether. As an example, let's talk about a fifteen year old bug in a sixteen year old module (https://www.drupal.org/node/2858431) which got written up as a new core bug report just this week.

While this may be the oldest undiscovered critical bug in core, it points to a wider mechanism whereby technical debt is often introduced - simply via the addition of incomplete features over time.

Here's Book module's birthday commit in March 2001.

commit 6275348098baa21523f31d8c228cd7fa0dd64b2d
Author: Dries Buytaert
Date:   Sat Mar 24 16:36:13 2001 +0000

    the "faq module" and the "documentation module" are going to be bundled
    into a much more powerful and easier to maintain "book module": each "page"
    in the big "drop.org/drupal book" is a node and everyone with a user account
    can suggest new pages or updates of existing pages.

Book module allows you to organise nodes into a hierarchy, which allows for a table of contents and previous/next links.

Here's the commit that added revision support to nodes in November 2001.

commit a2e6910902bfb1263e1b6363e2c29ede68f89918
Author: Dries Buytaert
Date:   Sat Nov 3 18:38:30 2001 +0000

   - Made the node forms support "help texts": it is now possible to configure
     Drupal to display submission guidelines, or any other kind of explanation
     such as "NO TEST POSTS", for example.

   - Added node versioning: it is possible to create revisions, to view old
     revisions and to roll-back to older revisions.  You'll need to apply a
     SQL update.

     I'm going to work on the book module now, so I might be changing a few
     things to enable collaborative, moderated revisions - but feel free to
     send some first feedback, if you like.

   - Added some configuration options which can be used to set the minimum
     number of words a blog/story should consist of.  Hopefully this will
     be usefull to stop the (almost empty) test blogs.

   - Various improvements:
      + Fine-tuned new node permission system.
      + Fine-tuned the functions in node.inc.
      + Fine-tuned some forms.
      + XHTML-ified some code.

One important aspect of this is that the storage of that hierarchy happens in a dedicated table, controlled by the book module, which has not been kept up-to-date with changes in node storage.

I don't know if there was a time between 2001 and 2017 when book storage supported revisions, but it didn't with the original commit, and it still doesn't now.

So now we have a node storage which maintains each version of a node, allowing you to roll back to earlier versions (and I never realised this was added in 2001 until today, that's impressive) combined with a book storage that is versionless.

Once those two features were combined, there's already a bug, or at least a limitation:

  1. Create a new book node A as a child of a different node B, saving revision 1
  2. Edit node A to change its parent from B to C, creating a new revision 2
  3. Revert node A back to revision 1 - the parent node will still be C

I wasn't able to find any existing bug report for this, but it persists in Drupal 8 today. Changing book parents is quite rare, and reverting revisions is also quite rare, so it's possible no-one noticed in 16 years.
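The structural conflict behind this limitation can be sketched in a few lines. This is a minimal illustration in plain Python, not Drupal's actual APIs or schema: a versioned store for node fields alongside a single-row, versionless store for book parents.

```python
# Sketch of the data-model problem: node bodies are versioned, but the
# book hierarchy is a single versionless row per node, so reverting a
# node revision cannot restore its earlier book parent.

node_revisions = {}   # (nid, vid) -> field values, one entry per revision
book_parents = {}     # nid -> parent nid (no vid column at all)

def save_node(nid, vid, body, parent):
    node_revisions[(nid, vid)] = {"body": body}
    book_parents[nid] = parent          # overwrites; the old parent is lost

def revert_node(nid, vid):
    # Node fields come back from the revision table...
    restored = dict(node_revisions[(nid, vid)])
    # ...but the book table has no revision to read from.
    restored["parent"] = book_parents[nid]
    return restored

save_node(nid=1, vid=1, body="first draft", parent="B")
save_node(nid=1, vid=2, body="second draft", parent="C")

reverted = revert_node(nid=1, vid=1)
print(reverted["body"])    # "first draft" - the versioned field is restored
print(reverted["parent"])  # "C" - the parent stays at its latest value
```

Any storage that pairs a versioned table with an unversioned one in this way will show the same revert behaviour, which is exactly the shape of the taxonomy hierarchy issue discussed later in this post.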

We then skip forward 11 years, to adding support for draft revisions to the core entity API (there had been various implementations of this in contrib before this commit, but they all required extensive workarounds).

commit e203b73b6292adb19fb9197d532dd2b928133163
Author: webchick
Date:   Thu Sep 6 13:32:19 2012 -0700

   Issue #218755 by jstoller, Gábor Hojtsy, stevector, mradcliffe, agentrickard, catch, Crell: Added Support revisions in different states.

Now you can save a new draft revision, without changing the published one, then publish that draft later on. However, it's only API support, so nothing user-facing changes.

Skip forward another four years to 2016, and the addition of Content Moderation as an experimental module.

commit bc00f081e6b8e35e3b7ee57eb963b7e5b92593a2
Author: Nathaniel Catchpole
Date:   Mon Aug 8 13:26:31 2016 +0100

   Issue #2725533 by timmillwood, alexpott, amateescu, webchick, dixon_, larowlan, dawehner, catch, Crell, Bojhan, jibran, Wim Leers, agentrickard, Berdir: Add experimental content_moderation module

Content Moderation adds a user interface for creating draft revisions - the first time it's been possible to do so in core since the API capacity was added.

Additionally, there is workflow and access support, so that some users can only create draft changes while others review and publish them.

This then adds two new bugs on top of the 15 year old issue with reverting revisions:

  1. Changing the book parent when creating a draft revision unexpectedly changes the book outline globally as a side effect, and deleting the draft won't put it back either.
  2. Because of the first issue, where workflow access is set up so that some people are only allowed to create drafts, this becomes an access bypass: those users can change the published book structure without permission to publish.

If we look back at the original revisions commit in 2001, there's this note:

     I'm going to work on the book module now, so I might be changing a few
     things to enable collaborative, moderated revisions - but feel free to
     send some first feedback, if you like.

What we're doing with Drupal 8 and the Workflow Initiative is adding collaborative, moderated revisions to core fifteen years after it was first conceived. There are similar revision issues with menu links and path aliases, with histories going back more than ten years, also discovered within the past year as a result of working on the Workflow Initiative.

So when looking at where bugs are introduced, how can we deal with this? If we had a time machine to go back to 2001, would we tell Dries not to introduce revision support for nodes? I don't think we would, since this is very important to allow collaborative editing without data loss. Do we blame initiatives like Workflow when they expose 15 year old technical debt added before most or all participants had even heard of Drupal? I don't think we'd do that either.

What we can do, though, is look for steps that might have been taken either to avoid introducing the bug in the first place, or to mitigate it since - and looking at some more history of Book module, we can see tendencies towards both. Reviewed together, these issues point to an approach for anticipating and removing technical debt more generally.

As early as 2005, there were <a href="https://www.drupal.org/node/23730">suggestions to merge forum and Book modules</a> into a generic ‘hierarchical entity outline’ storage.

In 2007, Book module was <a href="https://www.drupal.org/node/146425">refactored to rely on the menu link hierarchy</a> rather than having its own separate implementation.

In 2008, there was work to <a href="https://www.drupal.org/node/344019">refactor the hierarchy storage of taxonomy module into an improved and generic subsystem for managing hierarchy</a>. This would then have been applied to the book and menu systems, however the issue was not completed and remains open.

In 2011, there was <a href="https://www.drupal.org/node/1261130">a proposal to remove the Book module from core</a> and let it be maintained in contrib instead. This was not done and the issue remains open.

In 2013, <a href="https://www.drupal.org/node/2100577">Book's hierarchy storage had to be factored back out into a custom implementation</a> as a side-effect of menu link system refactoring in Drupal 8 (which itself was a side-effect of adding a new routing system in 2011 that had not taken into account interactions with the menu link system, and by extension, book hierarchy).

If we look at these issues, both completed and open, there's an ‘alternative timeline' of the Book module, pointing towards either consolidation of Drupal core's three entity hierarchy storage implementations into a single implementation, or removing the Book module from core in order to have just two entity hierarchy systems to deal with. While refactoring and removal are very different approaches, they both stem from the understanding that it is hard work to maintain three separate entity hierarchy storage implementations in a single software product.

The Drupal 8 entity system work consolidated previously disparate implementations for user accounts, nodes, custom blocks, and taxonomy terms to rely on a single unified storage mechanism. This work does not immediately result in user-facing improvements. It does however make user-facing improvements much easier to apply across all those entity types once completed. Taxonomy terms will soon have revision support (and by extension, draft and workflow support) for the first time, relying on Drupal 8's entity storage, but the storage of taxonomy hierarchy has the same inherent limitations we just saw with Book module. Adding revisions to taxonomy module mirrors the addition of revisions to nodes 15 years ago, including the conflict with an existing unversioned hierarchy storage.

Now that we understand the process by which critical technical debt was introduced to Book module and nodes fifteen years ago and stayed hidden most of that time, we can apply the same thinking to the addition of revisions to taxonomy module now. I hope we can broaden this analysis to recognise and anticipate technical debt much earlier in the future.

Mar 20 2017
Mar 20

Drupal security audit

Security of a website is a crucial thing that sometimes does not receive the attention it should.

Today, I’d like to share the routines we apply when checking up security of a Drupal-powered website. For the most part, this article is a summary of the report Dmitry Kochetov, our Drupal security specialist, made at DrupalCamp Krasnodar 2016.  

There are 3 main reasons behind requests to inspect the security of a Drupal site. They are:

  1. viruses (a browser or the hosting provider informed the owner of the site of malicious code);
  2. switching developers (a new company is picked to support the site and the old one is ditched - a strange reason, but it is there);
  3. no core updates for a long time (obvious).

The result of the audit is a report covering all vulnerabilities found on the site.
Since we focus on Drupal, the routines and procedures take into account the peculiarities of this CMS, like the DB and file structure, site settings, etc.

There are 10 steps to Drupal site security audit as performed by our team:

  1. Website isolation.
  2. Search for malicious code.
  3. Search for changes in Drupal core and modules.
  4. Search for new files in Drupal core and modules directories.
  5. Search for code needing security check.
  6. Search for PHP and JS in DB.
  7. Search for queries uncharacteristic for Drupal.
  8. Checking installation with security_review.
  9. Checking for susceptibility to Drupalgeddon.
  10. Checking web server settings.

1. Website isolation

We need to isolate the website to avoid changing files and database while performing the audit. Skip this step and you may end up with more files containing malicious code than there were at the outset.

There are two typical isolation options: 

  1. a docker container;
  2. a virtual server. 

Pick whichever you like; from the point of view of results achieved, they are much the same.

2. Search for malicious code

There are two products we use when searching for malicious code in Drupal website files:

  1. AI-Bolit, virus and malicious scripts scanner (server-side);
  2. Linux Malware Detect (LMD), a scanner for Linux that searches for web shells, spam bots, trojans etc.

My personal grand prix goes to AI-Bolit, since it does a better job finding malicious PHP and Perl code. But, again, your choice is your choice.

3. Search for changes in Drupal core and modules

To search for changes in Drupal core and modules, we install the hacked module together with diff (manually or via drush) and run it to get a report.

Important: the hacked module does not scan the sites/all/libraries directory; you should check it manually. Here is how you can do that:

  1. download the library;
  2. launch diff:
    diff path_to_site/sites_default/libraries/name_library/ path_to_download_library/
    
  3. analyze the changes found.

Nothing really complicated.

4. Search for new files in Drupal core and modules directories

At this step, we look for files that should not be part of the original core and modules downloaded from drupal.org.

To run the check, we use the bash script drupal_find_new_files.sh. It returns OK when nothing unexpected is found (theme files are an exception) and logs any new or differing files for review. The script takes three arguments:

  1. path to modules directory;
  2. path to site’s root directory;
  3. path to theme directory.

Launch the script in path_to_site/tmp/hacked_cache, since this is where the module archives should be.

Once the script is through with the files, see report.log and check each file found with vi or less.

The source code of this script is available on GitHub: https://github.com/initlabopen/drupal-security-audit
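The core idea of the script can also be expressed in a few lines of Python. This is a hedged, simplified stand-in for drupal_find_new_files.sh, not the actual script: walk a pristine copy of a module (as downloaded from drupal.org) and the copy deployed on the site, and report files that exist only on the site. The paths in the usage comment are hypothetical examples.

```python
# Simplified "new files" check: compare a pristine module download
# against the copy deployed on the site and list files that appear
# only in the deployed copy (candidates for manual review).
import os

def new_files(pristine_dir, site_dir):
    """Return relative paths present under site_dir but not under pristine_dir."""
    def listing(root):
        found = set()
        for base, _dirs, files in os.walk(root):
            for name in files:
                found.add(os.path.relpath(os.path.join(base, name), root))
        return found
    return sorted(listing(site_dir) - listing(pristine_dir))

# Usage (hypothetical paths):
# for path in new_files("/tmp/views-7.x-3.25/views", "/var/www/sites/all/modules/views"):
#     print("NEW:", path)  # inspect each hit with vi or less
```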

5. Search for code needing security check

We need this step because the hacked module report does not show all changes made to custom and dev modules. This step produces a list of code that requires a manual security check.

6. Search for PHP and JS in DB

This is when we search for code injections into the DB. First off, you need a DB dump:

mysqldump -uuser -ppassword db > db.sql

Then you can generate a number of reports:

cat db.sql | grep '<?php[^<]*>' > report_php.log 
cat db.sql | grep '<script[^<]*>' > report_script.log
cat db.sql | grep '<object[^<]*>' > report_object.log
cat db.sql | grep '<iframe[^<]*>' > report_iframe.log
cat db.sql | grep '<embed[^<]*>' > report_embed.log 

You can use any editor to analyze the reports. We prefer vi, since the search function is faster in this editor. In each report, we search for keywords from commands mentioned above, e.g. in report_embed.log we search for “embed” and look at the surroundings of each entry found. Sticking to the example, it can be some object between <embed></embed> tags.
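The same checks can be done in one pass with a small script that also records line numbers, which makes the follow-up review in an editor quicker. This is an optional convenience sketch, not part of the original toolset; the patterns simply mirror the grep commands above.

```python
# Scan a SQL dump line by line for the same suspicious tags the
# report_*.log greps look for, reporting the pattern name and line
# number of each hit so it is easy to jump to in an editor.
import re

PATTERNS = {
    "php": re.compile(r"<\?php[^<]*>"),
    "script": re.compile(r"<script[^<]*>"),
    "object": re.compile(r"<object[^<]*>"),
    "iframe": re.compile(r"<iframe[^<]*>"),
    "embed": re.compile(r"<embed[^<]*>"),
}

def scan_dump(lines):
    """Yield (pattern name, line number, line) for each suspicious hit."""
    for lineno, line in enumerate(lines, start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                yield name, lineno, line.strip()

# Usage (hypothetical dump file):
# with open("db.sql") as dump:
#     for name, lineno, line in scan_dump(dump):
#         print(f"{name}: line {lineno}: {line[:120]}")
```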

7. Search for queries uncharacteristic for Drupal

At this stage, we analyze web server logs and look for queries you would not associate with a Drupal-powered website. This is a three-step procedure: cd to the logs directory, run the commands below, then analyze the results.

Scooping queries to PHP files:

cat log_access.log | awk ' { if($9==200)  print $7 } ' |  grep \.php | grep -v "index\.php" > php.log

Check which PHP files were executed successfully, see if any of them are on the list compiled at step 2 (the malicious code scan), then open each file and analyze its contents.

Scooping CGI queries:

cat log_access.log | awk ' { print $7 } ' | grep "\.cgi" > cgi.log

Scooping POST-queries:

cat log_access.log | grep POST | awk ' { print $6 $7 } ' | sort | uniq -c | sort -rn > post.log 

Here, we check successful POST queries to the website (the queries can target PHP files as well). The routine is the same as above: see if any of them are on the list compiled at step 2 (the malicious code scan), then open each file and analyze its contents.

8. Checking installation with security_review

As the name of the step implies, this is when we check the installation with the security_review module and fix the issues it flags by following its recommendations.

Install security_review manually or via drush, launch it, get the report, check it and fix everything step-by-step.

9. Checking for susceptibility to Drupalgeddon

If the website’s core has not been updated for a long time, it makes sense to check whether it is susceptible to Drupalgeddon. This vulnerability enjoys some outstanding coverage online. We follow the routine described below:

  1. Checking with https://www.drupal.org/project/drupalgeddon

  2. Search for files:
    find ./ -type f -name "*.php" -exec grep -l '$form1=@$_COOKIE' {} \; >> report.log
    
  3. Search through the DB:
    SELECT * FROM menu_router WHERE access_arguments LIKE '%form1(@$_COOKIE%';
    SELECT * FROM role WHERE name='megauser';
    SELECT * FROM users WHERE name='drupadev';
    SELECT * FROM users WHERE name='drupaldev';
    SELECT * FROM users WHERE name='drupdev';
    

10. Checking web server settings

The web server environment for a Drupal website is yet another topic covered extensively online. On drupal.org, there are many articles dealing with Drupal security: https://www.drupal.org/security/secure-configuration

We check the following:

This is it, the audit is over and it is time to make the report.

I hope you find this article useful. I am ready to discuss Drupal site security audit further, let’s talk in comments or on social media.

Mar 20 2017
Mar 20

I can proudly say that we have been on top of our test coverage in Drupal Commerce. Back in June of 2016 we removed any trace of Simpletest based tests. Once using PhantomJS for JavaScript testing landed in core, we jumped ship. Test coverage is great for the individual project because we can ensure that we ship an (assumedly, mostly) bug-free product. But I believe we should do more than that. So I built my own project template.

What is a project template? Well, you can pass it to Composer and get a ready-made Drupal 8 project skeleton. You'd run something like:

composer create-project mglaman/commerce-project-template some-dir --stability dev --no-interaction

The end result is a built Drupal 8 site with Drupal Commerce. You will also have a configuration for Behat testing out of the box, with existing Drupal Commerce coverage provided. This means you can just tweak and add along the way. I have also added CI integration, providing an example of how to ship your Drupal Commerce project with continuous integration to make sure you deliver a functioning project.

Running Tests

The project comes with a phpunit.xml.dist which has been set up to allow you to run any PHPUnit tests provided by Drupal or contrib from the root directory. Here's an example of how to run the Commerce unit and kernel tests:

./bin/phpunit --testsuite unit --group commerce
./bin/phpunit --testsuite kernel --group commerce

This makes it simpler for you to write your own PHPUnit tests for client code. The PHPUnit file shipped with Drupal core assumes it'll stay in the root core directory, meaning it can get lost on any Drupal core update. Which is annoying. I use this setup to provide basic unit and kernel tests for API integrations on our Drupal Commerce projects.

The best part is Behat, of course!

  Scenario: Anonymous users can access checkout
    When anonymous checkout is enabled
      And I am on "/product/1"
      Then I should see "Commerce Guys Hoodie"
    When I press "Add to cart"
      Then I should see "Commerce Guys Hoodie - Cyan, Small added to your cart."
      And I click "your cart"
    Then I press "Checkout"

This allows us to make sure a user can visit the product and add it to cart and reach the checkout. It's obviously quite simple but is also an important check. You can see more examples here: https://github.com/mglaman/commerce-project-template/tree/master/tests/f...

Docker ready

In order to have a reproducible testing environment, the repository also contains my Docker setup. It is contained in a docker-compose.yml.dist so that it can be modified and changed. The config/docker directory contains the PHP, nginx, and MariaDB configurations. It ships with MailHog as an SMTP server so that you can debug emails easily. I used the MailHog SMTP server when working on the order receipts we provide in Drupal Commerce 2. And customer communication is a big deal with e-commerce.

Docker also makes it simpler to ship a way to test Search API backed by Solr.

A way to provide a demo

The project has a script to install my mglaman/commerce_demo project, which provides base products and other configuration to try out Drupal Commerce. This is the base content for the Behat tests. So, if you want to try out Drupal Commerce 2 or pitch it to a client, CxO, or a friend this project makes it pretty simple to spin up an example Drupal Commerce 2 site.

What's next?

Next steps are to add an example catalog backed by Search API into the demo module using the database storage. Once that's set I'll work to have it using Solr as storage and test that, along with custom Solr configuration examples. I'd also like to show some deployment step examples in circleci.yml .

Mar 19 2017
Mar 19

A quick live podcast featuring a reaction from Mike and Ryan about Dries' Drupal 9 Blog Post. Recorded on YouTube Live, and this audio version is reposted to our podcast channel for your convenience.

[embedded content]

DrupalEasy News

Follow us on Twitter

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Mar 19 2017
Mar 19

Almost all of the DrupalEasy Podcast hosts congregate to take a look back and a look forward at Drupal 8. We discuss some of our favorite things about Drupal 8 as well as what we're looking forward to the most in the coming year. Also, Anna provides us with a first-person look at DrupalCamp Northern Lights (Iceland), and Ted leads a discussion on Drupal 8.3.

Interview

Our favorite things about Drupal 8 (so far).

  • Mike - everything you can do with just core, plugins.
  • Ted - object-oriented codebase, experimental modules.
  • Ryan - configuration management, migrate in core.
  • Anna - module and theme libraries in core and base themes in core, view modes.
  • Andrew - Restful services in core, Composer all the things.

What are we looking forward to the most in the Drupal universe in 2017?

DrupalEasy News

Three Stories

Sponsors

Upcoming Events

Follow us on Twitter

Five Questions (answers only)

  1. Brewing beer.
  2. Windows Subsystem for Linux.
  3. Hiking the Appalachian trail (Jim Smith's blog).
  4. Giraffe.
  5. Doing three Drupal sites in three months, the first Orlando Drupal meetups.

Intro Music

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Mar 17 2017
Mar 17

Any user on Drupal.org who has accepted our Git usage policy may now create full projects with releases. This is a big change in policy for the Drupal project, representing an evolution of the contribution ecosystem over the past half decade.

What was the Project Application Process?

Ever since the days when Drupal's code was hosted in CVS there has been some form of project application process in the Drupal Community. To prevent duplicate, low-quality, insecure, or otherwise undesirable projects from flooding Drupal, users would submit sandbox projects to an application queue to be reviewed by a group of volunteers.

After resolving any issues raised in this review process, the user would be given the git vetted role, allowing them to promote their sandbox to a full project, claim a namespace, and create releases. Once a user had been vetted for their first project, they would remain vetted and be able to promote any future projects on their own, without submitting an additional application.

The Problem

Unfortunately, though the project application process was created with the best of intentions, in the long term it proved not to be sustainable. Drupal grew too fast for a group of volunteer reviewers to keep up with reviewing new projects, and at times there were applications waiting in queue for 6 months to 1 year, or even more. That is much too slow in the world of software development.

This put Drupal in a difficult situation. After years of subjecting new projects and contributors to a rigorous standard of peer review, Drupal has a well-deserved reputation for code quality and security. Unlike many open source projects, we largely avoided the problem of having many duplicate modules that exist to serve the same purpose. We unified our community’s effort, and kept up a culture of collaboration and peer review. At the same time, many would-be contributors were unable or unwilling to navigate the application process and so simply chose not to contribute.

The question became, how could we preserve the emphasis on quality while at the same time removing the barrier to contribution that the application process had become?

Constraints on a solution

Opening the contribution gates while retaining strong signals about code quality and security was a tricky problem. We established three constraints on a solution:

  1. We need to welcome new contributors, and eliminate the walls that prevent contribution.
  2. We need to continue to send strong signals about security coverage to users evaluating whether to use modules from Drupal.org.
  3. We need to continue our strong emphasis on quality and collaboration through changes to project discovery that will provide new signals about code quality, and by providing incentives and credit for peer review.

The Solution

In collaboration with the community, the security team, members of the board, and staff we outlined a solution in four phases:

Phase 1: Send strong signals about security advisory coverage.

  • We updated project pages to include messaging and a shield icon to indicate whether a project received security advisory coverage from the security team.
  • We now serve security advisory coverage information in the Updates status information provided by Drupal.org, and we're working on a patch to display that information directly on the updates page of users' Drupal sites.

Here are some examples of what these security signals look like on project pages:

If a project is not opted in to security advisory coverage, this message will appear at the top of the project page:

Warning at the top of Project pages

And this one will appear near the download table:

Warning above download table

If a project has opted in, this message will appear near the download table:

Project opt in notice

And covered releases will show the coverage icon (note how the stable 7.x release has coverage and the 8.x release candidate does not):

Release coverage icon

Phase 2: Set up an opt-in process for security advisory coverage

  • Previously any project with a stable release would receive security advisory coverage from the security team. As we opened the gates for anyone to promote full projects, the security team needed an opt in process so that they could enforce an extra level of vetting on projects that wish to receive advisory coverage.
  • We agreed to repurpose the project application queue to be a queue for vetting users for the ability to opt their projects in to receive security advisory coverage. Now that this process has been decoupled from creating full projects, the security team may revise it in the future, in collaboration with staff and the community.
  • Now a project maintainer must opt in their project to receive advisory coverage and make a stable release in order to receive security advisory coverage from the security team.

Once a maintainer has been vetted by the security advisory opt in process, they can edit their project and use this field set to opt-in:

Project opt-in field

Phase 3: Open the gate to allow users to create full projects with releases without project applications.

This is the milestone we've just reached!

Phase 4: Provide both automated code quality signals, as well as incentives for peer review of projects - and factor these into project discovery

  • We are working on this phase of the project in the issue queues, and we appreciate your feedback and ideas!

What is the new process?

So in the end - what is the new process if you want to make a contribution by hosting a project on Drupal.org?

  1. You must have a Drupal.org account, and you must accept the git terms of service.
  2. You can create a sandbox or a full project
  • Note: We still strongly recommend that project maintainers begin with sandbox projects, until they are sure they will be able to commit to supporting the project as a full project, and until the code is nearly ready for an initial release.
  • That said, you can promote a sandbox project to a full project at any time, to reserve your name space and begin making releases.

At this point, you will have a full project on Drupal.org, and will be able to make releases that anyone can use on their Drupal site. The project will not receive security advisory coverage, and a warning that the project is not covered will appear on the project page and in the updates information.

If you want to receive security advisory coverage for your project, you will need to take these additional steps:

  1. You must apply for vetted status in the security advisory coverage queue.
  2. Members of the security team or other volunteers will review your application - and may suggest changes to your project.
  3. Once feedback is resolved, you will be granted the vetted role and be able to opt in this project, and any future projects you create, to receive security advisory coverage.
    • Note: Only *stable* releases receive security advisory coverage, so even after opting your project in you will not receive the advisory coverage shield except on stable releases.

What comes next?

Now that the project application process is no more, the gates are open. We are already seeing an uptick in projects created on Drupal.org, and have seen some projects that had migrated to other places (like GitHub) migrate back to Drupal.org. We can expect to see contributions from some great developers who previously felt gate-kept out of the community. We will also see an uptick in contributions that need work, from new developers and others who are still learning Drupal best practices.

That is why our next focus will be on providing good code quality signals for projects on Drupal.org. We want to provide both automated signals of code quality, and new incentives for peer review from existing members of the community. We're outlining that plan in the issue queues, and we welcome your feedback and contributions.

We also still have work to do to communicate this well. This is a big change for the Drupal community and so we want to make people aware of this change in every channel that we can.

Finally, after such a significant change, we're going to need to monitor the contrib ecosystem closely. We're going to learn a lot about the project in the next several months, and it's likely there will be additional follow ups and other changes that we'll need to make.  

Special Thanks

There are many, many contributors on Drupal.org who have put in time and effort to help make the contribution process better for new contributors to Drupal - the deepest thanks to all of you for your insight and feedback. We'd also like to specifically thank those who participated in the Project Application Revamp, including:

Mar 17 2017
Mar 17

The MidCamp session team has been hard at work placing all of our selected sessions in the perfect place. Check out the Friday and Saturday session schedules!

View all Friday sessions View all Saturday sessions

Call for Volunteers

Want to give back to the Drupal Community without writing a line of code? Volunteer to help out at MidCamp 2017.  We’re looking for people to help with all kinds of tasks including: 

If you’re interested in volunteering or would like to find out more, please contact us.

Volunteer!

Thursday is Training Day

On Thursday, we have four great full-day training sessions planned. We have lined up a group of incredible trainers who are going to donate their time to lead full-day, in-depth training sessions. Each session is $40, in addition to the price of the camp.

View all training sessions

View all of the individual trainings:

Sprints

At MidCamp 2016, the Sprint room was always abuzz with activity. There was a flurry of work from those focused on the front end of Drupal, and a concentrated effort to get Drupal Commerce to its first release candidate.

If you want to sprint, stop by these rooms on any of the days. If you are interested in mentoring or leading sprints, please contact [email protected]. If you are coming on Thursday or Sunday, we ask that you get a free ticket so we can make sure to order enough food and coffee.

  • Thursday sprints will take place in Room 220, with room for 80 people
  • Sprinting during sessions Friday and Saturday will take place in Room 120AB, starting after the keynote
  • Sunday sprints will take place in Room 314A and 314B, with room for 60 people in each room

Thanks for reading this far!  We hope to see you at the camp!


Mar 17 2017
Mar 17

Annnnnnnd we are back for round 2! Last year's Government Summit was such a success that we're offering it again, and this year we have managed to pack even more content and conversations into the day for y'all! We are still catering to the same government audience, so if you find yourself working for the government in some capacity (federal, state, local, or contracting), then this Summit is for you!

We are working hard to put together an even more engaging one-day Government Summit than last year that will dig deeper into best practices and ideal ways around the government red tape that you so often find yourself faced with.

We have changed our line up to better suit your desires! We have sought out people from near and far, gathering up the greatest Drupal government minds and have worked tirelessly to factor in your feedback from last year. Here are some of the ways we are kicking up into the next gear this year:

  • We have added even more of the breakouts you loved so much last year, so you can discuss topics relevant to you in real time. Throughout the day there will be many opportunities to interact with your peers; one discussion we are really excited about will be led by Mass.gov's Bryan Hirsch.
  • We will have a dedicated space for case studies from our sponsors Accenture and New Target, so that you can see some awesome government solutions in action
  • We will open the day with a panel all about D8 in government - a topic we all know you're interested in hearing about and seeing
  • We plan to feature a special lunchtime speaker, Kendra Skeene from the State of Georgia, talking about Drupal PaaS!

Who should come?

The Government Summit is intended for anyone who uses Drupal in the context of government, whether at the local, state, or federal level. All skill levels and roles are welcome. You'll meet site builders, developers, themers, project managers, support specialists, and more.

Join the Government Summit

Although the Government Summit is part of the DrupalCon program, it (along with the other Summits) is a separate event and requires a separate registration. On your registration form, there is a section to select which topics pique your interest most, as well as an "Other" field where you can -- and should! -- suggest any other topics you're interested in. We look forward to seeing you there!

Date: Monday, April 24
Time: 9:00am-5:00pm
Cost: $199 advanced | $250 on-site

Register Now


Mar 17 2017
jam
Mar 17
My trusty microphone, my camera, and I recorded a few great conversations in Mumbai that have never been released until now. Today's episode with Abhishek Anand covers full-time contribution to Drupal 8 while building the Acquia Lightning distribution, passion for community and paying it forward in open source, and Drupal in India.

Pages

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web