May 26 2017

As you may remember from fairy tales, knowing the secret words helps you move mountains and open treasure caves. The words "Open, Sesame" from "Ali Baba and the Forty Thieves" work somewhat like modern website passwords. However, making passwords work perfectly is a complex art, and it is one of the touchstones of Drupal website security. Thankfully, Drupal equips you with plenty of power for this, thanks both to its out-of-the-box features and lots of useful modules.

Passwords: what should they be like?

At first glance, it might seem that setting complicated requirements for users’ passwords is necessary for security. However, this may sometimes work against you.

For example, forcing users to have a strict composition of letters and numbers in passwords may lead to passwords that are hard to remember and have to be saved somewhere. And asking people to change passwords too often may eventually annoy them so they end up creating weaker passwords.

In addition, you should never forget that, first of all, you need to convince people to register, which many are reluctant to do.

So what you need is a good mix of strictness and usability. The right proportion largely depends on what kind of “treasures” are kept in a particular “cave” — for example, a website that involves payment processing deserves stricter entry control. In addition, there are also other means to enhance password security that do not require anything from users.

Let’s see how Drupal modules let you take into account all these and many more twists and turns in password policy, so you can choose what’s right for your website. The following is a blend of Drupal 7 and Drupal 8 modules, some of which are available for both versions, and some of which are in active development.

Some useful Drupal modules for working with passwords

Drupal password policy

This module lets you impose a set of requirements on passwords created by users. They include: length, digits, case, punctuation and more. You can set what kind of characters, and in what amount, have to be used in a password. The module also offers a password expiration feature. In the Drupal 7 version, there is a basic blacklist functionality, where you can add the most common words from the dictionary to prevent their use and avoid weak passwords. In Drupal 8, this feature is coming soon.

Security Questions

If your website requires it, add an additional lock to the doors by implementing security questions during the login and password reset procedures. This module will help you do it in a flexible way, using a number of configurable options.

Two-factor Authentication (TFA)

What if you need more than one lock? Here is a double lock. The TFA module adds an additional step to the authentication procedure. This may include one-time passwords, codes sent by SMS, or pre-generated codes, as well as integration with third-party services (Authy, Duo etc.). The module encrypts sensitive data with the use of the PHP mcrypt library.

Password Strength Disabler

Do you not need any special locks? If you find it justifiably unnecessary for your website (and if you have thought twice), you can disable the password strength check and let your users feel more at ease when creating passwords. In this case, usability takes the lead.

OneAll Social Login

People appreciate the convenience of using an all-in-one login. Here is a module that lets users sign up and sign in to your site using their accounts on social networks. The list includes Facebook, LinkedIn, Twitter, Amazon, Disqus, Pinterest, Instagram, Foursquare and more — 30+ in total.

Email Verify

If a user makes a typo while providing an email address in the signup form, it can cause problems, because they are not going to receive confirmation or other emails. Luckily, there is a module that checks whether the address really exists, first at the domain level, and then at the actual username level.

Secure Login

Be protected by the power of HTTPS. If your website is available via both HTTP and HTTPS, the Secure Login module makes sure your user login forms (or other pages) are transmitted via HTTPS, so passwords are hidden from prying eyes.

Flood control

You can limit the number of login attempts using a convenient admin interface provided by the Flood control module.
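
For reference, the thresholds this UI manages are ordinary Drupal 7 variables (the names below come from core's user.module), so they can also be set with Drush. A hedged sketch with example values:

# Allow at most 20 failed logins per IP, and 5 per user account, per window.
drush vset user_failed_login_ip_limit 20
drush vset user_failed_login_user_limit 5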

Fail2ban Firewall Integration

Enhance the login attempt limitation by blocking out the sources of suspicious requests. This module, which provides an automated firewall tool, is ready to help you.

Username Enumeration Prevention

When hackers know the usernames of a website’s users, they can attempt brute-force attacks. The Username Enumeration Prevention module makes it more difficult for them to discover these usernames.

These are just some of the great modules dealing with passwords in Drupal. Good luck in using them in the best way for your website!

And we can enhance your luck, or transform it into a 100% positive outcome — all you need to do is contact our cool drupalers, who are ready to help with any website optimization issue.

May 26 2017

Recently I had a ticket come through that required me to add some JavaScript on several pages on a Drupal 7 site — with the possibility of needing to remove that JavaScript from certain pages depending on user reactions. Since I knew there was a good chance that things were going to change, I wanted to build something that could be changed quickly and easily.

The Problem

I needed to add some JavaScript to some landing pages, and also some node types. It is also possible that more pages/node types will need to be added or even removed based on user reactions.

The Solution

We are using the Context module on this particular site, so I opted to create a context reaction. That way, I can change the context condition for the reaction to meet the client’s needs, without needing to push new code. There is a fair amount of code for the reaction, but I think it is harder to make sense of it all in parts, so I’m just going to dump out the .module file and plugin itself and give a description of each after the fact.

my_reaction/my_reaction.module
<?php
 
/**
 * @file
 * Code for my_reaction.
 */
 
/**
 * Implements hook_ctools_plugin_api().
 */
function my_reaction_ctools_plugin_api($module = NULL, $api = NULL) {
  if ($module == "context" && $api == "plugins") {
    return array("version" => "3");
  }
}
 
/**
 * Implements hook_ctools_plugin_directory().
 */
function my_reaction_ctools_plugin_directory($module, $plugin) {
  return 'plugins/' . $plugin;
}
 
/**
 * Implements hook_context_registry().
 */
function my_reaction_context_registry() {
  $registry = array();
 
  $registry['reactions'] = array(
    'my_reaction' => array(
      'title' => t('My Reaction'),
      'description' => t('Does some stuff.'),
      'plugin' => 'my_reaction_plugin',
    ),
  );
 
  return $registry;
}
 
/**
 * Implements hook_context_plugins().
 */
function my_reaction_context_plugins() {
  $plugins = array();
 
  $plugins['my_reaction_plugin'] = array(
    'handler' => array(
      'path' => drupal_get_path('module', 'my_reaction') . '/plugins/context',
      'file' => 'my_reaction_plugin.inc',
      'class' => 'my_reaction_plugin',
      'parent' => 'context_reaction',
    ),
  );
 
  return $plugins;
}
 
/**
 * Implements hook_context_page_reaction().
 */
function my_reaction_context_page_reaction() {
  if ($plugin = context_get_plugin('reaction', 'my_reaction')) {
    $plugin->execute();
  }
}

Alright, the first two things that happen are hook_ctools_plugin_api() and hook_ctools_plugin_directory(). Those hooks just let Drupal know which API version to use as well as where the plugins are stored. Next we have hook_context_registry(), which defines the reaction and also associates it with the plugin called my_reaction_plugin. This is followed by a call to hook_context_plugins(), which defines the plugin called my_reaction_plugin.

The last thing that happens in the module file is the definition of hook_context_page_reaction(). This hook loads the custom reaction plugin and calls its execute() method. More on that later. Now we need to write the code for the plugin itself. The class definition of my_reaction_plugin extends context_reaction and only has a few methods. Again, I will discuss the individual parts after the code itself.

my_reaction/plugins/context/my_reaction_plugin.inc
<?php
/**
 * @file
 * Extends class context_reaction for my_reaction.
 */
 
/**
 * Exposes My Reaction as a reaction in Context.
 */
class my_reaction_plugin extends context_reaction {
 
  /**
   * Provides the options form.
   */
  function options_form($context) {
    // NOTE: The context module will not save a reaction to a context unless it
    // provides some sort of option. I'm opting to use a value type here as it
    // will not render anything on screen.
 
    $form = array();
 
    $form['enabled'] = array(
      '#type' => 'value',
      '#value' => 'true'
    );
 
    return $form;
  }
 
  /**
   * Executes the logic of the reaction if needed.
   */
  function execute() {
    $contexts = context_active_contexts();
 
    foreach ($contexts as $value) {
      if (!empty($value->reactions[$this->plugin])) {
        $this->doReaction();
        return;
      }
    }
  }
 
  /**
   * This function does the actual work of the reaction.
   */
  function doReaction() {
     drupal_add_js('alert("I reacted!");', 'inline');
  }
}

The first thing to note for the plugin is the options_form() method. If you want to make your reaction configurable, you would add your form fields here. You will notice that my example is empty with the exception of a comment and a value form field. While developing this plugin, I noticed that reactions that do not register options are not properly saved to the context. When I can, I would like to investigate this further to contribute a patch to fix that. For now, I have added a value form field as a workaround so that the reaction can be saved to the context.

Next, we have the execute() method. All this method does is check the currently active contexts and call the doReaction() method if the plugin is found. Lastly, we have doReaction(). Normally this method would not exist, since the main logic usually lives in the execute() method; I have pulled it into its own method for readability. All it does is add some JavaScript to the current page.

The Result

Once installed and enabled, you can configure the custom reaction to appear on any page, node type, or even some other condition that has been implemented within the Context module. It wasn’t too hard to build, and it offers flexibility, as I can change when this reaction fires with just a few clicks of the mouse. Better yet, this concept is not only for JavaScript — it has the potential to be used for all sorts of things!

Happy Drupaling!

May 26 2017

Introduction

Deployment and configuration management are pretty common but very important parts of every project. These processes are quite painful in Drupal 7, which stores all configuration in the database together with content. To manage configuration, you have probably worked with a combination of Features, Strongarm and other related modules, which provide a way to package config into special "feature" Drupal modules.

Drupal 8 provides a completely different way of managing configuration. The idea is that almost all configuration can be stored in files rather than in the database. This allows developers to move settings between development and live sites easily.

In this article we’ll dive into the new configuration management system and learn how to move your configuration from a development environment to a live one. As a result, we’ll develop a basic workflow which will help to keep your configuration synchronized between environments.

Before we start, we should determine which parts of your website can be exported with the help of the new system.

Configuration management in Drupal 8

Manage configuration. Not content

Actually, everything besides content is configuration. Following this idea, it is wrong to think that the new configuration management system can help to move content from a development website to a production one. Using the new system you can manage:

  • module settings and states
  • content types and settings
  • block types and settings
  • permissions
  • views
  • theme settings
  • and so on and so forth

The following things are considered content and can’t be managed this way:

  • nodes
  • users
  • custom blocks and their content
  • other entity content

To move content from dev to prod you probably have to think about a migration system but this is a completely different story.

Let’s examine the configuration management system in action.

Basic settings and preparations

Configuration management functionality is provided by the “Configuration Manager” core module. Make sure it is enabled. After that a new admin page will be available: /admin/config/development/configuration.

This page allows you:

  • to import and export configurations (a full archive or a single item);
  • to synchronize configuration.

Out of the box, the active configuration of a Drupal 8 website is stored in the database in “config” tables. This is done for performance and security reasons. The new system allows us to export the complete website configuration and store it in YAML files.

Before starting to work with configuration, let’s configure a folder where we’ll store our configuration YAML files. This folder is configured in the settings.php file:

$config_directories['sync'] = 'sites/default/files/config_HASH/sync';

HASH here is a long hash generated at the stage of installation; it helps to protect configuration from web access.

Now we can export the active website configuration manually or using Drush: drush config-export (or drush cex). We’ll stick to the Drupal way and use Drush in this article.

Let’s stop at this point and think a bit. Almost all website config is now available as YAML files. What does that mean? YAY! It means that we can store our configuration YAML files under a version control system!

Basic workflow with Git and Drush

We are almost sure that your sites/default/files folder is ignored by your version control system. No? Do it ASAP :)

It is recommended to move your config folder to the same level as (a sibling of) your docroot directory, which fully prevents web access to it. That is not always possible, so alternatively you can move it to the same level as the public files folder (sites/default/files).

Just move the folder and change the settings.php file:

$config_directories['sync'] = 'sites/default/config_HASH/sync';

After running the Drush export command, YAML files will be created and you can commit them. Let’s get back to the admin UI.

Drupal 8 admin UI: no configuration changes to import

At this stage there are no configuration changes to import, because we have just exported the current active configuration. Let’s change something on the website (e.g. the site name) and see what happens.

Drupal 8 admin UI: changed configurations

We can see that one configuration item (file) was changed. Now we can see the changes:

Drupal 8 website: changed configurations

What's in the picture?

Staged configuration is the configuration that is saved in the YAML files.

Active configuration is the current website configuration, stored in the DB.

At this point it is very important to understand differences between export and import.

Export takes all of the configuration defined in the DB and saves it as YAML files.

Import does the opposite: it brings the configuration from the YAML files into the DB.

In our example, if you run an import using the "Import all" button or the drush config-import (drush cim) command, it will wipe out the site name change, reverting it to the state stored in the YAML files (staged config).

If we want to update our staged config instead, we have to run an export. After that, the appropriate YAML file will be changed and we can commit the change.

To summarize, a basic workflow to move changes from your dev environment to the live one is the following (a command-line sketch follows the lists):

On a development website:

  1. Install the Configuration Manager module, configure the sync folder, export the active configuration and commit it.
  2. Change configurations.
  3. Export changes using command: drush cex
  4. Commit and push changes: git commit and git push

On a live website:

  1. Pull changed configs: git pull
  2. Import changes into live website active configuration: drush cim
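
Condensed into commands, a minimal sketch of those two stages could look like this (the config path matches the settings.php example above; the commit message and the -y flags are just examples):

# On the development website:
drush cex -y
git add sites/default/config_HASH/sync
git commit -m "Export configuration changes"
git push

# On the live website:
git pull
drush cim -y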

As a result, our development and live websites get synchronized in a few simple steps.

Development and live websites get synchronized

Conclusion

Drupal 8 strives to make the process of exporting and importing site configuration easier with its new configuration management system. In this article we have examined a basic workflow which allows you to keep your configuration synchronized between development and live environments. I hope this workflow will help you to improve your deployment process.

Related (helpful) modules / distributions

Features: allows you to bundle config to be re-used on another site. A must-have module for Drupal 8 distributions.

Configuration installer: an installation profile which allows you to install a site from existing config.

Configuration Tools: automatically commits config changes to git.

Configuration Read-only mode: allows you to lock any configuration changes made via the admin UI.

Configuration development: helps with developing configurations.

Configuration Synchronizer: provides methods for safely importing site configuration from updated modules, themes, or distributions.

May 25 2017

Here are three tools that work straight out of the box to quickly and dramatically improve the SEO of the content on your Drupal website.

Linkit Drupal Module

Linkit is a tool designed to simplify adding links to a page

The Linkit module is only useful if you use a WYSIWYG editor.

It adds an autocomplete field in your WYSIWYG content editor for linking to other pages on your site, as well as external links. It allows you to easily add internal links that are not only well formed but that stay up to date and automatically use the correct path.

It also adds an advanced tab which provides the option to add specific HTML classes and IDs, as well as making the link open in a new tab/window. It works straight out of the box; no setup required.

How does this improve SEO?

Properly formed and placed links are one of the many elements that can affect your SEO performance. The Linkit module is an excellent tool to help you ensure you have good internal links set up.

Yoast Drupal Module

Yoast is a tool to optimise your content for your targeted keywords

The Yoast module helps to optimize content around key phrases.

It adds a section to the bottom of all node edit forms where you can fill in your target keyword and get analysis on how well your content uses the keyword(s) in terms of SEO.

It also evaluates other important factors that can improve the SEO of the current node, and it works straight out of the box; no setup required.

How does this improve SEO?

Good content is critical in improving SEO; this module highlights the various aspects of what makes good content based on SEO best practices.

Scheduler Drupal Module

Scheduler is a tool for scheduling when content is published

The Scheduler module enables you to schedule nodes to be published and unpublished at a specified date and time.

This functionality allows you to plan and execute your content strategy, which takes some of the work out of implementing your SEO campaign. One of the fundamental things Google assesses when ranking your site is not only how good your content is but also how often you add new content.

Just like the other two, this will work straight out of the box.

How does this improve SEO?

Google loves fresh content. If a website has fresh content, it ranks better. This also feeds into your social media marketing strategy, as the amount and type of traffic to your newly published content will vary depending on the time and day of the week.

The Next Step

If you'd like to discuss how to improve the SEO of your Drupal website get in touch.

May 25 2017

Last time, we gathered together DrupalCon Baltimore sessions about Coding and Development. Before that, we explored the area of Project Management and Case Studies. And that was not our last stop. This time, we looked at sessions that were presented in the area of Drupal Showcase.

Ain’t No Body: Not Your Mama’s Headless Drupal by Paul Day from Quotient, Inc.

This session explores disembodied Drupal, also known as bodiless Drupal: an application that uses Drupal’s powerful framework to do the things it does well while storing the actual domain data in a remote repository. Moreover, it explores applications that are both disembodied and headless, in which Drupal is the framework used to maintain data stored in a remote repository, while kiosk applications and other non-Drupal front ends leverage the stored data, whether via Drupal paths or otherwise.

[embedded content]

Continuous Integration is For Teams by Rob Bayliss from Last Call Media and Drew Gorton from Pantheon

This session looked at Continuous Integration through a real-world lens. It went past the tooling hype and looked at the benefits of CI for developers, project managers, and clients. After all, a successful Continuous Integration practice makes a team work faster, safer, and more predictably.

[embedded content]

Extending your application to the edge: best practices for using a CDN from Fastly

In this session, attendees discussed caching strategies when using CDNs. More specifically, they covered caching long tail content, caching fast-changing content, invalidation, stale and error conditions, and best ways to interact with a CDN when it comes to cached content. The session was supported by real-world examples to showcase new ways of using a CDN as a platform that extends applications to the network edge.

[embedded content]

How Drupal.org Fights Spam Using Distil Networks by Dominick Fuccillo from Distil Networks

In this session, the author explored the unique challenges Drupal.org faced with spammers creating bogus accounts, the resources needed to manually remove spam content, and how fake accounts and spam were polluting the community engagement metrics.

[embedded content]

Scaling and Sharing: Building Custom Drupal Distributions for Federated Organizations by Alexander Schedrov from FFW and Craig Paulnock from YMCA of the Greater Twin Cities

In this session, the authors examined how they were leveraging open source Drupal 8 with one of the largest federated non-profit organizations in the world, the YMCA. They focused specifically on a community-driven initiative, Open Y, which is a Drupal distribution custom built for YMCAs everywhere.

[embedded content]

Score.org: User Experience for 320+ sites on one flexible platform! by Phase2

This session dug into the UX and administrative challenges that are common to large national organizations with several local chapters or offices. Moreover, it showed how these challenges can be solved by implementing a Drupal platform coupled with a robust user experience strategy.

[embedded content]

Security for Emerging Threats by Tony Perez from SucuriSecurity

In this session, the author looked back at various incidents in 2016, talked about what they meant to website owners, and discussed security technologies designed to help address tomorrow’s threats.

[embedded content]

So you want to be a "Digital Platform" Rock Star?! by David Valade from Comcast, Lisa Bernhard from PWW, Kiel Frost of Flight Center Travel Group, Brendan Janishefski of Visit Baltimore, and Subramanian "Subbu" Hariharan of Princess Cruises

In this session, attendees learned first-hand what it takes to be a Digital Platform Rock Star. The speakers talked about what inspired them, how they started, what resources they needed, what unexpected events happened, and how they formed a ‘band’ from across their organization to ensure success. It was not a technical session.

[embedded content]

Strategy and Redesign for the U.S. Commission on Security and Cooperation in Europe by Sarat Tippaluru and Chris Jurchak from the U.S. Commission on Security and Cooperation in Europe

In this session, members of the U.S. Commission on Security and Cooperation in Europe explained why they chose Drupal. They also described the whole project and walked through the back-end development and front-end design.

[embedded content]

The Cure for Translation Headaches: Drupal Module Gives HID Global Pain Relief from Quality & Integration Problems by Peter Carrero from Lingotek

The session showed how the integrated translation solution provided by the Lingotek (Inside Drupal Module) quickly eliminated the company’s translation problems. It also showed how integrating translation inside Drupal had a positive business impact on the speed, accuracy, and quality of HID Global’s translations and helped free up time for their content managers and developers.

[embedded content]

Why Symfony, Magento and eZ Systems launched their cloud services in 2016, and why you should be using a PaaS too. by Kieron Sambrook-Smith from Platform.sh, Fabien Potencier from SensioLabs, Peter Sheldon from Magento Commerce and Roland Benedetti from eZ Systems

In this session, people from Magento, eZ Systems and SensioLabs talked about the Platform.sh PaaS, which enabled these new offerings. Moreover, they talked about why all the same benefits apply to a digital agency, an organisation, or a single developer.

[embedded content]

Note: Some of the sessions were already covered in Case Studies.

May 25 2017

I have just finished editing the session videos from the very first DrupalCamp Nordics.

DrupalCamp Nordics 2017 was held in Helsinki, on 11th to 12th of May 2017. The event was a great success, with over 120 participants from more than 10 different countries!

The topics of the sessions ranged from high-level, technology-related ones like Blockchain and GDPR to more practical, developer-oriented matters like using the Migrate API and an introduction to Drupal 8 caching.

Below is the playlist of all of the videos, followed by links to the individual session videos. Be sure to also check out DrupalCamp Nordics's YouTube channel.

[embedded content]

Blockchaining the backend by Keir Finlow-Bates (Chainfrog) - DrupalCamp Nordics 2017

Contributing to Drupal - no experience needed! by João Ventura (Wunder) - DrupalCamp Nordics 2017

A look at caching in Drupal 8, with the help of tacos by Märt Matsoo (Chromatichq) - DrupalCamp Nordics 2017

Fun and profit from GDPR by Mikko Hämäläinen (Druid) - DrupalCamp Nordics 2017

I have Drupal, why do I need a CDN? by Michael Gooding (Akamai) - DrupalCamp Nordics 2017

Migrate API for flexible data importing by Jari Nousiainen (Siili) - DrupalCamp Nordics 2017

New EU privacy legislation (GDPR) and Drupal by Kalle Varisvirta (Exove) - DrupalCamp Nordics 2017

You need to grow to stay alive! by Janne Kalliola (Exove) - DrupalCamp Nordics 2017

Migrate Workshop by Lauris Igaunis (Wunder) - DrupalCamp Nordics 2017

Rocket Chat Comes to Drupal by Floris van Geel (040lab) - DrupalCamp Nordics 2017

Scalable Drupal in AWS by Mika Tuhkanen (Siili) - DrupalCamp Nordics 2017

Ecommerce solution with Drupal 8 and Shopify by João Ventura (Wunder) - DrupalCamp Nordics 2017

Incorporating User Experience Into Your Projects by Karl Kaufmann - DrupalCamp Nordics 2017

May 24 2017

by David Snopek on May 24, 2017 - 3:06pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Site Verify module to fix a Cross Site Scripting (XSS) vulnerability.

The Site Verify module enables privileged users to verify a site with services like Google Webmaster Tools using meta tags or file uploads.

The module doesn't sufficiently sanitize input or restrict uploads.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Site Verify module, we recommend you update immediately.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

May 24 2017

As a part of Lullabot’s security team, we’ve been keeping track of how the Internet of Things plays a role in our company security. Since we’re fully distributed, each employee works day-to-day over their home internet connection. This subreddit reminds us that most “smart” devices are actually quite dumb as far as security goes. With malware like Mirai actively focusing on home IoT devices including cameras, we know that anything we plug in will be under constant assault. However, there can be significant utility in connecting physical devices to your local network. So, my question: is it possible to connect an “IoT” device to my home network securely, even when it has known security issues?

An opportunity presented itself when we needed to buy a new baby monitor that supported multiple cameras. The Motorola MBP853CONNECT was on sale, and included both Wifi and a “regular” proprietary viewer. Let’s see how far we can get.

The Research

Before starting, I wanted to know if anyone else had done any testing with this model of camera. After searching for “motorola hubble security” (Hubble is the name of the mobile app), I came across Push To Hack: Reverse engineering an IP camera. This article goes into great detail about the many flaws they found in a different Motorola camera aimed at outdoor use. Given that both cameras are made by Binatone, and connect to the same remote services, it seemed likely that the MBP853 was subject to similar vulnerabilities. The real question was if Motorola updated all of their cameras to fix the reported bugs, or if they just updated a single line of cameras.

These articles were also great resources for figuring out what the cameras were capable of, and I wouldn’t have gotten as far in the time I had without them:

Goals

I wanted to answer these three questions about the cameras:

  1. Can the cameras be used in a purely “local” mode, without any cloud or internet connectivity at all?
  2. If not, can I allow just enough internet access to the camera so it allows local access, but blocks access to the cloud services?
  3. If I do need to use the Hubble app and cloud service, is it trustworthy enough to be sending images and sounds from my child’s bedroom?

The Infrastructure

I recently redid my home network, upgrading to an APU2 running OPNSense for routing, combined with a Unifi UAP-AC-PRO for wireless access. Both software stacks support VLANs—a way to segregate and control traffic between devices on the same ‘physical’ network. For WiFi, this means creating a separate SSID for the cameras, and assigning it a VLAN ID in the UniFi controller. Then, in OPNSense, I created a new interface with the same VLAN ID. On that interface, I enabled DHCP, and then set up basic firewall rules to block all traffic. That way, I could try setting up the camera while using Wireshark on my laptop to sniff the traffic, without worrying that I was exposing my real network to anything nefarious.

Packet Sniffing

One of the benefits of running a “real” operating system on your router is that all of our favorite network debugging tools are available, including tcpdump. Since Wireshark will be running on our local workstation, and not our router, we need to capture the network traffic to a separate file. Once I knew the network interface name using ifconfig, I then used SSH along with -w - to reroute the packet dump to my workstation. If you have enough disk space on the router, you could also dump locally and then transfer the file after.

$ ssh <user>@<router> tcpdump -w - -i igb0_vlan3000 > packet-dump.pcap

After setting this up, I realized that this wouldn't show traffic of the initial setup. That’s because, in setup mode, the WiFi camera broadcasts an open WiFi network. You then have to use the Android or iOS mobile app to configure the camera so it has the credentials to your real network. So, for the first packet dump, I joined my laptop to the setup network along with my phone. Since the network was completely open, I could see all traffic on the network, including the API calls made by the mobile app to the camera.

Showing WiFi frames in packet dump

Verifying the setup vulnerability

Let's make sure this smart camera is using HTTPS and keeps my WiFi password secure.

I wanted to see if the same setup vulnerability documented by Context disclosing my WiFi passwords applied to this camera model. While I doubt anyone in my residential area is capturing traffic, this is a significant concern in high-density locations like apartment buildings. Also, since the cameras use the 2.4GHz and not the 5GHz band, their signal can reach pretty far, especially if all you’re trying to do is read traffic and not have a successful communication. In the OPNSense firewall, I blocked all traffic on the “camera” VLAN. Then, I made sure I had a unique, but temporary password on the WiFi network. That way, if the password was broadcast, at least I wasn’t broadcasting the password for a real network and forcing myself to reset it.

Once I started dumping traffic, I ran through the setup wizard with my phone. The wizard failed as it tests internet connectivity, but I could at least capture the initial setup traffic.

In Wireshark, I filtered to https traffic:

Filtering to HTTPS traffic

Oh dear. The only traffic captured is from my phone trying to reach 66.111.4.148. According to dig -x 66.111.4.148, that IP resolves to www.fastmail.com - in other words, my email app checking for messages. I was expecting to see HTTPS traffic to the camera, given that the WiFi network was completely open. Let’s look for raw HTTP traffic.

Filtering to HTTP traffic

This looks promising. I can see the HTTP commands sent to the camera fetching its version and other information. Wireshark’s “Follow HTTP stream” feature is very useful here, helping to reconstruct conversations that are spread over multiple packets and request / response pairs. For example, if I follow the “get version” conversation at number 3399:

GET /?action=command&command=get_version HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

get_version: 01.19.30

Let’s follow the setup_wireless command:

GET /?action=command&command=setup_wireless_save&setup=1002000071600000000606blueboxthisismypasswordcamera000000 HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

setup_wireless_save: 0

That doesn't look good. We can see in the GET:

  1. The SSID of the previous WiFi network my phone was connected to (“bluebox”).
  2. The password for the “camera” network (thisismypassword).
  3. The SSID of that network.

Presumably, this is patched in the latest firmware update. Of course, there’s no way to get the firmware without first configuring the camera. So, I opened up the Camera VLAN to the internet (but not the rest of my local network), and updated.

That process exposed another poor design decision in the Hubble app. When checking for firmware updates, the app fetches the version number from the camera. Then, it compares that to a version fetched from ota.hubble.in… over plain HTTP.

HTTP OTA version check

In other words, the firmware update itself is subject to a basic MITM attack, where an attacker could block further updates from being applied. At the least, this process should be over HTTPS, ideally with certificate pinning as well. Amusingly, the OTA server is configured for HTTPS, but the certificate expired the day I was writing this section.

Invalid SSL certificate

After the update had finished, I reset the camera to factory defaults and checked again. This time, the setup_wireless_save GET was at the least not in cleartext. However, I don’t have any trust that it’s not easily decryptable, so I’m not posting it here.

Evaluating Day-to-Day Security

Assuming that the WiFi password was at least secure from casual attackers, I proceeded to add firewall rules to allow traffic from the camera to the internet, so I could complete the setup process. This was a tedious process. tcpdump along with the OPNSense list of “blocked traffic” was very helpful here. In the end, I had to allow:

  • DNS
  • NTP for time sync
  • HTTPS
  • HTTP
  • UDP traffic

I watched the IPs and hostnames used by the camera, which were all EC2 hosted servers. The “aliases” feature in OPNSense allowed me to configure the rules by hostname, instead of dealing with constantly changing IPs. Of course, given the above security issues, I wonder how secure their DNS registrations are.

Needing to allow HTTP was a red flag to me. So, after the setup finished, I disabled all rules except DNS and NTP. Then, I added a rule to let my normal home LAN access the CAMERA VLAN. I could then access the camera with an RTSP viewer at the URL:

rtsp://user:pass@<camera IP>:6667/blinkhd/

Yes, the credentials actually are user and pass.
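
As a quick sanity check, any RTSP-capable player can open that stream. For example, with ffplay from FFmpeg (the camera IP placeholder is whatever address the camera received on the camera VLAN):

ffplay rtsp://user:pass@<camera IP>:6667/blinkhd/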

And tada! It looked like I had a camera I could use with my phone or laptop, or better yet at the same time as my wife. Neat stuff!

It All Falls Apart

After a fresh boot, everything seemed fine with the video streams. However, over a day or two, the streams would become more and more delayed, or would drop, and, eventually, I’d need to restart the camera. Wondering if this had something to do with my firewall rules, I re-enabled the HTTP, HTTPS, and UDP rules, and started watching the traffic.

Then, my phone started to get notification spammed.

At this point, I’d been using the cameras for about two weeks. As soon as I re-enabled access to Hubble, my phone got notifications about movement detected by the camera. I opened the first one… and there was a picture of my daughter, up in her room, in her jammies.

It was in the middle of the day, and she wasn’t home.

What I discovered is that the camera will save a still every time it detects movement, and buffer them locally until they can be sent. And, looking in Wireshark, I saw that the snapshots were being uploaded with an HTTP POST to snap.json without any encryption at all. Extracting the conversation, and then decoding the POST data (which was form data, not JSON!), I ended up with a picture.

I now had proof the camera was sending video data over the public internet without any security whatsoever. I blocked all internet access, including DNS, hoping that would still let local access work. It did!

Then, my wife and I started hearing random beeps in the middle of the night. Eventually, I tracked it to the cameras. They would beep every 15 minutes or so, as long as they didn’t have a working internet connection. This killed the cameras for home use, as they’d wake the whole family. Worse yet, even if we decided to allow internet access, if it was down in the middle of the night (our cable provider usually does maintenance at 3AM), odds are high we’d all be woken up. I emailed Motorola support, and they said there was no way to disable the beeping, other than to completely reset the cameras and not use the WiFi feature at all.

We’re now happily using the cameras as “dumb” devices.

Security Recommendations and Next Steps

Here are some ideas I had about how Motorola could secure future cameras:

  1. The initial setup problem could have been solved by using WPA2 on the camera. I’ve seen routers from ISPs work this way; the default credentials are unique per device, and printed on the bottom of the device. That would significantly mitigate the risk of a completely open setup process. Other devices include a Bluetooth radio for this purpose.
  2. Use encryption and authentication for all APIs. Of course, there are difficulties from this such as certificate management, hostname validation, and so on. However, this might be a good case where the app could validate based on a set of hardcoded properties, or accept all certificates signed by a custom CA root.
  3. Mobile apps should validate the authenticity of the camera to prevent MITM attacks. This is a solved problem that Binatone simply hasn’t implemented.
  4. Follow HTTP specifications! All “write” commands for the camera API use HTTP GETs instead of POSTs. That means that proxies or other systems may inadvertently log sensitive data. And, since there’s no authentication, it opens up the API to CSRF vulnerabilities.

In terms of recommendations to the Lullabot team, we currently recommend that any “IoT” devices be kept on completely separate networks from devices used for work. That’s usually as simple as creating a “guest” WiFi network. After this exercise, I think we’ll also recommend to treat any such devices as hostile, unless they have been proven otherwise. Remember, the “S” in “IoT” stands for “secure”.

Personally, I want to investigate hacking the camera firmware to remove the beeps entirely. I was able to capture the firmware from my phone (the app stores them in Android’s main storage), and since there’s no authentication, I’m guessing I could replace the beeps with silence, assuming they are WAV or MP3 files.

In the future, I’m hoping to find an IoT vendor with a security record that matches Apple’s, who is clearly the leader in mobile security. Until then, I’ll be sticking with dumb devices in my home.

May 24 2017

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

DrupalCon Baltimore logo Apr 24-28

At the end of April we joined the community at DrupalCon Baltimore. We met with many of you there, gave our update at the public board meeting, and hosted a panel detailing the last 6 months worth of changes on Drupal.org. If you weren't able to join us for this con, we hope to see you in Vienna!

Drupal.org updates

DrupalCon Vienna Full Site Launched!

DrupalCon Vienna logo Sep 26-29 2017

Speaking of Vienna, in April we launched the full site for DrupalCon Vienna which will take place from September 26-29th, 2017. If you're going to join us in Europe you can book your hotel now, or submit a session. Registration for the event will be opening soon!

DrupalCon Nashville Announced with new DrupalCon Brand

DrupalCon Nashville logo Apr 9-13 2018

Each year at DrupalCon, the location of the next conference is held as a closely guarded secret; the topic of speculation, friendly bets, and web crawlers looking for 403 pages. Per tradition, at the closing session we unveiled the next location for DrupalCon North America - Nashville, TN, taking place April 9-13, 2018. But this year there was an extra surprise.

We've unveiled the new brand for DrupalCon, which you will begin to see as the new consistent identity for the event from city to city and year to year. You'll still see the unique character of the city highlighted for each regional event, but with an overarching brand that creates a consistent voice for the event.

Starring Projects

Users on Drupal.org may now star their favorite projects - making it easier to find favorite modules and themes for future projects, and giving maintainers a new dimension of feedback to judge their project's popularity. Users can find a list of the projects they've starred on their user profile. Over time we'll begin to factor the number of stars into a project's ranking in search results.

At the same time that we made this change, we've also added a quick configuration for managing notification settings on a per-project basis. Users can opt to be notified of all issues for a project, only issues they've followed, or no issues. While these notification options have existed for some time, this new UI makes it easier than ever to control issue notifications in your inbox.

Project Browsing Improvements

One of the important functions of Drupal.org is to help Drupal site builders find the distributions, modules, and themes, that are the best fit for their needs. In April, we spent some time improving project browsing and discovery.

Search is now weighted by project usage so the most widely used modules for a given search phrase will be more likely to be the top result.

We've also added a filter to the project browsing pages to allow you to filter results by the presence of a supported, stable release. This should make it easier for site builders to sort out mature modules from those still in initial development.

Better visual separation of Documentation Guide description and contents

Better Documentation Guide Display

In response to user feedback, we've updated the visual display of Documentation Guides, to create a clearer distinction between the guide description text and the teaser text for the content within the guides.

Promoting hosting listings on the Download & Extend page

To leverage Drupal to the fullest requires a good hosting partner, and so we've begun promoting our hosting listings on the Download and Extend page. We want Drupal.org to provide every Drupal evaluator with all of the tools they need to achieve success—from the code itself, to professional services, to hosting, and more.

Composer

Sub-tree splits of Drupal are now available

Composer Façade

For developers using Composer to manage their projects, sub-tree splits of Drupal Core and Components are now available. This allows php developers to use components of Drupal in their projects, without having to depend on Drupal in its entirety.

DrupalCI

Automatic Requeuing of Tests in the event of a CI Error

DrupalCI logo

In the past, if the DrupalCI system encountered an error when attempting to run a test, the test would simply return a "CI error" message, and the user who submitted the test had to manually submit a new test. These errors would also cause the issues to be marked as 'Needs work' - potentially resetting the status of an otherwise RTBC issue.

We have updated Drupal.org's integration with DrupalCI so that instead of marking issues as needs work in the event of a CI Error, Drupal.org will instead automatically queue a retest.

Bugfix: Only retest one environment when running automatic RTBC retests

Finally, we've fixed a bug with the DrupalCI's automatic RTBC retest system. When Drupal HEAD changes, any RTBC patches are automatically retested to ensure that they still apply. It is only necessary to retest against the default or last-used test environment to ensure that the patch will work, but the automatic retests were being tested against every configured environment. We've fixed this issue, shortening queue times during a string of automatic retests and saving testing resources for the project.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who made it possible for us to work on these projects. In particular we want to thank:

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

May 24 2017

by David Snopek on May 24, 2017 - 9:30am

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Critical security release for the AES encryption module.

The AES module provides an API for encrypting and decrypting data via AES. It also allows storing Drupal passwords encrypted in the database (rather than hashed) which can allow site administrators with high enough permissions to view user passwords.

Previously, the module implemented AES poorly, such that the encryption was weakened, which could have made it easier for an attacker to decrypt the data given enough examples of the encrypted output.

(A note about the timing of this release: the AES module was unsupported on March 1st, and we started working on a fix right away in the D6LTS queue. We usually release D6LTS patches the same day the D7/D8 patches are posted, or two weeks after a module is unsupported; however, in this case we had only a single Enterprise customer using AES, so we worked on it according to a timeline dictated by them, which involved testing their custom modules using the AES API with their team. So, we're releasing this after it's been fully tested and deployed for our one affected customer - if more customers had been affected, it would have been released same-day, as usual.)

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the AES module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

May 24 2017

Drupal VM on Docker Hub

Drupal VM has used Vagrant and (usually) VirtualBox to run Drupal infrastructure locally since its inception. But ever since Docker became 'the hot new thing' in infrastructure tooling, I've been asked when Drupal VM will convert to using Docker.

The answer to that question is a bit nuanced; Drupal VM has been using Docker to run its own integration tests for over a year (that's how I run tests on seven different OSes using Travis CI). And technically, Drupal VM's core components have always been able to run inside Docker containers (most of them use Docker-based integration tests as well).

Docker usage, however, was always an undocumented and unsupported feature of Drupal VM. But no longer—with 4.5.0, Drupal VM now supports Docker as an experimental alternative to Vagrant + VirtualBox, and you can use Drupal VM with Docker in one of two ways:

  1. Use the drupal-vm Docker Hub container.
  2. Use Drupal VM to build a custom Docker image for your project.

The main benefit of using Docker instead of Vagrant (at least at this point) is speed—not only is provisioning slightly faster (or nearly instantaneous if using the Docker Hub image), but performance on Windows and Linux is decidedly better than with VirtualBox.

Another major benefit? The Drupal VM Docker image is only ~250 MB! If you use VirtualBox, box files are at least twice that size; getting started with Drupal VM using Docker is even faster than Vagrant + VirtualBox!

Use the Docker Hub image

The simplest option—if you don't need much customization, but rather need a quick LAMP stack running with all Drupal VM's defaults—is to use the official Drupal VM docker image. Using it with an existing project is easy:

  1. Copy Drupal VM's example Docker Compose file into your project's root directory.
  2. Customize the Docker Compose file for your project.
  3. Add an entry to your computer's hosts file.
  4. Run docker-compose up -d.

If you're using a Mac, there's an additional step required to make the container's IP address usable; you currently have to create an alias for the IP address you use for the container with the command sudo ifconfig lo0 alias 192.168.88.88/24 (where the IP address is the one you have chosen in your project's Docker Compose file).
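
Put together, a minimal first run might look something like the following sketch. It assumes the example Docker Compose file has already been copied into the project root and customized (steps 1-2 above); the hostname is just an example, and 192.168.88.88 is the example IP from the Docker Compose file mentioned above:

# Map an example hostname to the container's IP address.
echo "192.168.88.88  drupalvm.test" | sudo tee -a /etc/hosts

# macOS only: alias that IP on the loopback interface.
sudo ifconfig lo0 alias 192.168.88.88/24

# Start the container in the background.
docker-compose up -d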

You can even customize the default image slightly using a Dockerfile and changing one line in the Docker Compose file; see Add a Dockerfile for customization.

Want a real-world example? See the Site for Drupal VM Prod Deployment Demonstrations codebase on GitHub—it's using this technique for the local environment.

Use Drupal VM to 'bake and share' a custom image

The second way you can use Drupal VM with Docker is to use some built-in functionality to build a completely custom Docker image. For teams with particular requirements (e.g. using Varnish, Solr, and PostgreSQL), you can configure Drupal VM using a config.yml file as usual, but instead of requiring each team member to provision a Drupal VM instance on their own, one team member can run composer docker-bake to build a Docker container.

Then, save the image with composer docker-save-image, share it with team members (e.g. via Google Drive, Dropbox, etc.), then each team member can load in the image with composer docker-load-image.

See the documentation here: 'Bake and Share' a custom Drupal VM Docker image.
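
A rough sketch of the round trip, assuming the composer scripts work as described above (how you share the resulting image archive, e.g. via Google Drive or Dropbox, is up to you):

# One team member: build a custom image from config.yml and save it to an archive.
composer docker-bake
composer docker-save-image

# Each teammate: load the shared archive so the image is available locally.
composer docker-load-image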

FAQs

Since there are bound to be a lot of questions surrounding experimental Docker support, I thought I'd stick a few of the frequently asked questions in here.

Why wasn't this done sooner?

The main reason I have held off getting Drupal VM working within Docker containers was because Docker's support for Mac has been weak at best. There were two major issues that I've been tracking, and thankfully, both issues are resolved with the most recent release of Docker:

  1. Using Docker on a custom IP address (so you can have multiple Drupal VM instances on different IP addresses all using port 80, 443, etc.).
  2. Docker for Mac is known for sluggish filesystem access when using default volumes.

The latter issue was a deal-breaker for me, because the performance using a Docker container with a Drupal codebase as a volume was abysmal—it was 18 times slower running the same Drupal codebase within a Docker container vs. running it in VirtualBox with an NFS mounted shared folder.

In the issue File system performance improvements, Docker has already added cached volume support, where reads are cached for native filesystem read performance, and support for delegated volumes will be added soon (which allows writes to be cached as well, so operations like composer update on a volume will not be sluggish).

As of May 2017, you need to download Docker's Edge release to use the cached or delegated volume options on Mac. For a good historical overview of these features, read File access in mounted volumes extremely slow.

Why aren't you using Vagrant with the Docker provider?

We're looking into that: Consider allowing use of Vagrant with docker provider to run and provision Drupal VM.

You're doing Docker wrong. You shouldn't use a monolith container!

Yeah, well, it works, and it means Docker can function similarly to a VirtualBox VM so we don't have to have two completely different architectures for Docker vs. a VM. At some point in the future, Drupal VM may make its Docker-based setup 'more correct' from a microservices perspective. But if you're pining for a Docker setup that's Docker-centric and splits things up among dozens of containers, there are plenty of options already.

You're doing Docker wrong. You're using privileged!

Yeah, well, it works. See the above answer for reasoning.

I wish there were other variations of the Drupal VM image on Docker Hub...

So do I; currently Docker Hub doesn't easily support having multiple Dockerfiles with the same build context in an automatically-built Docker Hub image. But Drupal VM might be able to build and manage multiple image versions (e.g. one with LEMP, one with LAMP + Solr, etc.) using a CI tool soon. I just haven't had the time to get something like this running.

Will this make Windows development awesome?

Sort-of. The nice thing is, now you can use Docker in Windows and drop the baggage of Vagrant + VirtualBox (which has always been slightly painful). But as Docker support is currently experimental, you should expect some bumps in the road. Please feel free to open issues on GitHub if you encounter any problems.

Should I use a Docker container built with Drupal VM in production?

Probably not. In my opinion, one of the best things about Docker is the fact that you can ship your application container directly to production, and then you can guarantee that your production environment is 100% identical to development (that whole 'it works on my machine' problem...).

But currently Drupal VM (just like about 99% of other local-environment-focused tools) is more geared towards development purposes, and not production (though you can use Drupal VM to build production environments... and I do so for the Drupal VM Production Demo site!).

May 24 2017
May 24

Spam is a problem that never goes away. Email spam. Comment spam.

This site has long used Mollom to protect the comment section and the contact form. Mollom is closing down on 2nd April 2018, so lots of webmasters will be looking for alternatives.

I'd like to introduce you to Project Honey Pot.

By the way, three quick comments about the links to Project Honey Pot in this article. (i) They have a "referrer" query string in. They don't have an affiliate scheme that pays out real money. It just gives what they call "karma" to my user account: They like to keep track of what their users give back to the project, and referring others to their site is one way to pay it forward. (ii) I find, semi-frequently, that their site is down with nginx "bad gateway" errors. If you get that problem, try again in 5 minutes, and the site should be back up. (iii) They don't use https. They should.

What Is Project Honey Pot?

Project Honey Pot started in 2004, and is owned by Unspam Technologies.

Here's how they describe themselves:

Project Honey Pot is the first and only distributed system for identifying spammers and the spambots they use to scrape addresses from your website. Using the Project Honey Pot system you can install addresses that are custom-tagged to the time and IP address of a visitor to your site. If one of these addresses begins receiving email we not only can tell that the messages are spam, but also the exact moment when the address was harvested and the IP address that gathered it.

I'm going to have a go at explaining more clearly what they do, and how it works:

How Do They Collect Data?

A "honey pot" is a web page, email address, or other online service that you want spammer / abusers to visit. Typically, you want this so that you can identify who they are, and block them, or so that you can learn about how they work. You could create your own honey pot webpage or email address, and block any abusers you find. But that will be very ineffective: Spammer may only visit your site once, and by then it's too late.

Project Honey Pot invites website owners to create a honey pot page on their site. This page will, most frequently, contain an email address that spammers will harvest. No genuine emails will be sent to that address, so any email sent to that address is spam. By using a unique email address each time that honey pot page is served, Project Honey Pot can work out exactly which visit to your site led to the address being harvested. They can then analyse the visit, to work out how to block that same spammer / harvester in future.

Sometimes, the honey pot pages will contain other things, maybe a fake contact form, or a fake form to leave a comment. Again, anyone who completes that form is a spammer.

What Do They Need?

To run this infrastructure, they need people to donate two kinds of technology:

  • Webmasters can donate a webpage on their site to act as a honeypot. It's dead easy. Most websites run on servers that can serve PHP pages. You can log into your account on the Project Honey Pot site, and they'll walk you through creating a bespoke .php file for your site. Install it in a directory of your choice, activate it, and it's good to go. If, for some reason, your web host doesn't serve PHP pages, choose between ASP (.NET), Perl, Python or a few others.
  • Domain registrants can donate an MX record. Project Honey Pot need a plentiful supply of email addresses, so that each visit to a honeypot can present a unique email address. For that, they need lots of domains (or subdomains) where all email sent to that domain can go to their mail servers to be analysed. (If they only had a few domains that they used repeatedly, spammers would learn the domains never to send to). Many domains never need to receive email, so can afford to send their email to Project Honey Pot servers instead, and any domain could do this for a subdomain created for this purpose.

If lots of people donate those things, they'll have lots of web page honey pots, and lots of email addresses, and they can harvest lots of data about the activities of spammers.

Some website owners can't (or don't wish to) install a honey pot page of their own. Project Honey Pot will give you a "quick link" instead: You can link to someone else's honey pot page. That way, you can still help generate bot traffic for the honey pot pages.

How Do You Use Their Data?

So Project Honey Pot gather lots of data, crowd-sourcing it, on spammers and their activity. How can you make use of this to prevent spammers visiting your website?

They offer an HTTP Blacklist (or http:bl) service. This provides an API whereby you can query their database to find out if a given visitor is likely to be a spammer. This uses DNS, so each query is relatively quick. You can decide how aggressive this check will be. They return a threat rating for any given IP address that tells you just how much spam activity they've seen from that address. You can decide how high the rating must be before you block the visitor.
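To make that concrete, here is a minimal PHP sketch of what an http:bl lookup looks like. The access key is a placeholder you obtain from Project Honey Pot, and the threat threshold of 25 is an arbitrary choice; per their documented response format, a listed IP resolves to 127.days.threat-score.visitor-type.

function example_httpbl_is_suspicious($ip, $access_key = 'abcdefghijkl', $threshold = 25) {
  // Query name format: key.reversed-ip-octets.dnsbl.httpbl.org.
  $reversed = implode('.', array_reverse(explode('.', $ip)));
  $host = $access_key . '.' . $reversed . '.dnsbl.httpbl.org';

  $result = gethostbyname($host);
  if ($result === $host) {
    // No DNS record: the IP address is not listed at all.
    return FALSE;
  }

  // A listed address resolves to 127.D.T.V where D = days since last
  // activity, T = threat score (0-255) and V = visitor type bitmask
  // (1 = suspicious, 2 = harvester, 4 = comment spammer).
  list(, $days, $threat, $type) = explode('.', $result);
  return (int) $type > 0 && (int) $threat >= $threshold;
}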

To use their http:bl service, you need an API key. For that, you need to create an account at Project Honey Pot. You also need to be an active contributor to their project, which is entirely reasonable. You do this by any one of the following:

  • Running your own honey pot
  • Giving them an MX record
  • Referring other people to their website

Those who help the service to run can use its data to protect their own sites.

How To Use Project Honey Pot in Drupal

Lastly, how do you use the http:bl service in a Drupal site?

In one of two ways.

Option 1: Implement Bad Behavior. I know — it's misspelt.

Bad Behavior is a set of PHP scripts which prevents spambots from accessing your site by analyzing their actual HTTP requests and comparing them to profiles from known spambots. It goes far beyond User-Agent and Referer.

There is a Bad Behavior Drupal module.

The pluses:

  • It's a mature project, with some experienced Drupal contributors behind it.
  • It's dead-easy to install.
  • If you use Drush, enabling the module will automatically download the Bad Behavior libraries.

The minuses:

  • The issue queue and git repository both suggest that the module is very minimally maintained. (Last commit: 21 Oct 2014)
  • New versions of the Bad Behavior library take forever for the module to support officially. [The most recent version of Bad Behavior is 2.2.19 released 25 Aug 2016. The module asks for 2.2.15, released 24 Dec 2013.]
  • Only Drupal 7 is supported, with no hints of work on Drupal 8 even being planned, let alone begun. That's a shame: it's a good module, and a good library.

What's this got to do with Project Honey Pot? Bad Behavior implements support for Project Honey Pot. It's optional — you can use Bad Behavior without Project Honey Pot. But right there, within your Drupal module configuration settings, you can set the Project Honey Pot API key, and you're away.

Option 2: Use the Project Honey Pot module

There is an http:bl module. It says it's minimally maintained, but there's an active Drupal 8 (-dev) branch. The last commit (at time of writing) was April 10 2017 for 8.x-1.x and March 25 2017 for 7.x-1.x. This will allow you to use http:bl directly in your Drupal site.

The project page says "http:BL has been adopted for use to enhance protection on Drupal.org." I'm unclear whether the module itself is in production use on drupal.org, or whether that claim merely refers to the http:bl service.

Over To You

Comments are open. If you've got experiences of Project Honey Pot (good or bad), with or without Bad Behavior, please pile in.

I'll monitor this site to see how effective it is at reducing spam. What I'm watching for is a reduction in the number of spam attempts that Mollom has to block.

May 24 2017
May 24

It may sometimes be necessary to render a single field of a node or other entity: for example, a simplified display of content related to the content being viewed, reusing specific fields in other contexts, and so on. Rendering a field programmatically can be problematic for the Drupal 8 cache invalidation system, because the resulting render array will not contain the cache tags of the source entity. The field, displayed in another context, therefore cannot be invalidated when the source node is updated.

Let's take a look at some solutions available to us.

Using View Mode

To avoid this problem, and to avoid managing cache tags by hand (Drupal 8 already does this very well, and certainly better than we would), we can create a specific view mode for the entity in question, and this view mode will only contain the field we want to display. We then render the field individually by simply displaying the content in this specific view mode, and all the cache tags and contexts linked to the source content are added automatically to the page in which the field is used. We do not have to manage the source node's cache tags ourselves; Drupal 8 core takes care of it.
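For example, a minimal sketch of that approach (assuming a view mode named related_teaser has been configured to display only the desired field):

// Render node 2 in a dedicated view mode that only shows the field we
// need; the node's cache tags and contexts travel with the render array.
$source = \Drupal::entityTypeManager()->getStorage('node')->load(2);
$build = \Drupal::entityTypeManager()->getViewBuilder('node')->view($source, 'related_teaser');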

Rendering an individual field programmatically

This view mode solution works as long as this type of need remains marginal and the number of fields to be displayed individually is limited. If we had to create as many view modes as there are individual fields, for a very large number of fields, that option would quickly become unwieldy, time consuming and very painful to maintain.

We can then use a few lines of code to render this individual field, and be able to inject it into any page.

This snippet will allow us to retrieve the render array for a field individually.

/**
 * Implements hook_preprocess_HOOK() for node templates.
 */
function my_module_preprocess_node(&$variables) {
  /** @var \Drupal\node\NodeInterface $node */
  $node = $variables['elements']['#node'];

  $entity_type = 'node';
  $entity_id = 2;
  $field_name = 'body';

  /** @var \Drupal\node\NodeInterface $source */
  $source = \Drupal::entityTypeManager()->getStorage($entity_type)->load($entity_id);
  $viewBuilder = \Drupal::entityTypeManager()->getViewBuilder($entity_type);
  // Start with an empty render array (not a string) so cache metadata can be attached.
  $output = [];

  if ($source && $source->hasField($field_name) && $source->access('view')) {
    $value = $source->get($field_name);
    // Build the render array for this single field, using the "full" view mode settings.
    $output = $viewBuilder->viewField($value, 'full');
    // Attach the source entity's cache tags so this output is invalidated
    // whenever the source node is updated.
    $output['#cache']['tags'] = $source->getCacheTags();
  }

  if ($node->id() == '1') {
    $variables['content']['other_body'] = $output;
  }

}

In this example, we retrieve the body field from a node (with the id 2), build its render array using the "full" view mode, and then add the cache tag of the source node (node:2) to that array. Finally, we add this field to the variables supplied to the Twig template for the node whose id is 1.

Without the source node's cache tag, added by the line below, the body field rendered in another context will remain stale when the source node is updated. Or worse, if we unpublish the source content, part of its content will remain visible in another context.

$output['#cache']['tags'] = $source->getCacheTags();

You can verify this in the response headers of the page for the content (node 1): without this addition, only the cache tag of the current node (node:1) is present.

Page headers showing the cache tags (debug)

By adding the source node's cache tag to the render array of the individual field, we ensure that this field is always up to date, regardless of the context in which it is displayed. We can then check the cache tags of the content (node 1) into which we injected the field coming from node 2.

Page headers showing the cache tags (debug)

The cache tag node:2 is now present in the headers, ensuring that this page will be invalidated as soon as the source content is modified. We can now confidently use this method to inject individual fields of any entity into another context.

The key is simply not to forget the cache tags (among other cache metadata), and, during the development phase, to remember to re-enable the cache regularly (who doesn't disable the caches while developing?). As we have seen, the cache system, its management and its invalidation must be an integral part of the development process; otherwise there will certainly be issues when deploying the Drupal 8 project to production.

May 24 2017
May 24

by David Snopek on May 23, 2017 - 10:23pm

Last week, I presented on "Docker & Drupal for local development" at Drupal414, the local Drupal meetup in Milwaukee, WI.

It included:

  • a basic introduction to the why's and how's of Docker,
  • a couple live demos, and
  • the details of how we use Docker as our local development environment to support & maintain hundreds of Drupal sites here at myDropWizard

The presentation wasn't recorded at the time, but it was so well received that I decided to record it again at my desk so I could share it with a wider audience. :-)

Here's the video:

Video of Docker &amp; Drupal for Local Development

(Sorry for the poor audio! This was recorded sort of spontaneously...)

And here are the slides.

Please leave any questions or comments in the comments section below!

May 24 2017
May 24

Sometimes you want a View that follows the internal logic of the filters you set up on the View, but can also have some items hand-selected or curated to the top of the View. Another way to describe it is a Nodequeue View that is backfilled with some other View-based logic, so that you end up with a full display regardless of how many items are actually in the Nodequeue.

To do this requires three adjustments to the View (assuming you have already built the normal View logic based on filters that are separate from the Nodequeue).

  1. Make the Nodequeue a relationship to the View.
  2. Add the Nodequeue to the sort criteria.
  3. Restructure the filter settings to make it the Nodequeue logic OR the Filter logic.

Example: Nodequeue View with random Backfill

Let's say you have a 3 item View that gets used to display some promoted items on your home page.  You want the View to be populated by anything in the Nodequeue and then randomly backfilled with any other item(s) that match some filter criteria if the Nodequeue does not contain three items.

0) To start, create your View that has a maximum of 3 items and set the filter(s) to use your backfill criteria (a status of published and limited to whatever entities you are using) and a sort of Global: Random to randomly pick from items that meet the filter criteria.

1) Add your Nodequeue as a relationship.

Add nodequeue relationship

You want to limit the relationship to a specific Nodequeue. The relationship should not be required, or you will not have anything to backfill with.

2) Add the Nodequeue as sort criteria to the View.

Nodequeue sort criteria

Since we want the Nodequeue items to come first, and in their queue order, we have to place this sort criterion ahead of the rest of the View's sort criteria (which in this case is random).

3) Adjust the filter criteria and break it into logical sections. The first section is the set of filters that must be applied to all items, regardless of whether they are in the Nodequeue or not (the purple region below).

Nodequeue backfill filter arrangement

Then you need to create another filter group, and in this group put the filters that represent either the default logic OR the Nodequeue. The default logic in this case is that an audience field matches some criteria. The trick is to set the operator within this filter group to OR.

Now when you add, delete or rearrange items in the Nodequeue, the View will match the order of the Nodequeue, and if you don't have enough items in your queue, it will backfill from other items that meet your criteria.

Caching Issues: By default, updating a nodequeue will not cause the cache on the View to expire if the View is cached. If you need the updates to be immediately seen by anonymous users, you can implement hook_nodequeue_update() to clear the cache on any changes to that nodequeue, as sketched below.
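A minimal Drupal 7 sketch of that hook (the module name is hypothetical, and the parameters the hook receives are omitted here, so check nodequeue.api.php for the exact signature; flushing the whole Views data cache bin is blunt but simple):

/**
 * Implements hook_nodequeue_update().
 */
function mymodule_nodequeue_update() {
  // Expire all cached Views output so the updated queue shows up immediately.
  cache_clear_all('*', 'cache_views_data', TRUE);
}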

May 24 2017
May 24

On the ELMS:LN team, we’ve been working a lot with polymer and webcomponent based development this year. It’s our new workflow for all front-end development and we want Drupal to be the best platform for this type of development. At first, we made little elements and they were good. We stacked them together, and started integrating them into our user interfaces and polyfills made life happy.

But then we started doing data integrations. We wanted more than just static, pretty elements; we wanted data-driven applications that are (basically) headless, or hybrid solutions that embed components into Drupal to streamline and decouple their development without being fully headless. I started writing a few modules which seemed to have a TON of code that was the same between them. So, a few refactors and a lot of whiteboarding later, we've now got the ability to do autoloading of polymer-based one-page apps just by enabling modules and dropping apps in discoverable directories!

This approach keeps the design team as disconnected as possible from Drupal while still being able to interface with it in a modular fashion. Think of being able to roll out a new dashboard module with all of its display contained in a single element; now you've got one-page app development workflows. Your design / front-end team doesn't need to know Drupal (let's say this again in bigger text):

YOUR FRONT END TEAM DOESN’T NEED TO SWEAR AT DRUPAL ANYMORE

No, not by knowing twig, but by learning a technology closer to the browser (that works anywhere) in Web Components!

Check out the Readme.md file for more details about this workflow as text or watch this video!

May 23 2017
May 23

What did we bring to DrupalCon Baltimore?

Aside from some awesome stickers and super-soft shirts to give out, we came with knowledge to share! Chromatic presented a total of 4 sessions, as well as moderating a Birds of a Feather session. If you missed any of them, you can read the slides or watch them online!

Culture is Curated

Dave presents "Culture is Curated"

Work/Life Balance - You CAN Have it All!

Alanna presents "Work/Life Balance - You CAN Have it All!"

JavaScript ES6: The best vanilla you’ve ever tasted

Ryan presents "JavaScript ES6: The Best Vanilla You've Ever Tasted"

Code Standards: It's Okay to be Yourself, But Write Your Code Like Everyone Else

What did we do at DrupalCon Baltimore?

Since Chromatic is a distributed company, DrupalCon is an important chance for team bonding. We went out for some great team dinners and drinks, hung out at the Lullabot/Pantheon party at the Maryland Science Center, and enjoyed the company of dinosaurs and a lot of great folks!

We also attended a few of the Monday summits: the Non-Profit Summit, the Media and Publishing Summit, and the Higher Ed Summit. In addition, we had a volunteer at the Drupal Diversity & Inclusion booth.

What did we take away from DrupalCon Baltimore?

A lot of our team is excited about GraphQL after attending a session on it, as well as the ever-expanding world of decoupled Drupal. Pinterest’s case study on integrating Pattern Lab into a Drupal 8 theme got our front-enders excited, and some of our back-enders can’t wait to dig into the 35 Symfony components. JSON API has a lot of us pretty intrigued - overall, it’s fun to see what Drupal can do when you think outside of the box.

Märt thinks it’d be fun to code a bot so that his family has someone to talk to when he’s away at DrupalCon.

Christopher was happy to see that on the ops side of things, we’re up to date on best practices and doing what other people in the space are doing.

Chris and Elia at Camden Yards

We learned about debugging tools for front-enders, when to use {% embed %} in Twig templates and how powerful it can be in reusing structures/grids within another template. The live-coding demonstration of the back-end/front-end bits in a decoupled React app was really cool and informative. We learned about HTTP/2’s preconnect hint, which is an awesome performance win. We also heard about the importance of addressing mental health in the tech community.

Overall, we had a great time and learned a lot. We always come back from DrupalCon energized and ready to dig into new technologies, brush up our skills, and spruce up our website - so keep an eye out!

May 23 2017
May 23

In 2007, Jay Batson and I wanted to build a software company based on open source and Drupal. I was 29 years old then, and eager to learn how to build a business that could change the world of software, strengthen the Drupal project and help drive the future of the web.

Tom Erickson joined Acquia's board of directors with an outstanding record of scaling and leading technology companies. About a year later, after a lot of convincing, Tom agreed to become our CEO. At the time, Acquia was 30 people strong and we were working out of a small office in Andover, Massachusetts. Nine years later, we can count 16 of the Fortune 100 among our customers, saw our staff grow from 30 to more than 750 employees, have more than $150MM in annual revenue, and have 14 offices across 7 countries. And, importantly, Acquia has also made an undeniable impact on Drupal, as we said we would.

I've been lucky to have had Tom as my business partner and I'm incredibly proud of what we have built together. He has been my friend, my business partner, and my professor. I learned first hand the complexities of growing an enterprise software company; from building a culture, to scaling a global team of employees, to making our customers successful.

Today is an important day in the evolution of Acquia:

  • Tom has decided it's time for him to step down as CEO, giving him more flexibility with his personal time and letting him act more as an advisor to companies, the role that brought him to Acquia in the first place.
  • We're going to search for a new CEO for Acquia. When we find that business partner, Tom will be stepping down as CEO. After the search is completed, Tom will remain on Acquia's Board of Directors, where he can continue to help advise and guide the company.
  • We are formalizing the working relationship I've had with Tom during the past 8 years by creating an Office of the CEO. I will focus on product strategy, product development, including product architecture and Acquia's roadmap; technology partnerships and acquisitions; and company-wide hiring and staffing allocations. Tom will focus on sales and marketing, customer success and G&A functions.

The time for these changes felt right to both of us. We spent the first decade of Acquia laying down the foundation of a solid business model for going out to the market and delivering customer success with Drupal – Tom's core strengths from his long career as a technology executive. Acquia's next phase will be focused on building confidently on this foundation with more product innovation, new technology acquisitions and more strategic partnerships – my core strengths as a technologist.

Tom is leaving Acquia in a great position. This past year, the top industry analysts published very positive reviews based on their dealings with our customers. I'm proud that Acquia made the most significant positive move of all vendors in last year's Gartner Magic Quadrant for Web Content Management and that Forrester recognized Acquia as the leader for strategy and vision. We increasingly find ourselves at the center of our customers' technology and digital strategies. At a time when digital experiences mean more than just web content management, and data and content intelligence play an increasing role in defining success for our customers, we are well positioned for the next phase of our growth.

I continue to love the work I do at Acquia each day. We have a passionate team of builders and dreamers, doers and makers. To the Acquia team around the world: 2017 will be a year of changes, but you have my commitment, in every way, to lead Acquia with clarity and focus.

To read Tom's thoughts on the transition, please check out his blog. Michael Skok, Acquia's lead investor, also covered it on his blog.

Tom and dries
May 23 2017
May 23
Ditch Drupal 6 for the all-new Drupal 8

Migrate Drupal 6 to Drupal 8

Drupal 6 kicked off way back in 2008. For its time it was a major breakthrough in technology, and the platform supported many major websites including whitehouse.gov. Over its lifespan Drupal 6 had more than 700 contributed modules and 600 custom themes. It boasted a nicer menu structure and an easier installation process than its predecessors, as well as improved security and a handy drag-and-drop menu. Drupal 6 was well ahead of its time. Now it is unsupported, outdated and, frankly, old. It's time for you and your website to move on.

The complete history of Drupal

What’s new in Drupal?

Drupal 8 (released November 2015) comes with a whole set of new built-in gadgets, including mobile-responsive themes, built-in web services to make it an API-first CMS, an improved editorial experience, accessibility, powerful multilingual tools (at last), improved performance, HTML5, and better SEO and analytics tools. More than 18 months after its release, it has become reliably stable, secure, and ready for you to make the switch.

Check out our 7 Reasons why Now is the Right Time to Move to Drupal 8

Why Drupal 6 isn’t a safe bet anymore

Without support from the community, Drupal 6 is going to be exposed to more and more security risks. Its modules will become outdated and unwieldy, and users will struggle to get the performance they've come to expect from modern websites. While upgrading may seem like a daunting task, the business risks of remaining with Drupal 6 are far higher.

Migrations - easier than you think?

Believe it or not, Drupal 8 is stacked full of migration modules and toolsets to help you move your content from one platform to another. While many of these focus simply on moving a site between completely different platforms, there are some that are designed to assist with moving between versions of Drupal. Depending on how your website was developed these can be tricky to use, and can lead to many hours of rework ‘rebuilding’ your website at the other end. If your website is stacked full of custom features, you may find that stock migration modules don’t quite provide the service you need.

Partners in Migration

If you’re a tech-whizz with a small website and plenty of time, you might find migrating your site on your own an exciting and economically sound venture. However, Drupal has become such a user-friendly platform that many of its users’ skillsets are in marketing, communications and social relations. If that’s you, perhaps the thought of trying to move all your web content to another platform is so daunting you’ve been carefully looking the other way while Drupal 8 was released and took the world by storm.

With our assistance, your migration can not only be smooth and painless, but an opportunity to resolve some of those niggling website issues, and take a step forward into greater customer engagement. A shift to Drupal 8 can help you improve your conversions whilst making site maintenance easier.

Vardot - Drupal experts since 2011

Here at Vardot we’ve been supporting people since 2011. With our specialist team of Drupal experts we’re prepared to help migrate anything from a small two-page website to a large-scale site with multiple custom modules and integrations. Working with our team you’ll be on a first-name basis with our staff, and there is no shuffling between departments.

We believe in empowering our customers and our community - by giving back to the open source community. We promote a vibrant culture that benefits everyone involved. Working with us goes hand in hand with giving back, and you can be sure we’ll equip you with the skills and knowledge you need for the day-to-day management of your website moving forward.

If you have a site that needs migrating, or just a refresh, get in touch with us, we can’t wait to hear from you.

May 23 2017
May 23

Make your plans to join us for the Drupal Midwest Developer Summit, August 11-13, on the University of Michigan campus, in Ann Arbor MI.

Register here

The Event
Join us for 3 days this summer in Ann Arbor, Michigan, for the 2017 Midwest Drupal Summit.
For this year’s Summit, we’ll gather on the beautiful University of Michigan campus for three days of code sprints, working on things such as porting modules, writing and updating documentation, and informal presentations. We will start around 10AM and finish around 5PM each day.
Food
Lunch, coffee and snacks will be provided each day.

What’s New This Year at MWDS?
This year, we’re adding lightning talks (more Drupal learnings!) and a social outing (more Drupal fun!)

What’s The Same?

Relaxed, low-key sprinting and socializing with Drupal core contributors and security team members.

What you can expect:

  • An opportunity to learn from Drupal core contributors and mentors, including Angie “webchick” Byron, Michael Hess, Peter Wolanin, Neil Drumm and xjm.
  • Code sprints. Let’s clear out some queues!
  • Help Porting modules to Drupal 8.
  • Lightning talks
  • Security issue sprints
  • Documentation writing
  • Good food and good community.

Location

Ann Arbor is about 30 minutes by car from Detroit Metro Airport. Ann Arbor is also served by Amtrak.
Questions? Contact [email protected]

May 23 2017
May 23

Acquia has announced the end of life for Mollom, the comment spam filtering service.

Mollom was created by Dries Buytaert and Benjamin Schrauwen, and launched to a few beta testers (including myself) in 2007. Mollom was acquired by Acquia in 2012.

The service worked generally well, with the occasional spam comment getting through. The stated reason for stopping the service is that spammers have gotten more sophisticated, and that perhaps means that Mollom needs to try harder to keep up with the ever changing tactics. Much like computer viruses and malware, spam (email or comments) is an arms race scenario.

The recommended alternative by Acquia is a combination of reCAPTCHA and Honeypot.

But there is a problem with this combination: reCAPTCHA, like all modules that depend on the CAPTCHA module, disables the page cache for any form that has CAPTCHA enabled.

This is due to this piece of code in captcha.module:

// Prevent caching of the page with CAPTCHA elements.
// This needs to be done even if the CAPTCHA will be ommitted later:
// other untrusted users should not get a cached page when
// the current untrusted user can skip the current CAPTCHA.
drupal_page_is_cacheable(FALSE);

Another alternative that we have been using, and that does not disable the page cache, is the Antibot module.

To install the antibot module, you can use your git repository, or the following drush commands:

drush dis mollom
drush dl antibot
drush en antibot

Visit the configuration page for Antibot if you want to add more forms that use the module, or disable it for other forms. The default settings work for comments, user registrations, and user logins.

Because of the above mentioned arms race situation, expect spammers to come up with circumvention techniques at some point in the future, and there will be a need to use other measures, be they in antibot, or other alternatives.

May 23 2017
May 23

Last time, we gathered together DrupalCon Baltimore sessions about Project Management. Before that, we explored Case Studies. We promised that we would also look at some other areas, so this time we will see which sessions were presented in the area of Coding and Development.

Code Standards: It's Okay to be Yourself, But Write Your Code Like Everyone Else by Alanna Burke from Chromatic

In this session, attendees learned both formatting standards for their code and documentation standards, as well as some specifics for things like Twig, and object-oriented programming in Drupal 8. The session was appropriate for beginners and experts of Drupal and it also covered how to implement coding standards using tools like Coder and PHP Codesniffer, and how to make the editor do some of the work for Drupal users.

[embedded content]

 

Deep dive into D8 through Single Sign-On Example by Arlina Espinoza Rhoton from Chapter Three

In this session, the author went through a single sign-on (SSO) example module, which helped everybody understand how OOP works its magic in modules, making them easier to write, understand and debug. Along the way, they uncovered some of Drupal's design patterns.

[embedded content]

Devel - D8 release party by Moshe Weitzman from Drupal Association

The session covers the Devel project, which has been revitalized and revamped for Drupal 8. The attendees learned about all of Devel's new features and APIs.

[embedded content]

Drupal Commerce Performance by Shawn McCabe from Acro Media

This session gives an overview of performance problems with Drupal Commerce and Drupal 7 and ways to fix or mitigate them. It also touches changes coming with Commerce 2.x for Drupal 8 and how this will affect performance in Drupal, especially related to cache contexts and bigpipe.

[embedded content]

Entities 201 - Creating Custom Entities by Ron Northcutt from Acquia

This session covers code generation using Drupal Console, creating a custom module to “house” the custom entity, understanding basic object inheritance, folder naming and namespaces, permissions and routing, and fields and database storage. With that knowledge, attendees should be able to create a basic custom entity and custom module in under 5 minutes.

[embedded content]

Improving your Drupal 8 development workflow by Jesus Manuel Olivas from weKnow

This session will show you how to use Composer to improve your development workflow and reduce project setup and onboarding time. Furthermore, it will show you how to implement automated analysis tools for code review and code coverage, and finally how to build an artifact and deploy your project.

[embedded content]

Magento and Drupal fall in love: A new way to approach contextual commerce at enterprise scale by Mike DeWolf and Justin Emond from Third & Grove

In this session, authors drew on many Drupal and Magento integration projects and shared everything you need to know to successfully combine these two market-leading systems into a digital experience platform. More specifically, authors went over technical best practices for combining Drupal and Magento, shared their integration approach decision matrix, discussed how to make it scale at the enterprise level, and reviewed how to ensure outages don’t impact orders with a tool called the conductor.

[embedded content]

Masters of the Universe: Live-coding a React Application using Drupal Services by Erin Marchak and Justin Longbottom from Myplanet

In this session, authors coded live both the back- and the front-end of a decoupled React application. They used Drupal Console to generate the services, and React Scaffold to generate the components. This session is appropriate for React developers looking to dip their toes into Drupal, Drupal developers wanting to take a peek at React, and Architects looking to understand how the two combine.

[embedded content]

Migrate all the things! by Dave Vasilevsky from Evolving Web

In this session, the author talks about his experiences with several different Migrate-based workflows that he and his Evolving Web team had used before.

[embedded content]

Progressive Web Apps for Drupal - Reliable, Fast, Engaging by Ronald te Brake from GoalGorilla

In this session, attendees learned more about push notifications and how the company used this new progressive web app feature in their Drupal 8 distribution Open Social, to improve user engagement with their product.

[embedded content]

Rescue Me: Recovering a sad, broken Drupal by Matt Corks from Evolving Web

This session describes the process used to recover sites with no filesystem or database backups after a Drupalgeddon infiltration. It also describes the process used to rescue a site containing multiple major unsupported core patches. Moreover, it briefly mentions the related project management difficulties associated with having inherited a completely broken site, in particular, one belonging to an external client.

[embedded content]

Testing for the Brave and True by Gabe Sullice from Aten

In this session, the author discussed the how and the why from the get-go. He cleared as many obstacles as he could and presented his mistakes. Specifically, he taught everybody how to extend Drupal's testing classes like UnitTestCase, KernelTestBase, and WebTestBase to test their custom applications from top to bottom.

[embedded content]

Understanding the Dark Side: An Analysis of Drupal (and Other) Worst Practices by Kristen Pol from Hook 42

This session is a collection of some of the worst practices that are pretty common in the Drupal world and beyond. For example, if you don't know what "hacking core" is or why you shouldn't do it, you must listen to this session.

[embedded content]

May 22 2017
May 22

Drupal Modules: The One Percent — Footermap (video tutorial)

[embedded content]

Episode 28

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll investigate Footermap, a module which renders expanded menus in a block.

May 22 2017
May 22

Setting a clear list of expectations with the client for a project delivery goes a long way toward great client relationships. Mismatched and misunderstood project goals and targets always lead to dissatisfaction among team members, account heads, and all other stakeholders.

I manage a team of a few developers who build web applications in Drupal. While working on projects with my team, I have had the chance to practice a few of the points that I have mentioned in the article. It has not only kept us on track but also kept people happy and motivated.

What should you do?

Be involved from the beginning

When you begin a project, make sure that you and your team members are involved from the beginning. There are times when the team will expand and you have to accommodate a new member. Make sure to walk the new member through the whole project and not just the part they will be working on. This gives them the full picture and helps newcomers understand the context of the work being done.

Involve all stakeholders

Including decision makers and stakeholders during the project planning and goal setting phase will ensure that no unplanned task gets in. This will avoid any last minute changes or any further approvals that would need to be taken later on into the project and help with managing expectations.

Identify pain points

Know exactly what value your project will create; know the nitty-gritty of why and how, to fully understand the context of the project. To understand how to build the solution, the problem statement needs to be understood. That can be done with a proper requirement analysis.

Be transparent

Let the stakeholders know what is possible and what is not. Be clear with the outcomes and process. Also, let your team members know.

Underpromise and overdeliver

If you have a project that will take a month and a half to complete, promise two months or so. There might be bottlenecks, and you might run into unplanned problems that delay your delivery. It is best to set a timeline while keeping something in hand. That way, if you finish ahead of time, it's a win-win.

Have clear scopes and boundaries - Set realistic goals

When you know something won’t be possible, just let the stakeholders know. In a project define the scope of a task. A particular task can be executed in a lot of ways. Be transparent about the way you would do it and mention the scope of the task. There should not be space for ambiguity.

Set smaller targets, plan - prioritize

You have one end goal in mind - good. Break that up into smaller tasks; that way you can make sure you are always on track and on time. If you have to deliver a project in 6 months, make weekly project plans, then review, update and track the progress.

Make sure everyone understands their responsibilities and is accountable

When you assign a task to someone, make sure they understand their responsibility. This is a cut-throat world and everyone is busy. No one wants to take up additional responsibilities or clean up on someone else's behalf; it's just not right. For example, when you assign a coding task to someone, make them understand that they are not to push buggy code and wait for a QA to catch it. When you assign a task, it is important that the responsible person is also accountable for it.

Communicate - follow up with team

Communication is the key to any collaborative task. Have frequent reviews and meetings, and understand how you can help your team in the process. Remember that you sit between the client and the team. If there is a screw-up with the project or any of your tasks, it is highly likely that you have failed to communicate with your team, and that will be a real pain.

Communicate with client

Keep your client updated on recent happenings, have frequent meetings, and make sure that all of you are on the same page. Client communication will ensure everyone understands and is on board.

Escalate issues

Bottlenecks on projects are unavoidable, but not knowing about them is a serious offense. Right from the beginning, encourage your members to let you know about issues. If issues are escalated to the right authority at the right time, it will keep the project from going off track.

Update plan changes

Plans and tracks will often need to be altered, and new changes will need to be accommodated. Once you learn of a change from your members or from the stakeholders, explain it and let people know. Your members might be working with one project plan in mind while you have another; the sooner they learn about any change of plans, the better it is for managing expectations.

Project management is not a difficult task, and communication is the key. We put a lot of emphasis on client communication and stress the details; that way we ensure we set the right expectations for a project delivery. We deliver high-end experiences in Drupal for Media and Publishing, Hi-Tech and Pharma, all for a global clientele, while keeping projects on the right track.

Image Courtesy : https://newsignature.com/wp-content/uploads/2017/02/project-management…

May 22 2017
May 22

We have built a shop to buy digital and print subscriptions as part of the newspaper platform südostschweiz.ch: http://www.suedostschweiz.ch/somedia/shop. The whole site and the user and subscription management are closely integrated with an external subscription management system that takes care of things like recurring billing, for both invoices and credit cards. That system has no public interface, so if, for example, a credit card expires, the user needs to be able to renew the subscription through the website. Product information is synchronized with the external system and completed orders are sent to that system automatically for processing. It also does not support buying multiple subscriptions at once, so we don’t need a cart.

Aboshop

Commerce 2.x provides an out of the box shop experience that is quite different from our requirements for the suedostschweiz.ch project. Thus we had to change that. We were able to find solutions for all those challenges and worked closely with the CommerceGuys team to find ways to contribute back. We also worked on a few missing features as well as making existing features more flexible, so we could customize it for our use case.

Storing and reusing payment methods… but not really

Commerce 2.x has much improved support for storing payment methods and using them again. The default use case that it supports is to make those stored payment methods visible again to the users in the checkout process, allowing them to avoid entering the payment information again.

We internally need to store payment methods so we can send them to the subscription system. However, the vast majority of users only buy a new subscription if they don’t have one yet or if their old one expired, in which case they either have no stored credit card yet or it has expired.

That’s why we decided to replace the default payment gateway selection checkout pane with a custom one, that does not offer to re-use existing payment methods. Still, having the built-in default storage to store the credit card alias from the external payment gateway is a great improvement over Commerce 1.x.

Checkout existing customer

Different address handling

A second reason for that decision was that a stored payment method also has a corresponding billing address, so the selection of the billing address and the payment method in Commerce 2.x is combined: the payment gateway and billing address are selected/provided in the same pane, before the review step.

Our design and requirements called for a different workflow for the address information checkout page:

  • A required delivery address and optionally a different billing address
  • A few additional fields to enable existing customers to connect through a subscription number. By entering a valid subscription number we automatically fetch the address and fill out the address field.
  • The ability to provide a later subscription start date 
  • Custom validation to prevent users from buying a subscription for a newspaper that they already have.

In addition to replacing the payment gateway pane, we also replaced the address information pane with our own, which has all those fields and the validation logic in a single place and so simplifies maintenance.
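For readers who have not written one before, a checkout pane in Commerce 2.x is a plugin. The following is a minimal sketch of what such a replacement pane could look like; the module name, pane id and the subscription-number logic are hypothetical, while the base class and method names come from the commerce_checkout module:

namespace Drupal\mymodule\Plugin\Commerce\CheckoutPane;

use Drupal\commerce_checkout\Plugin\Commerce\CheckoutPane\CheckoutPaneBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * Collects delivery/billing addresses and a subscription number.
 *
 * @CommerceCheckoutPane(
 *   id = "mymodule_address_information",
 *   label = @Translation("Address information"),
 *   default_step = "order_information",
 * )
 */
class AddressInformation extends CheckoutPaneBase {

  /**
   * {@inheritdoc}
   */
  public function buildPaneForm(array $pane_form, FormStateInterface $form_state, array &$complete_form) {
    // Subscription number used to look up an existing customer's address.
    $pane_form['subscription_number'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Subscription number'),
    ];
    // A required delivery address and an optional billing address
    // sub-form would be added here as well.
    return $pane_form;
  }

  /**
   * {@inheritdoc}
   */
  public function validatePaneForm(array &$pane_form, FormStateInterface $form_state, array &$complete_form) {
    // Custom validation, e.g. reject a subscription the user already owns.
  }

  /**
   * {@inheritdoc}
   */
  public function submitPaneForm(array &$pane_form, FormStateInterface $form_state, array &$complete_form) {
    // Copy the collected values onto the order and its billing profile.
  }

}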

Renewing an expired credit card

In case a credit card expires or the payment failed for another reason, the customer receives an e-mail with a link. That link leads to a page where they can do a manual payment again, which is then again sent to the external system. We wanted to make this step as fast as possible and jump directly to a review/info checkout page where the customer can just click once to start the payment.

This was surprisingly easy to implement: we defined a second checkout flow in which all unnecessary checkout panes and pages are disabled, an order type that uses that checkout flow, and an order item type for that order type which does not have a referenced product. We then programmatically create an order item and an order, and send the user into the checkout process, roughly as sketched below.
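A rough sketch of that last step; the 'renewal' order and order item types, the price and the store id are placeholders for this example:

use Drupal\commerce_order\Entity\Order;
use Drupal\commerce_order\Entity\OrderItem;
use Drupal\commerce_price\Price;
use Drupal\Core\Url;
use Symfony\Component\HttpFoundation\RedirectResponse;

// An order item of a type that does not reference a purchasable product.
$order_item = OrderItem::create([
  'type' => 'renewal',
  'title' => t('Subscription payment renewal'),
  'quantity' => 1,
  'unit_price' => new Price('25.00', 'CHF'),
]);
$order_item->save();

// An order of the type whose checkout flow only contains the review and
// payment steps.
$order = Order::create([
  'type' => 'renewal',
  'state' => 'draft',
  'store_id' => 1,
  'uid' => \Drupal::currentUser()->id(),
  'mail' => \Drupal::currentUser()->getEmail(),
  'order_items' => [$order_item],
]);
$order->save();

// Send the customer straight into the checkout process for this order.
$url = Url::fromRoute('commerce_checkout.form', ['commerce_order' => $order->id()]);
return new RedirectResponse($url->toString());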

Edit variation

Synchronization of product variations and submitted orders

The external system defines the concept of an offer, which is a certain subscription with a given runtime, a price, a title as well as some other information like which payment gateways are allowed (monthly subscriptions can only be paid with credit card, others also with invoices).

There can also be special promotions. To avoid this information getting out of sync with the website, the shop manager only needs to select the offer and provide a few web-specific things; everything else is then automatically fetched from the API and updated in the background. Inline Entity Form made this fairly easy, as we can easily control which fields are displayed and hook into the process of building an entity from that.

The opposite happens when a user completes the checkout process: then we need to send that data to the external system so it can be reviewed and processed. To be able to do this, we defined a custom order workflow, which is easy to do in a YAML file, and used it for our order types. When an order reaches a certain step in that workflow, we collect all the information, convert it to the structure that the external system understands, and forward it.
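One way to react to such a workflow step is an event subscriber on the transition's post-transition event. A minimal sketch follows; the event name assumes a transition called "place", and the export service is a hypothetical custom service that would also need to be registered in mymodule.services.yml:

namespace Drupal\mymodule\EventSubscriber;

use Drupal\state_machine\Event\WorkflowTransitionEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Forwards placed orders to the external subscription system.
 */
class OrderExportSubscriber implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    // Fired by the state_machine module after the "place" transition.
    return [
      'commerce_order.place.post_transition' => 'onOrderPlace',
    ];
  }

  /**
   * Converts the order and sends it to the external system.
   */
  public function onOrderPlace(WorkflowTransitionEvent $event) {
    /** @var \Drupal\commerce_order\Entity\OrderInterface $order */
    $order = $event->getEntity();
    // Hypothetical custom service that maps the order to the external
    // system's structure and performs the HTTP call.
    \Drupal::service('mymodule.subscription_export')->send($order);
  }

}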

Checkout register

E-Mail only registration

The site doesn’t use usernames; the registration is customized to hide the username and build it from the e-mail address. Commerce has a built-in login-or-register checkout page that is hardcoded to a few specific fields and shows both the username and the e-mail.

Initially, we started an issue that provides a setting to control whether the e-mail should be shown or not, but that is a bit controversial and likely requires a bunch of additional settings to define how the username is defined etc. As we mentioned in the previous blog post, Commerce 2.x tries to avoid having many settings, so we had to try something else.

Our second and current approach is to improve the checkout pane instead so it is easier to customize and extend. We ended up needing that anyway, as we need to show a few additional fields on the registration form too.

Direct checkout

Not having a cart and instead going directly to the checkout is a fairly common requirement, so there was already an issue for this with other people looking for this feature.

We worked on that and provided a first patch that is currently being reviewed. A new setting changes the button to go directly to the checkout page, and any messages about the cart are skipped. Commerce 2.x currently has the same limitation as 1.x, that anonymous users can only go through the checkout if the order is in the cart session, and we currently have to work around that a bit. That also means the checkout and cart functionality are still tightly coupled. Checkout currently depends on cart, so you cannot have a checkout with the cart module completely disabled. That is possibly something that will still be improved.

Conclusion

When combining all those changes, we ended up replacing or extending almost every checkout pane and a few other components that Commerce 2.x provides to be able to achieve the UI and processes we required. We were however able to do so in a very clean way and re-use a lot of code because Commerce 2.x is designed to support exactly that, thanks to the flexibility of the framework underneath the default UI.

Integrating with external systems also became easier in Drupal 8/Commerce 2.x compared to similar requirements in a Commerce 1.x project. Converting data structures, error handling and general interaction with one or multiple external systems still require a considerable amount of code and development time, but services, plugins and improved APIs support developers in writing clean and maintainable code.

We are convinced that Commerce 2 is a great framework to use for both standard shops with products, shipping and a standard checkout process as well as heavily customized ecommerce solutions. We are looking forward to our next commerce projects.

Contact us if you are planning a Drupal Commerce project.

May 22 2017
tim
May 22

*Cross-posted from Millwood Online

Over the past month there has been a lot of focus on Drupal, the community. More recently it seems people are back to thinking about the software. Dave Hall and David Hernandez both posted eye-opening posts with thoughts and ideas about what needs doing and how we can move forward.

A one line summary of those posts would be "We should slim down core, make it more modular, and have many distros".

To a degree this makes sense; however, it could cause divergence. Core is not great at consistently following the same patterns, but contrib is even worse. As part of the Workflow Initiative specifically, there is a lot of work going on to try to get the Entity API aligned and to get many more entity types revisionable and publishable, using common base classes, traits, and interfaces. If we maintained Node, Block Content, Taxonomy, Comment, etc. all as separate projects, then there's a chance less of this would happen. Also, by doing this we are laying out foundations and setting examples to be followed.

One solution to this may be to follow Symfony (yet again): they have a monolithic project but then split it up into the various components, which are "read only" repos. It'd be pretty awesome if we could do this with Drupal. From there we could make Drupal downloadable without many of the core modules. People with the right skills could create a composer.json file to pull in exactly the parts of Drupal that are needed; others could use a form on d.o to select which parts are wanted and download a compiled zip.

What would be even more awesome is if we could abstract more of Drupal out of Drupal. Imagine if the Entity API were a generic PHP library. Imagine if you could create a Laravel or Symfony app with Nodes. This would be really tricky, especially since Dries announced the plans to make Drupal upgrades easy forever, but it is possible.

Currently most Drupal sites are still on 7, and here we're talking about what would be Drupal 9? Maybe we need to take a step back and look at why sites aren't being upgraded. Dave mentions "A factor in this slow uptake is that from a developer's perspective, Drupal 8 is a new application. The upgrade path from Drupal 7 to 8 is another factor." Another reason, though, is why would a company spend the tens of thousands upgrading to Drupal 8? It looks and works (from a user's point of view) the same as Drupal 7. Drupal is a CMS, a Content Management System, and the management of content is more or less the same. Yes, with initiatives like Workflow and Media this is changing, but even then similar functionality can be achieved in Drupal 7 with contrib modules. Will Drupal 8 be the version to skip, going straight from 7 to 9?

As Drupal is now pretty firmly an enterprise platform we need to look at this from a marketing point of view. What is going to sell Drupal 9? Why are people going to upgrade? What do they really want? Is a slimmed down core and more modular application really the selling feature that's wanted?

Drupal is a CMS, quoting Dave again "do one thing and do it well". We need to focus on making the authoring experience awesome, and the workflows that go along with it awesome too. This should all be done in a consolidated way to make managing Node content, Block content, etc just as intuitive as each other. If during this process we can also make things more modular, and less Drupally, that'd be awesome! 

May 22 2017
May 22

It has been a pleasure to work with the La Drupalera team, even from the United Kingdom and remotely. They made sure at all times that distance and language were not a problem, keeping us up to date on the project's progress and resolving every question and challenge during the Drupal development process right up to launch.

May 22 2017
May 22

What is a Rich snippet?

"Rich snippet" is a term used by search engines for the enhanced listing items on search engine result pages.

Rich snippets include elements like:

For eCommerce products:

  • Star ratings
  • Number of reviews
  • Product name
  • Product price
  • Availability

For content articles:

  • Author
  • Title
  • Image
  • Ratings
  • Publication date.

What does a Rich Snippet look like?

Basic snippet:

[screenshot: a basic search result snippet]

Rich Snippet:

[screenshot: a rich search result snippet]

What is Structured Data?

"Schema.org is a joint effort, in the spirit of sitemaps.org, to improve the web by creating a structured data markup schema supported by major search engines.

On-page markup helps search engines understand the information on web pages and provide richer search results. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts.

Search engines want to make it easier for people to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure." - Schema.org

Types of schemas that you can match your content to are defined here.

How do we use structured data?

The use of Structured Data / Rich Snippets / Schemas in your HTML markup is not required, but it benefits general SEO and everyone who uses your website, whether they are visitors or other site operators.

Just like Open Graph, the objective is to add as much context to the search result snippet as possible, giving the user relevant information that will convince them that your result is the one they are looking for. This is why these snippets have become known as rich snippets.

Rich Snippets Benefits

  • Search engines are able to return more relevant results by having more info
  • Users can determine the relevancy of specific results more easily thanks to added context
  • Site operators may benefit from better-qualified visitors and lower bounce rates, because users have more context about any given piece of content before they decide to click through

Getting Rich Snippets on your Drupal site

Rich Snippets are a privilege, not a right!

The fact that you have implemented structured data on your site does not mean search engines are going to show your snippet as a rich snippet. Google in particular will first analyse and assess your markup before starting to display richer results.

From what we have researched, and from our own experience, rich snippets tend to appear following this particular pattern, which starts as soon as your production site has structured data implemented.

  1. Google only starts analysing the new markup 10 to 14 days after it is first introduced on a website
  2. Once Google is happy with your implementation they will then start to show rich snippets for some pages
  3. After roughly 5-7 days those rich snippets will disappear
  4. Another 5-7 days later some rich snippets will reappear
    The rich snippets may be for a different set of pages, so don't panic if you can't find the same snippets
  5. Steps 3 & 4 may be repeated a number of times
  6. After roughly 8 weeks you will be rewarded with Rich Snippets throughout your site. These are likely to disappear and reappear based on where you are appearing in organic search

To get from step 1 to 6 you must not have any issues with your markup, so make sure to test it using this tool. Remember that Google and other search engines regularly change their guidelines for how structured data should be implemented, so make sure to keep up to date and test regularly!

How to configure Rich Snippets in Drupal 8?

The RDF UI module for Drupal helps us add extra data in our HTML that will produce Rich Snippets. Use Schema.org to match your content to the relevant schema (types of structured data).

You will also need to add your site's information in your HTML.

Use Google's documentation as a reference for testing that your schema has been defined correctly. Use Schema.org to define and create your structured data in JSON-LD format; they have examples for each type of schema.
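
As a simple illustration (the values below are placeholders, not a real product), structured data for an eCommerce product embedded in a page's HTML might look like this:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example product",
  "image": "https://example.com/images/example-product.jpg",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "24"
  },
  "offers": {
    "@type": "Offer",
    "priceCurrency": "GBP",
    "price": "19.99",
    "availability": "http://schema.org/InStock"
  }
}
</script>

This single block covers the product elements listed earlier (star rating, number of reviews, product name, price and availability), which is the information search engines use to build the rich result.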

The Next Step

If you'd like to discuss how to use social media marketing to engage with your audience, or improve the effectiveness of social media on your Drupal website, get in touch.

May 21 2017
May 21

Lightning has used the Layout Plugin module since before our first beta release. Starting in Drupal 8.3.0, the functionality provided by the Layout Plugin module was largely duplicated in Layout Discovery and released as part of the Core Experimental group. Lightning migrated to Layout Discovery in 2.1.1.

The Lightning team feels like it's a win anytime we can migrate from contrib to core. But another advantage is that, since Layout Discovery is in core, security issues can be filed against it in the core security issue queue, which is monitored by the Security Team. Layout Plugin, being in alpha, didn't have a security issue queue.

Technically, though, Layout Discovery is an Experimental core module, and the new Status Report page warns users if any Experimental modules are enabled. As a result, users of Lightning are presented with this unhelpful message when they visit /admin/reports/status:

The problem is, this message isn't actionable. Lightning made the decision to enable the module; the only way to make the warning go away would be to completely opt out of all of Lightning's Layout functionality.

To be clear, the Lightning team feels that the Layout Discovery module is certainly stable enough to run predictably and reliably on production websites. This warning from core is supposed to indicate that the underlying API might change or that it might ultimately be removed from the core package. Under either of those circumstances, Lightning would provide a migration script or otherwise support users.

We feel that warning a user after they (or their site builder) have made the decision to use an experimental module is non-actionable nagging. We support warning site builders when they install an experimental module, but not constantly reminding them of that decision.

Starting in 2.1.4, Lightning will include a core patch that removes the warnings for experimental modules on the status page. The patch does not affect the existing warning that is shown during installation of experimental modules.

There are two other "nagging" warnings that Lightning will remove in 2.1.4. Specifically, it will stop warning the user if the configuration sync directory is not writable and if update notifications are disabled (see the patch list below).

Related, there is also a larger discussion around what the requirements should be for reporting on the status page. Discuss!

Summary of new patches related to reporting that will be included in 2.1.4:

  • Remove scary 'experimental module' messages from appearing everywhere on the site (#2880374)
  • Config sync should not throw a warning when not being writable (#2880445)
  • Disable warning about update notifications (#2869592)
May 20 2017
May 20

To give more insight into Drupal Association financials, we are launching a blog series. This is the first one in the series and it is for all of you who love knowing the inner workings. It provides an overview of:

  • Our forecasting process
  • How financial statements are approved
  • The auditing process
  • How we report financials to the U.S. government via 990s

There’s a lot to share in this blog post and we appreciate you taking the time to read it.

Replacing Annual Budgets With Rolling Forecasts

Prior to 2016, the Drupal Association produced an annual budget, which is a common practice for non-profits. However, two years ago, we found that the Drupal market was changing quickly and that impacted our projected revenue. Plus, we needed faster and more timely performance analysis of pilot programs so we could adjust projections and evaluate program success throughout the year. In short, we needed to be more agile with our financial planning, so we moved to a rolling forecast model, using conservative amounts.

Using a rolling forecast means we don’t have a set annual budget. Instead, we project revenue and expense two years out into a forecast. Then, we update the forecast several times a year as we learn more. The first forecast of the year is similar to a budget. We study variance against this version throughout the year. As we conduct the additional forecasts during the year, we replace forecasts of completed months with actual expenses and income (“actuals”) and revise forecasts for the remaining months. This allows us to see much more clearly if we are on or off target and to adjust projections as conditions that could impact our financial year change and evolve. For example, if we learn that the community wants us to change a drupal.org ad placement that could impact revenue, we will downgrade the revenue forecast appropriately for this product.

In 2017, there will be three forecasts:

  • December 2016:  The initial forecast was created. This serves as our benchmark for the year and we run variances against it.
  • May 2017: We updated the forecast after DrupalCon Baltimore since this event has the biggest impact on both our expenses and revenue for the year.
  • October 2017: We will reforecast again after DrupalCon Vienna. This is our final update before the end of the year and will be the benchmark forecast for 2018.

Creating and approving the forecasts is a multi-party process.

  1. Staff create the initial forecast much like you would a budget. They are responsible for their income and expense budget line items and insert them into the forecasting worksheet. They use historical financials, vendor contracts and quotes, and more to project the amount for each line item and document all of their assumptions. Each budget line manager reviews those projections and assumptions with me. I provide guidance, challenge assumptions, and sign off on the inputs.

  2. Our virtual CFO firm, Summit CPA, analyzes the data and provides financial insight including: Income Statement, Balance Sheet, Cash Flow, and Margin Analysis. Through these reports, we can see how we are positioned to perform against our financial KPIs. This insight allows us to make changes or strengthen our focus on certain areas to ensure we are moving towards those KPIs - which I will talk about in another blog post. Once these reports are generated, the Drupal Association board finance committee receives them along with the forecasting assumptions. During a committee meeting, the committee is briefed by Summit and me. They ask questions to make sure various items are considered in the forecast and they provide advice for me to consider as we work to improve our financial health.

  3. Once the committee has reviewed the forecast and assumptions, the full board reviews them in an Executive Session. The board asks questions and provides advice as well. This review process happens with all three forecasts for the year.

Approving Financial Reports

As we move through the year, our Operations Manager and CFO team work together to close the books each month. This ensures our monthly actuals are correct. Then, our CFO team creates a monthly financial report that includes our financial statements (Income Statement and Balance Sheet) for the finance committee to review and approve. Each month the finance committee meets virtually and the entire team reviews the most recently prepared report. After asking questions and providing advice, the committee approves the report.

The full board receives copies of the financial reports quarterly and is asked to review and approve the statements for the preceding three months. Board members can ask questions, provide advice, and approve the statements in Executive Session or in the public board meeting. After approval, I write a blog post so the community can access and review the financial statements. You can see an example of the Q3 2016 financial statement blog here. The board just approved the Q4 2016 financials and I will do a blog post shortly to share the financial statements.

Financial Audits

Every two or three years the Association contracts to have the financial practices and transactions audited.  For the years that we do not conduct a full audit, we will contract for a “financial review” by our CPA firm (which is separate from our CFO firm) to ensure our financial policies and transactions are in good order.

An audit is an objective examination and evaluation of the financial statements of an organization to make sure that the records are a fair and accurate representation of the transactions they claim to represent. It can be done internally by employees of the organization, or externally by an outside firm.  Because we want accountability, we contracted with an external CPA firm, McDonald Jacobs, to handle the audit.

The Drupal Association conducts audits for several reasons:

  1. to demonstrate our commitment to financial transparency.

  2. to assure our community that we follow appropriate procedures to ensure that the community funds are being handled with care.  

  3. to give our board of directors outside assurance that the financial statements are free of material misstatements.

What do the auditors look at?  For 2016, our auditors will generally focus on three points:

  • Proper recording of income and expense: Auditors will ensure that our financial statements are an accurate representation of the business we have conducted. Did we record transactions on the right date, to the right account, and the right class? In other words, if we said that 2016 revenue was a certain amount, is that really true?

  • Financial controls: Preventing fraud is an important part of the audit. It is important to put the kinds of controls in place that can prevent common types of fraud, such as forged checks and payroll changes. Auditors look to see that there are two sets of eyes on every transaction, and that documentation is provided to verify expenses and check requests.

  • Policies and procedures: There are laws and regulations that require we have certain policies in place at our organization. Our auditors will look at our current policies to ensure they were in place and, in some cases, had been reviewed by the board and staff.

The primary goal of the audit is for the auditor to express an opinion on two aspects of the financial statements of the Association: the financial statements are fairly presented, and they are in accordance with generally accepted accounting principles (GAAP). Generally accepted accounting principles are the accepted body of accounting rules and policies established by the accounting profession. The purpose of these rules is to promote consistency and fairness in financial reporting throughout the business community. These principles provide comparability of financial information.

Once our audit for 2016 is complete and approved by the board (expected in early summer), we can move to have the 990 prepared. We look to have this item completed by September 2017.

Tax Filing: The Form 990

As a U.S.-based 501c3 exempt organization, and to maintain this tax-exempt status, the U.S. Internal Revenue Service (IRS) requires us to file a 990 each year. This form is also filed with state tax departments. The 990 is meant for the IRS and state regulators to ensure that non-profits continue to serve their stated charitable activities. The 990 can be helpful when you are reviewing our programs and finances, but know that it’s only a “snapshot” of our year.

You can find our past 990s here.

Here are some general points to keep in mind when reviewing our 990.

FORM 990, PART I—REVENUES, EXPENSES, AND CHANGES IN NET ASSETS OR FUND BALANCES

Lines 8-12 indicate our yearly revenue: not only the total revenue (line 12), but also where we earned our income, broken out into four groups. Line 12 is the most important: total income for the year.

Lines 13-18 show expenses for the year and where we focused.

Cash Reserves are noted on lines 20-22 on page 1.

The 990 has a comparison of the net assets from last year (or the beginning of the year) and the end of the current year, as well as illustrates the total assets and liabilities of the Association.

FORM 990, PART II—STATEMENT OF FUNCTIONAL EXPENSES

Part II shows our expenditures by category and major function (program services, management and general, and fundraising).

FORM 990, PART III—STATEMENT OF PROGRAM SERVICE ACCOMPLISHMENTS

In Part III, we describe the activities performed in the previous year that adhere to our 501c3 designation.  You can see here that Drupal.org, DrupalCon and our Fiscal Sponsorship programs are noted.

FORM 990, PART IV—BALANCE SHEETS

Part IV details our assets and liabilities. Assets are our resources that we have at our disposal to execute on our mission.  Liabilities are the outstanding claims against those assets.

FORM 990, PART V—LIST OF OFFICERS, DIRECTORS, TRUSTEES AND KEY EMPLOYEES

Part V lists our board and staff who are responsible in whole or in part for the operations of an organization. These entries do include titles and compensation of key employees.

FORM 990, PART VI—OTHER INFORMATION

This section contains a number of questions regarding our operations over the year. Any “yes” answers require explanation on the following page.

Schedule A, Part II—Compensation of the Five Highest Paid Independent Contractors for Professional Services

We list any of our contractors, if we have paid them more than $50,000, on this schedule.

Once our 990 is complete and filed we are required to post the return publicly, which we do here on our website.  We expect to have the 2016 990 return completed, filed and posted by September 2017.

Phew. I know that was long. Thank you for taking the time to read all of the steps we take to ensure financial health and accuracy. We are thankful for the great team work that goes into this process. Most of all we are thankful for our funders who provide the financial fuel for us to do our mission work.

Stay tuned for our next blog in this series: Update on Q4 2016 financials (to follow up on our Q3 2016 financial update)

May 19 2017
May 19

The important first step for media support in core just landed in Drupal 8.4.x: a new beta experimental Media module to support storing media of various types. While Drupal core already has generic file upload and image upload support, the new module will support asset reuse and be extensible to support video, remote media, embedding, and so on.

This is a huge testament to individuals and organizations with shared interests pulling together, figuring out how to make it happen in core, and getting it done. 89 individuals (both volunteering their own time and from various companies all across the world) contributed both directly in the core patch and via involvement with the contributed Media Entity module:

That said, this is just the first step. (If you go and enable the core Media module, all it can do right now is give you an error message that no media types can be created.) The next steps are to add a file/document media plugin and an image media plugin so these types of media may be created on the site with the module. Then, widgets and formatters for the upload field and image field interfaces will be added so we can reproduce the existing core functionality with media. Adam Hoenich wrote up a concise summary of the next steps, and granular details are listed in the followup roadmap.

There are definitely more tasks than people available, so your contributions would be more than welcome! Now is the time to make sure media is integrated in a way that your projects can best utilize it. Get involved through the media IRC meetings happening at 2pm GMT every week in #drupal-media. (See https://www.drupal.org/irc for more information on Drupal IRC). Or, if you are available at other times, ask in the channel. The issues are listed on the Media Initiative plan.

Let's put the remaining pieces in place together!

May 19 2017
May 19

We are modernizing our JavaScript by moving to ECMAScript 6 (ES6) for core development. ES6 is a major update to JavaScript that includes dozens of new features. In order to move to ES6, contributors will have to use a new workflow. For more details, see the change record. We have adopted the Airbnb coding standards for ES6 code.

May 19 2017
May 19

It is critical that the Drupal Association remains financially sustainable so we can fulfill our mission into the future. As a non-profit organization based in the United States, the responsibility of maintaining financial health falls on the Executive Director and the Drupal Association Board.

Association board members, like all board members for US-based organizations, have three legal obligations: duty of care, duty of loyalty, and duty of obedience. Additionally, there is a lot of practical work that the board undertakes. This generally falls under the fiduciary responsibilities, which include overseeing financial performance.

The Drupal Association’s sustainability impacts everyone in the community. For this reason, we want to provide more insight into our financial process and statements with a series of blog posts covering the following topics:

  • How we create forecasts, financial statements, and ensure accounting integrity

  • Update on Q4 2016 financials (to follow up on our Q3 2016 financial update)

  • Which countries provide funding and which countries are served by that funding (a question asked in the recent public board meeting by a community member)

If you would like additional topics covered, please tell us via the comments section. 

May 19 2017
May 19

I recently had the opportunity to migrate content from a Drupal 6 site to a Drupal 8 site. This was especially interesting for me as I hadn’t used Drupal 6 before. As you’d expect, there are some major infrastructure changes between Drupal 6 and Drupal 8. Those differences introduce some migration challenges that I’d like to share.

The Migrate module is a wonderful thing. The vast majority of node-based content can be migrated into a Drupal 8 site with minimal effort, and for the content that doesn’t quite fit, there are custom migration sources. A custom migration source is a small class that can provide extra data to your migration in the form of source fields. Typically, a migration will map source fields to destination fields, expecting the fields to exist on both the source node type and destination node type. We actually published an in-depth, two-part blog series about how we use Drupal Migrate to populate Drupal sites with content in conjunction with Google Sheets in our own projects.

In the following example, we are migrating the value of content_field_text_author from Drupal 6 to field_author in Drupal 8. These two fields map one-to-one:

id: book
label: Book
migration_group: d6
deriver: Drupal\node\Plugin\migrate\D6NodeDeriver
source:
  key: migrate
  target: d6
  plugin: d6_node
  node_type: book
process:
  field_author: content_field_text_author
destination:
  plugin: entity:node

This field mapping works because content_field_text_author is a table in the Drupal 6 database and is recognized by the Migrate module as a field. Everyone is happy.

However, in Drupal 6, it’s possible for a field to exist only in the database table of the node type. These tables look like this:

mysql> DESC content_type_book;
+------------------------+------------------+------+-----+---------+-------+
| Field                  | Type             | Null | Key | Default | Extra |
+------------------------+------------------+------+-----+---------+-------+
| vid                    | int(10) unsigned | NO   | PRI | 0       |       |
| nid                    | int(10) unsigned | NO   | MUL | 0       |       |
| field_text_issue_value | longtext         | YES  |     | NULL    |       |
+------------------------+------------------+------+-----+---------+-------+

If we want to migrate the content of field_text_issue_value to Drupal 8, we need to use a custom migration source.

Custom migration sources are PHP classes that live in the src/Plugin/migrate/source directory of your module. For example, you may have a PHP file located at src/Plugin/migrate/source/BookNode.php that would provide custom source fields for a Book content type.

A simple source looks like this:

namespace Drupal\custom_migrate_d6\Plugin\migrate\source;

use Drupal\node\Plugin\migrate\source\d6\Node;

/**
 * @MigrateSource(
 *   id = "d6_book_node",
 * )
 */
class BookNode extends Node {

  /**
   * @inheritdoc
   */
  public function query() {
    $query = parent::query();

    $query->join('content_type_book', 'book', 'n.nid = book.nid');
    $query->addField('book', 'field_text_issue_value');

    return $query;
  }

}

As you can see, we are using our migration source to modify the query the Migrate module uses to retrieve the data to be migrated. Our modification extracts the field_text_issue_value column of the book content type table and provides it to the migration as a source field.
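
Optionally, the new column can also be described in the source plugin's fields() method so that it shows up alongside the other available source fields. A minimal sketch (the description text here is ours, added for illustration):

  /**
   * @inheritdoc
   */
  public function fields() {
    $fields = parent::fields();
    // Describe the extra column so it is listed as an available source field.
    $fields['field_text_issue_value'] = $this->t('Issue text from the content_type_book table');
    return $fields;
  }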

To use this migration source, we need to make one minor change to our migration. We replace this:

plugin: d6_node

With this:

plugin: d6_book_node

We do this because our migration source extends the standard Drupal 6 node migration source in order to add our custom source field.

The migration now contains two source fields and looks like this:

id: book
label: Book
migration_group: d6
deriver: Drupal\node\Plugin\migrate\D6NodeDeriver
source:
  key: migrate
  target: d6
  plugin: d6_book_node
  node_type: book
process:
  field_author: content_field_text_author
  field_issue: field_text_issue_value
destination:
  plugin: entity:node

You’ll find you can do a lot with custom migration sources, and this is especially useful with legacy versions of Drupal where you’ll have to fudge data at least a little bit. So if the Migrate module isn’t doing it for you, you’ll always have the option to step in and give it a little push.

Get In Touch

Questions? Comments? We want to know! Drop us a line and let’s start talking.

May 19 2017
May 19

DrupalCamp St. Louis logo - Fleur de Lis

DrupalCamp St. Louis 2017 will be held September 22-23, 2017, in St. Louis, Missouri. This will be our fourth year hosting a DrupalCamp, and we're one of the best camps for new presenters!

If you did something amazing with Drupal, if you're an aspiring themer, site builder, or developer, or if you are working on making the web a better place, we'd love for you to submit a session. Session submissions are due by August 1.

This year's Camp will kick off with a full day of Drupal training on Friday, September 22, then a full day of sessions and community networking on Saturday, September 23. Come and join us at DrupalCamp St. Louis—we'd love to see you! (Registration will open soon.)

May 18 2017
May 18

Imagine you have a view that lists upcoming events on your Drupal 8 site. There's a date filter that filters out any event whose start date is less than the current date. This works great until you realize that the output of the view will be cached in one or many places (dynamic page cache, internal page cache, Varnish, etc.). Once it's cached, Views doesn't execute the query and can't compare the date to the current time, so you may get older events sticking around.

One way of fixing this is to assign a custom cache tag to your view, and then run a cron task that purges that cache tag at least once a day, like so:

/**
 * Implements hook_cron().
 */
function YOUR_MODULE_cron() {
  // Invalidate the events view cache tag if we haven't done so today.
  // This is done so that the events list always shows the proper "start"
  // date of today when it's rendered. If we didn't do this, then it's possible
  // that events from previous days could be shown.
  // This relies on us setting a custom cache tag "public_events_block" on the
  // view that lists the events via the views_custom_cache_tag module.
  $state_key = 'events_view_last_cleared';
  $last_cleared = \Drupal::state()->get($state_key);
  $today = date('Y-m-d');
  if ($last_cleared != $today) {
    \Drupal::state()->set($state_key, $today);
    \Drupal::service('cache_tags.invalidator')->invalidateTags(['public_events_block']);
  }
}

Assuming you have cron running just after midnight, this will refresh the cache of the view's block and the page at an appropriate time so that events from the previous day are not shown.

May 18 2017
May 18

There is one significant trend that I have noticed over and over again: the internet's continuous drive to mitigate friction in user experiences and business models.

Since the internet's commercial debut in the early 90s, it has captured success and upset the established order by eliminating unnecessary middlemen. Book stores, photo shops, travel agents, stock brokers, bank tellers and music stores are just a few examples of the kinds of middlemen who have been eliminated by their online counterparts. The act of buying books, printing photos or booking flights online alleviates the friction felt by consumers who must stand in line or wait on hold to speak to a customer service representative.

Rather than interpreting this evolution as disintermediation or taking something away, I believe there is value in recognizing that the internet is constantly improving customer experiences by reducing friction from systems — a process I like to call "friduction".

Open Source and cloud

Over the past 15 years, I've watched open source and cloud computing solutions transform content management into digital experience management. Specifically, I have observed open source and cloud-computing solutions remove friction from legacy approaches to technology. Open source takes the friction out of the technology evaluation and adoption process; you are not forced to get a demo or go through a sales and procurement process, or deal with the limitations of a proprietary license. Cloud computing took off because it also offers friduction; with cloud, companies pay for what they use, avoid large up-front capital expenditures, and gain speed-to-market.

Cross-channel experiences

Technology will continue to work to eliminate inefficiencies, and today, emerging distribution platforms will continue to improve user experience. There is a reason why Drupal's API-first initiative is one of the topics I've talked and written the most about in 2016; it enables Drupal to "move beyond the page" and integrate with different user engagement systems. We're quickly headed to a world where websites are evolving into cross-channel experiences, which include push notifications, conversational UIs, and more. Conversational UIs, such as chatbots and voice assistants, will eliminate certain inefficiencies inherent to traditional websites. These technologies will prevail because they improve and redefine the customer experience. In fact, Acquia Labs was founded last year to explore how we can help customers bring these browser-less experiences to market.

Personalization and contextualization

In the 90s, personalization meant that websites could address authenticated users by name. I remember the first time I saw my name appear on a website; I was excited! Obviously personalization strategies have come a long way since the 90s. Today, websites present recommendations based on a user's most recent activity, and consumers expect to be provided with highly tailored experiences. The drive for greater personalization and contextualization will never stop; there is too much value in removing friction from the user experience. When a commerce website can predict what you like based on past behavior, it eliminates friction from the shopping process. When a customer support website can predict what question you are going to ask next, it is able to provide a better customer experience. This is not only useful for the user, but also for the business. A more efficient user experience will translate into higher sales, improved customer retention and better brand exposure.

To keep pace with evolving user expectations, tomorrow's digital experiences will need to deliver more tailored, and even predictive customer experiences. This will require organizations to consume multiple sources of data, such as location data, historic clickstream data, or information from wearables to create a fine-grained user context. Data will be the foundation for predictive analytics and personalization services. Advancing user privacy in conjunction with data-driven strategies will be an important component of enhancing personalized experiences. Eventually, I believe that data-driven experiences will be the norm.

At Acquia, we started investing in contextualization and personalization in 2014, through the release of a product called Acquia Lift. Adoption of Acquia Lift has grown year over year, and we expect it to increase for years to come. Contextualization and personalization will become more pervasive, especially as different systems of engagements, big data, the internet of things (IoT) and machine learning mature, combine, and begin to have profound impacts on what the definition of a great user experience should be. It might take a few more years before trends like personalization and contextualization are fully adopted by the early majority, but we are patient investors and product builders. Systems like Acquia Lift will be of critical importance and premiums will be placed on orchestrating the optimal customer journey.

Conclusion

The history of the web dictates that lower-friction solutions will surpass what came before them because they eliminate inefficiencies from the customer experience. Friduction is a long-term trend. Websites, the internet of things, augmented and virtual reality, conversational UIs — all of these technologies will continue to grow because they will enable us to build lower-friction digital experiences.

May 18 2017
May 18
Mike and Matt host two of Drupal's JavaScript maintainers, Théodore Biadala and Matthew Grill, as well as Lullabot's resident JavaScript expert Sally Young, and talk about the history of JavaScript in Drupal, and attempts to modernize it.
May 18 2017
ed
May 18

In this post, we show you how to add a new field to a content type in Drupal 8 and resave all nodes with default values for the field, using the Batch API and hook_update_N.

Let’s say you have existing content on your Drupal 8 site and you want to add a new field to a content type. In our example, we’ll add a boolean field_registered to the Person content type. You can do this easily enough through the GUI, but after you create the field, field_registered will not have any values in the database. This means that if you have a View which filters by field_registered, it will return no results.

To solve this problem, we need to assign a default value to field_registered for all Person nodes.

use Drupal\node\Entity\Node;
 
$nids = \Drupal::entityQuery('node')
  ->condition('type', 'person')
  ->execute();
 
foreach($nids as $nid) {
  $node = Node::load($nid);
  $node->field_registered->value = 0;
  $node->save();
}

This code should be executed in hook_update_N(), which should be put in the .install file of an appropriate custom module. This allows us to execute the code through the GUI at /update.php or using the drush command drush updb.

use Drupal\node\Entity\Node;
 
/**
* Implements hook_update_N().
*
* Set default value to new field field_registered on all Person nodes.
*/
function MY_MODULE_update_8201() {
  $nids = \Drupal::entityQuery('node')
    ->condition('type', 'person')
    ->execute();
 
  foreach($nids as $nid) {
    $node = Node::load($nid);
    $node->field_registered->value = 0;
    $node->save();
  }
}

If there are fewer than 50 Person nodes, this should suffice, but if you already have lots of existing content on the site, loading and saving all these nodes could cause PHP to time out. In this case, we should make use of the $sandbox parameter in hook_update_N(&$sandbox) to indicate that the Batch API should be used for our update. This is known as a multi-pass update.

To run a multi-pass update, you must set $sandbox['#finished'] equal to a number between 0 and 1 within hook_update_N(). This number should indicate the percentage of work complete. When $sandbox['#finished'] is equal to 1, Drupal knows the batch process is complete. Note that $sandbox is passed by reference (indicated by the & symbol). For example, the code below will loop through 10 times before $sandbox['#finished'] == 1 and the process is complete.

/**
* Implements hook_update_N().
*/
function MY_MODULE_update_8001(&$sandbox) {
 
  if (!isset($sandbox['total'])) {
    $sandbox['total'] = 10;
    $sandbox['current'] = 0;
  }
 
  $sandbox['current']++;
 
  //Once $sandbox['#finished'] == 1, the process is complete.
  $sandbox['#finished'] = ($sandbox['current'] / $sandbox['total']);
}

Now we can assign default field_registered values to all of our Person nodes in batches. For the first pass through, we’ll set $sandbox['total'] to be the total number of Person nodes, and $sandbox['current'] to be zero. We’re using Drupal 8’s entity.query service to find all these nodes. When we’re ready to process a batch of nodes, we can use the range() method, which takes a starting offset and a length, to grab only the nodes in that batch: we start at $sandbox['current'] and take $nodes_per_batch nodes at a time.

use Drupal\node\Entity\Node;
 
/**
* Implements hook_update_N().
*
* Set default value to new field field_registered on all Person nodes.
*/
function MY_MODULE_update_8001(&$sandbox) {
  // Initialize some variables during the first pass through.
  if (!isset($sandbox['total'])) {
    $nids = \Drupal::entityQuery('node')
      ->condition('type', 'person')
      ->execute();
    $sandbox['total'] = count($nids);
    $sandbox['current'] = 0;
  }
 
  $nodes_per_batch = 25;
 
  // Handle one pass through.
  $nids = \Drupal::entityQuery('node')
    ->condition('type', 'person')
    ->range($sandbox['current'], $nodes_per_batch)
    ->execute();
 
  foreach($nids as $nid) {
    $node = Node::load($nid);
    $node->field_registered->value = 0;
    $node->save();
    $sandbox['current']++;
  }
 
  drupal_set_message($sandbox['current'] . ' nodes processed.');
 
  $sandbox['#finished'] = ($sandbox['current'] / $sandbox['total']);
}

That’s it! We have successfully set new default field values on all nodes of the Person content type, without overloading PHP by processing them all at once. Now our view which filters by our new field will work because all the Person nodes will have values set for the field.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
