Mar 30 2015

By default, Search API (Drupal 7) reindexes a node when the node gets updated. But what if you want to reindex a node or an entity on demand, or via some other hook, i.e. outside of the update cycle? It turns out to be quite a simple exercise. You just need to execute this function call whenever you want to reindex a node or entity:

search_api_track_item_change('node', array($nid));

See this snippet at dropbucket: http://dropbucket.org/node/1600. search_api_track_item_change() marks the items with the specified IDs as "dirty", i.e., as needing to be reindexed. You need to supply this function with two arguments: the entity type ('node' in our example) and an array of entity IDs you want to be reindexed. Once you've done this, Search API will take care of the rest as if you'd just updated your node or entity. Additional tip: in some cases, it's worth clearing the field cache for an entity before sending it to reindex:

// Clear field cache for the node.
cache_clear_all('field:node:' . $nid, 'cache_field');
// Reindex the node.
search_api_track_item_change('node', array($nid));

This is the case when you manually save or update entity values via SQL queries and then want to reindex the result (for example, the radioactivity module doesn't save or update a node; it directly manipulates data in SQL tables). That way you ensure that Search API reindexes the fresh node or entity and not a cached one.
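For example, here's a minimal sketch of that scenario; the {my_counter} table and the helper function are hypothetical, but the cache-clear and reindex calls are the actual Drupal 7 / Search API functions:

/**
 * Increments a counter directly in the database, then queues the node
 * for reindexing. Hypothetical sketch: {my_counter} is a made-up table.
 */
function mymodule_increment_counter($nid) {
  db_update('my_counter')
    ->expression('count', 'count + 1')
    ->condition('nid', $nid)
    ->execute();
  // The node's field data may still be cached; clear it so Search API
  // indexes the fresh values.
  cache_clear_all('field:node:' . $nid, 'cache_field');
  // Mark the node as needing reindexing.
  search_api_track_item_change('node', array($nid));
}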

Mar 30 2015

I had a case recently where I needed to add custom data to the node display and wanted this data to behave like a field, even though the data itself didn't belong to a field. By "behaving like a field" I mean you can see that field on the node's display settings and control its visibility, label and weight by dragging and dropping it. So, as you may have understood, a hook_preprocess_node() / hook_node_view_alter() approach alone wasn't enough. But we're doing Drupal, right? Then there should be a clever way to do what we want, and here it is: hook_field_extra_fields() comes to the rescue! hook_field_extra_fields() (docs: https://api.drupal.org/api/drupal/modules!field!field.api.php/function/hook_field_extra_fields/7) exposes "pseudo-field" components on fieldable entities. Neat! Here's how it works. Let's say we want to expose a welcoming text message as a field for a node:

/**
 * Implements hook_field_extra_fields().
 */
function MODULE_NAME_field_extra_fields() {
  $extra['node']['article']['display']['welcome_message'] = array(
    'label' => t('Welcome message'),
    'description' => t('A welcome message'),
    'weight' => 0,
  );
  return $extra;
}

As you can see in the example above, we used hook_field_extra_fields() to define an extra field for the 'node' entity type and the 'article' bundle (content type). You can actually choose any other type of entity available on your system (think user, taxonomy_term, profile2, etc.). Now if you clear your cache and go to the display settings for Node -> Article, you should see the 'Welcome message' field available. OK, the last bit is to actually force our "extra" field to output some data; we do this in hook_node_view():

/**
 * Implements hook_node_view().
 */
function MODULE_NAME_node_view($node, $view_mode, $langcode) {
  // Only show the field for nodes of the article type.
  if ($node->type == 'article') {
    $node->content['welcome_message'] = array(
      '#markup' => 'Hello and welcome to our Drupal site!',
    );
  }
}

That should be all. Now you should see a welcome message on your node page. Please note: if you're adding an extra field to another entity type (taxonomy_term, for example), you should do the last bit in that entity's _view() hook, as in the sketch below.
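For instance, a hedged sketch for taxonomy terms, assuming you declared a matching extra field for a (hypothetical) 'tags' vocabulary in hook_field_extra_fields():

/**
 * Implements hook_taxonomy_term_view().
 */
function MODULE_NAME_taxonomy_term_view($term, $view_mode, $langcode) {
  // Only show the pseudo-field for terms in the (hypothetical) tags vocabulary.
  if ($term->vocabulary_machine_name == 'tags') {
    $term->content['welcome_message'] = array(
      '#markup' => t('Hello and welcome to our tag pages!'),
    );
  }
}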

UPDATE: I put code snippets for this tutorial at dropbucket.org here: http://dropbucket.org/node/1398

Mar 30 2015

I'm a big fan of fighting Drupal's inefficiencies and bottlenecks. Most of these come from contrib modules. Every time we install a contrib module we should be ready for the surprises that come on board with it.

One of the latest examples is Menu item visibility (https://drupal.org/project/menu_item_visibility), which turned out to be a big trouble maker on one of my client's sites. Menu item visibility is a simple module that lets you define link visibility based on a user's role. Simple and innocent... until you look under the hood.

The thing is, Menu item visibility stores its data in the database and runs a query for every menu item on the page. In my case it produced around 30 queries per page and 600 queries on menu/cache rebuild (which normally equals the number of menu items in your system).

The functionality that this module gives to an end user is good and useful (according to drupal.org, 6,181 sites currently report using this module), but as you see, storing these settings in the database can become a huge bottleneck for your site. I looked at the Menu item visibility source and came up with an "in code" solution that fully replicates the module's functionality but stores the data in code.

Step 1.

Create a custom module and call it something like Better menu item visibility; machine name: better_menu_item_visibility.

Step 2.

Let's add the first function that holds our menu link item id (mlid) and role id (rid) data:

/**
 * Returns a list of mlids along with the roles that have access to each link item.
 * You can change the list to add new menu items and/or roles.
 * The list is presented in the format:
 * 'mlid' => array('role_id', 'role_id'),
 */
function better_menu_item_visibility_menu_item_visibility_role_data() {
  return array(
    '15' => array('1', '2'),
    '321' => array('1'),
    '593' => array('3'),
    // Add as many combinations as you want.
  );
}

This function returns an array of menu link item IDs and the roles that can access each item. If you already have Menu item visibility installed, you can easily port the data from the db table {menu_links_visibility_role} into this function, for example with the throwaway snippet below.
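A one-off helper along these lines (run via drush php-eval or a scratch script; it only assumes the module's {menu_links_visibility_role} table mentioned above) could print the array for you:

// One-off helper: dump {menu_links_visibility_role} as a PHP array
// ready to paste into the function above.
$data = array();
$result = db_query('SELECT mlid, rid FROM {menu_links_visibility_role} ORDER BY mlid');
foreach ($result as $row) {
  $data[$row->mlid][] = (string) $row->rid;
}
var_export($data);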

Step 3.

And now let's do the dirty job and process the menu items:

/**
 * Implements hook_translated_menu_link_alter().
 */
function better_menu_item_visibility_translated_menu_link_alter(&$item, $map) {
  if (!empty($item['access'])) {
    global $user;
    // Menu administrators can see all links.
    if ($user->uid == '1' || (strpos(current_path(), 'admin/structure/menu/manage/' . $item['menu_name']) === 0 && user_access('administer menu'))) {
      return;
    }
    $visibility_items_for_roles = better_menu_item_visibility_menu_item_visibility_role_data();
    if (!empty($visibility_items_for_roles[$item['mlid']]) && !array_intersect($visibility_items_for_roles[$item['mlid']], array_keys($user->roles))) {
      $item['access'] = FALSE;
    }
  }
}

In short, this function skips the access check for user 1 and, on menu administration pages, for users with the 'administer menu' permission, and performs the access check for the menu link items listed in better_menu_item_visibility_menu_item_visibility_role_data(). As you see, instead of querying the database it gets the data from code, which is really fast. Let me know what you think and share your own ways of fighting Drupal's inefficiencies.

Mar 30 2015

Drupal Views offers us a cool feature: ajaxified pagers. When you click on a pager, it changes the page without reloading the main page itself and then scrolls to the top of the view. It works great, but sometimes you may encounter a problem: if you have a fixed header on your page (one that stays on top when you scroll) it will overlap the top of your view container, so the scroll-to-top position won't be precisely correct and the header will cover the top part of your view.

I've just encountered that problem and am making a note here for my future self and, probably, for you, about how I solved it. If you look into Views' internals, you'll see it uses an internal Drupal JS framework command called viewsScrollTop that's responsible for scrolling to the top of the container. What we need here is to override this command to add some offset for the top of our view.

1. Overriding JS Command

Thankfully, Views is flexible enough and provides hook_views_ajax_data_alter(), so we can alter JS data and commands before they get sent to the browser. Let's overwrite the viewsScrollTop command with our own. In your custom module, put something like this:

/**
 * This hook allows altering the commands which are used on a views ajax
 * request.
 *
 * @param $commands
 *   An array of ajax commands.
 * @param $view view
 *   The view which is requested.
 */
function MODULE_NAME_views_ajax_data_alter(&$commands, $view) {
  // Replace Views' method for scrolling to the top of the element with your
  // custom scrolling method.
  foreach ($commands as &$command) {
    if ($command['command'] == 'viewsScrollTop') {
      $command['command'] = 'customViewsScrollTop';
    }
  }
}

Now, every time Views emits a viewsScrollTop command, we replace it with our own custom one, customViewsScrollTop.

2. Creating custom JS command

OK, a custom command is just a JS function attached to the global Drupal object. Let's create a JS file and put the following into it:

(function ($) {
  Drupal.ajax.prototype.commands.customViewsScrollTop = function (ajax, response, status) {
    // Scroll to the top of the view. This will allow users
    // to browse newly loaded content after e.g. clicking a pager
    // link.
    var offset = $(response.selector).offset();
    // We can't guarantee that the scrollable object should be
    // the body, as the view could be embedded in something
    // more complex such as a modal popup. Recurse up the DOM
    // and scroll the first element that has a non-zero top.
    var scrollTarget = response.selector;
    while ($(scrollTarget).scrollTop() == 0 && $(scrollTarget).parent()) {
      scrollTarget = $(scrollTarget).parent();
    }
    var header_height = 90;
    // Only scroll upward.
    if (offset.top - header_height < $(scrollTarget).scrollTop()) {
      $(scrollTarget).animate({scrollTop: (offset.top - header_height)}, 500);
    }
  };
})(jQuery);

As you may see, I just copied the standard Drupal.ajax.prototype.commands.viewsScrollTop function and added a header_height variable that equals the fixed header's height. You may play with this value and set it according to your own layout. Note the name of the function, Drupal.ajax.prototype.commands.customViewsScrollTop: the last part should match your custom command name. Save the file in your custom module dir; in my case it's js/custom_views_scroll.js.

3. Attaching JS to the view

There are multiple ways to do it; let's go with the simplest one. To your custom_module.info file add scripts[] = js/custom_views_scroll.js and clear caches; that'll make this file load on every page. That's all: from now on, your views ajax page scrolls are powered by your customViewsScrollTop instead of the stock viewsScrollTop. See the difference?
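If you'd rather not load the script on every page, one hedged alternative is to attach it only when a view actually renders, e.g. via hook_views_pre_render() in your custom module:

/**
 * Implements hook_views_pre_render().
 */
function MODULE_NAME_views_pre_render(&$view) {
  // Attach the scroll override only when a view is rendered on the page.
  drupal_add_js(drupal_get_path('module', 'MODULE_NAME') . '/js/custom_views_scroll.js');
}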

Mar 30 2015

If you have a fieldgroup in a node, you may want to hide it under some conditions. Here's how to do that programmatically. First, we need to preprocess our node like this:

/**
 * Implements hook_preprocess_HOOK().
 */
function MODULE_NAME_preprocess_node(&$variables) {
}

The tricky part starts here. If you google for "hide a fieldgroup" you'll get lots of results referencing the usage of field_group_hide_field_groups(), like this snippet: http://dropbucket.org/node/130.
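For reference, on a node form the usage would look something like this sketch, where 'group_admin' is a hypothetical field group machine name:

/**
 * Implements hook_form_FORM_ID_alter().
 */
function MODULE_NAME_form_article_node_form_alter(&$form, &$form_state) {
  // Hides the (hypothetical) group_admin field group on the edit form.
  field_group_hide_field_groups($form, array('group_admin'));
}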

While this function works perfectly on forms, it is useless if you apply it in hook_preprocess_node() (at least I couldn't make it work). The problem is that fieldgroup uses a 'field_group_build_pre_render' function that gets called at the end of the preprocessing stage and populates $variables['content'] with the field group and its children, so you can't alter this in hook_preprocess_node(). But, as always in Drupal, there's a workaround. First, let's define some simple logic in our preprocess_node() to determine whether we want to hide a field group:

/**
 * Implements hook_preprocess_HOOK().
 */
function MODULE_NAME_preprocess_node(&$variables) {
  if ($variables['uid'] != 1) {
    // You can call this variable any way you want, just put it into
    // $variables['element'] and set it to TRUE.
    $variables['element']['hide_admin_field_group'] = TRUE;
  }
}

OK, so if the user's ID is not 1 we want to hide the fictional 'admin_field_group'. We define the logic here and pass the result into the element array to be used later. As I previously noted, field group uses 'field_group_build_pre_render' to combine fields into a group, so we just need to alter that call in our module:

/**
 * Implements hook_field_group_build_pre_render_alter().
 *
 * Hide the admin field group on the node display.
 */
function MODULE_NAME_field_group_build_pre_render_alter(&$element) {
  if (!empty($element['hide_admin_field_group']) && isset($element['admin_field_group'])) {
    $element['admin_field_group']['#access'] = FALSE;
  }
}

We check for our condition and, if it is met, set the field group's #access to FALSE, which means: hide the field group. So now you should have the field group hidden on your node display. Of course, this example is the simplest case; you may add dependencies on the node's view mode, content type and other conditions, so the sky is the limit here. You can find and copy this snippet at dropbucket: http://dropbucket.org/node/927. I wonder, do you have another way of doing this?

Mar 30 2015

Within the Lift ecosystem, "contexts" can be thought of as pre-defined functionality that makes data available to the personalization tools, when that data exists in the current state (of the site/user/environment/whatever else).

Use cases

The simplest use of contexts is in mapping their data to User Defined Fields (UDFs) on /admin/config/content/personalize/acquia_lift_profiles. When the context is available, its data is assigned to a UDF field and included with Lift requests. For example, the Personalize URL Context module (part of the Personalize suite) does exactly this with query string contexts.

First steps

The first thing to do is to implement hook_ctools_plugin_api() and hook_personalize_visitor_contexts(). These will make the Personalize module aware of your code, and will allow it to load your context declaration class.

Our module is called yuba_lift:

/**
 * Implements hook_ctools_plugin_api().
 */
function yuba_lift_ctools_plugin_api($owner, $api) {
  if ($owner == 'personalize' && $api == 'personalize') {
    return array('version' => 1);
  }
}

/**
 * Implements hook_personalize_visitor_contexts().
 */
function yuba_lift_personalize_visitor_context() {
  $info = array();
  $path = drupal_get_path('module', 'yuba_lift') . '/plugins';
  $info['yuba_lift'] = array(
    'path' => $path . '/visitor_context',
    'handler' => array(
      'file' => 'YubaLift.inc',
      'class' => 'YubaLift',
    ),
  );
  return $info;
}

The latter hook tells Personalize that we have a class called YubaLift located at /plugins/visitor_context/YubaLift.inc (relative to our module's folder).

The context class

Our context class must extend the abstract PersonalizeContextBase and implement a couple of required methods:

<?php
/**
 * @file
 * Provides a visitor context plugin for Custom Yuba data.
 */

class YubaLift extends PersonalizeContextBase {

  /**
   * Implements PersonalizeContextInterface::create().
   */
  public static function create(PersonalizeAgentInterface $agent = NULL, $selected_context = array()) {
    return new self($agent, $selected_context);
  }

  /**
   * Implements PersonalizeContextInterface::getOptions().
   */
  public static function getOptions() {
    $options = array();
    $options['car_color'] = array('name' => t('Car color'));
    $options['destination'] = array('name' => t('Destination'));

    foreach ($options as &$option) {
      $option['group'] = t('Yuba');
    }

    return $options;
  }
}

The getOptions method is what we're interested in; it returns an array of context options (individual items that can be assigned to UDF fields, among other uses). The options are grouped into a 'Yuba' group, which will be visible in the UDF selects.

With this code in place (and cache cleared - for the hooks above), the 'Yuba' group and its context options become available for mapping to UDFs.

Values for options

The context options now need actual values. This is achieved by providing those values to an appropriate JavaScript object. We'll do this in hook_page_build().

/**
 * Implements hook_page_build().
 */
function yuba_lift_page_build(&$page) {
  // Build values corresponding to our context options.
  $values = array(
    'car_color' => t('Red'),
    'destination' => t('Beach'),
  );

  // Add the options' values to JS data, and load a separate JS file.
  $page['page_top']['yuba_lift'] = array(
    '#attached' => array(
      'js' => array(
        drupal_get_path('module', 'yuba_lift') . '/js/yuba_lift.js' => array(),
        array(
          'data' => array(
            'yuba_lift' => array(
              'contexts' => $values,
            ),
          ),
          'type' => 'setting',
        ),
      ),
    ),
  );
}

In the example above we hardcoded our values. In real use cases, the context options' values would vary from page to page, or be entirely omitted (when they're not appropriate) - this will, of course, be specific to your individual application.

With the values in place, we add them to a JS setting (Drupal.settings.yuba_lift.contexts), and also load a JS file. You could store the values in any arbitrary JS variable, but it will need to be accessible from the JS file we're about to create.

The JavaScript

The last piece of the puzzle is creating a new object within Drupal.personalize.visitor_context that will implement the getContext method. This method will look at the enabled contexts (provided via a parameter), and map them to the appropriate values (which were passed via hook_page_build() above):

(function ($) {
  /**
   * Visitor Context object.
   * Code is mostly pulled together from Personalize modules.
   */
  Drupal.personalize = Drupal.personalize || {};
  Drupal.personalize.visitor_context = Drupal.personalize.visitor_context || {};
  Drupal.personalize.visitor_context.yuba_lift = {
    'getContext': function (enabled) {
      if (!Drupal.settings.hasOwnProperty('yuba_lift')) {
        return [];
      }

      var i = 0;
      var context_values = {};

      for (i in enabled) {
        if (enabled.hasOwnProperty(i) && Drupal.settings.yuba_lift.contexts.hasOwnProperty(i)) {
          context_values[i] = Drupal.settings.yuba_lift.contexts[i];
        }
      }

      return context_values;
    }
  };

})(jQuery);

That's it! You'll now see your UDF values showing up in Lift requests. You may also want to create new column(s) for the custom UDF mappings in your Lift admin interface.

You can grab the completed module from my GitHub.

Mar 30 2015

So I recently participated in my first ever hackathon over the weekend of March 28. Battlehack Singapore to be exact (oddly, there was another hackathon taking place at the same time). A UX designer friend of mine had told me about the event and asked if I wanted to join as a team.
Me: Is there gonna be food at this thing?
Her: Erm…yes.
Me: Sold!
Joking aside, I’d never done a hackathon before and thought it’d be fun to try. We managed to recruit another friend and went as a team of three.

Battlehack Singapore 2015

The idea

The theme of the hackathon was to solve a local or global problem so before the event, we kicked around a couple of ideas and settled on a Clinic Finder app. Think Yelp for clinics. Singapore provides a large number of medical schemes that offer subsidised rates for healthcare. Not every clinic is covered by every scheme though, so we thought it’d be good if people could find clinics based on the medical scheme they are covered by.

Of course, there will be people who aren’t covered by any medical scheme at all, like me. But I’ve also had the experience of being brought on a wild goose chase by Google while trying to find an open clinic at 2am. I’d like to think this is a relatable scenario. Being idealistic people, we wanted our app to provide updated information on each clinic, like actual opening hours and phone numbers with real people on the other end of the line. And trust me, we’ve pondered the BIG question of: where will you ever get such data?

The most viable idea we could think of at the time was to work with the relevant government agencies that had access to such data. But since it was a hackathon project, we just wanted to see if we could build out the functionality and make it look decent within 24 hours. Then, there was the decision of which platform the app would run on. Ideally, this would work well on a mobile device, but our team recognised that we didn’t have the capabilities to build out a mobile app in 24 hours.

Our expertise was building Drupal sites. Thus, that would be our best bet to have a working application at the end of 24 hours. Maybe one day we’ll join hackathons to win, but not this time. This time, we just wanted to finish. Gotta know how to crawl before learning to walk, and walk before learning to run.

Day 1

Battlehack Singapore took place at the Cliftons office in the Finexis Building. Rooms on both floors were set up for teams of four, with power points and LAN cables for each member. We took a spot near the wall because there was a nice spot to the side for napping. Turns out there wouldn’t be much of that.

Shortly into the hackathon, we hit our first snag. The internet access went out. Definitely an “Oh, crap” moment for me. I mentioned in my last post how much I used Google throughout the day. I guess the Hackathon Fates decided, no Google for you, kiddo.

No internet also meant no way to download modules. Luckily for me, I had a bunch of local development sites still sitting in my hard drive, and a majority of the module files I needed were in there somewhere. Sure, they were outdated, but beggars can’t be choosers. The organisers were working hard to fix the problem, so I figured I’d just download the newer versions when we got back online. The moral of the story is: Don’t delete all your old development sites, you never know when they might come in handy.

I’ll admit I got a little grumpy about the situation, but pouting wasn’t going to solve anything, so why not take a little time to chill with the Dinosaur Game? Just in case you didn’t know, as of version 39, the guys at Chrome snuck an easter egg into the browser. Useless trivia: I eventually got to a 1045 high score :satisfied:

Chrome Dinosaur Game

We wanted the app to have proximity location capabilities. There are quite a number of solutions for this on Drupal. Coincidentally, I’d listened to the latest episode of Talking Drupal the night before and the topic was Map Rendering. The two modules that stuck in my mind were Leaflet and IP Geolocation as it was mentioned they seemed “smoother”.

The IP Geolocation module had very good integration with Views, and the end result (after the all-nighter, of course) was pretty close to the original design we had in mind. Given the tight schedule we had, this was definitely a plus. The only custom code I had to write was a couple of minor tweaks to facilitate theming: one to add a placeholder attribute to the search filter, and another to add CSS classes to boolean fields based on their values.


/**
 * Implements hook_form_alter().
 */
// Add placeholder attribute to search boxes.
function custom_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == "views_exposed_form") {
    if (isset($form['field_geofield_distance'])) {
      $form['field_geofield_distance']['#origin_options']['#attributes'] = array('placeholder' => array(t('Enter Postal Code/Street Name')));
    }
    if (isset($form['field_medical_scheme_tid'])) {
      $form['field_medical_scheme_tid']['#options']['All'] = t('Medical Scheme');
    }
  }
}

/**
 * Implements template_preprocess_field().
 */
function clinicfinder_preprocess_field(&$variables) {
  // Check to see if the field is a boolean.
  if ($variables['element']['#field_type'] == 'list_boolean') {
    // Check to see if the value is TRUE.
    if ($variables['element']['#items'][0]['value'] == '1') {
      // Add the class .is-true.
      $variables['classes_array'][] = 'is-true';
    }
    else {
      // Add the class .is-false.
      $variables['classes_array'][] = 'is-false';
    }
  }
}

Even though it was a hackathon, and we were pressed for time, I still tried my best to adhere to Drupal best practices. So the template_preprocess_field went into the template.php file while the hook_form_alter went into a custom module.

Day 2

The presentation at the end of the hackathon was only two minutes long. We figured that as long as we could articulate the app’s key features and demo those features successfully, that would be our pitch. As Sheryl Sandberg said:

Done is better than perfect.

Clinic Finder home page

The Battlehack guys were really helpful in this regard. There were rehearsal slots the next morning for us to present our pitch to a panel of mentors, who’d provide feedback on our idea and presentation. Their suggestion to us was to get straight to the point on our key feature, the bit about medical schemes, since that was the local problem we were trying to address.

Clinic Finder map page

That was a really good piece of advice, as we watched a number of participants who presented before us run out of time before they got to the best part of their product. We managed to pitch our app within the time and answer the judges’ questions. As expected, we did get the “so where will you get the data?” question. So we talked about partnership with government organisations. Another question we got was about advertising, which tied into a point we didn’t really consider: the sustainability of the app.

Hackathon takeaways

  1. Expect that things may go wrong and adapt accordingly.
  2. Be focused. You only have 24 hours.
  3. Keep your pitch concise. Two minutes goes by quicker than you think.
  4. Unless you’re a ninja coder, you won’t get much sleep.
  5. Consciously remind yourself to be a nice person, especially when you haven’t slept at all.

At the end of the day, we did manage to build a working application in 24 hours and present it on time. Definitely a valuable learning experience. It’s always nice to build something that works, especially if you do it together with friends. Looking forward to the next one.

Mar 29 2015

I’m approaching the 4 year mark at my agency and along with it my 4 year mark working with Drupal. It’s been an interesting journey so far and I’ve learned a fair bit, but, as with anything in technology, there’s still a great deal left to discover. This is my journey so far and a few key points I learned along the way.

Picking up a new technology to use can be a daunting task; aside from deciding if it fits your needs/requirements, you need to ensure it will be something you enjoy working with as well as something you can make a living from. I never really chose to use Drupal; it just happened to be one of the CMSes my agency uses, and as such I picked it up. However, after working with it for the last 4 years I can say that I do enjoy projects using Drupal (though, as with any technology, it has its cons as well as pros). It also appears to be growing and going from strength to strength, so it is likely to be around for a fair while yet.

Getting up to speed at first was tricky; learning how the different elements that make up Drupal slot together took some time, and more than a few attempts (rather like an IKEA flatpack). Helping me along the way were a few resources, such as the documentation, the fantastic community and, of course, the fact that any question is a quick google away from an answer.

The dev tools

As I started to get involved with Drupal I learnt about the tools of the trade, the most noteworthy of these being Drush, an awesome tool I regularly use now. It's a massive time saver at the start of builds for setting up core and modules when used alongside make files, and it's also useful during development and hugely so when updating core/modules.
It can be downloaded from here: http://docs.drush.org/en/master/install
A library of drush commands is available here: http://www.drushcommands.com
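As a rough illustration (the module names are just examples), a typical Drush session at the start of a build might look like:

# Download and enable a couple of modules.
drush dl devel coder
drush en -y devel coder
# Check for and apply core/module updates.
drush pm-update
# Clear all caches.
drush cc all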

Alongside this are developer modules designed to help speed up development and test the code written, such as devel (a suite of modules to help module developers and themers) and coder (a module that scans your code and reports any issues it finds).

Re-inventing the wheel is time-consuming and unnecessary, and the same goes for writing code that already exists. http://dropbucket.org provides a code snippet repo that can be searched and added to, useful for saving anything you found helpful or as a starting block/idea for your own code. Each snippet can be tagged and commented upon.

For deployments from development to production sites:

  • Features - The biggest and most documented module. It places config changes into new modules that can be committed, deployed and installed. Each new module can be overridden, deleted and reverted if needed.
  • Configuration Management - A backport of the D8 core module, a ‘features lite’ module that provides similar functionality, but rather than creating new modules, configurations are saved in tar files. An example of this process can be seen here: https://www.drupal.org/node/1872288

For modules that don’t store these settings in files you can use:

  • Bundle Copy - This provides an export/import (similar to views) for Vocabs, Content Types, Users and fields. The dev version adds support for field collections, cloning content types and commerce entity bundles.
  • Taxonomy CSV import/export - This allows you to import/export vocabs and terms from a csv.

The community

Outside of the CMS I began to get involved in other aspects, namely Drupal.org. Here I was able to talk to other developers, post bug reports (to which I always received a speedy response) and patches where I could. I was also able to create my own module and have it reviewed by other developers before it could be downloaded and used by others. Getting involved in this way has not only allowed me to give back to the community that has helped me, but also allows me to develop further - which can only be a good thing.

Over the last couple of years I’ve also attended DrupalCamp London and a couple of other Drupal events (such as beer and chat), which is a great way to get to know other people and talk all things Drupal.

Mar 29 2015

This is the second in a series of articles involving the writing and launching of my DurableDrupal Lean ebook series website on platform.sh. Since it's a real world application, this article is for real world website and web application developers. If you are starting from scratch but enthusiastic and willing to learn, that means you too. I'm fortunate enough to have their sponsorship and full technical support, so everything in the article has been tested out on the platform. A link will be edited in here as soon as it goes live.

Diving in

Diving right in, I set up a Trello Kanban Board for Project Inception as follows:

Project Inception Kanban

Both Vision (Process, Product) and Candidate Architecture (Process, Product) jobs have been completed, and have been moved to the MVP 1 column. We know what we want to do, and we're doing it with Drupal 7, based on some initial configuration as a starting point (expressed both as an install profile and a drush configuration script). At this point there are three jobs in the To Do column, constituting the remaining preparation for the Team Product Kickoff. And two of them (setup for continuous integration and continuous delivery) are about to be made much easier by virtue of using platform.sh, not only as a home for the production instance, but as a central point of organization for the entire development and deployment process.

Beginning Continuous Integration Workflow

What we'll be doing in this article:

Overcoming the confusion between Continuous Integration (team development with a codebase) and Continuous Delivery (deploying to an environment).

"So what is CI? In short, it is an integration of code into a known or working code base.... The top benefits are to provide fast feed back to the members of the team and to ensure any new changes don’t break the working branch."

"CD... is an automated process to deliver a software package to an environment.... we can now extend the fast feedback loops and reduction of constraints with packaging techniques, automation workflows, and integrated tools that keep track of the software versions in different environments."

"CI and CD are two completely separated practices that are tightly interlocked to create a unified ALM [Application Lifecycle Management] workflow." – Bryan Root

And let's take it step by step by breaking the CI Workflow Job into tasks:

Continuous Integration Tasks

Create Ansible playbook to create local development VM suitable for working with platform.sh

Based on Jeff Geerling's great work both on Ansible itself as well as its use with Drupal provisioning, I whipped up and tested ansible-vm-platformsh. Dependencies:

Clone the playbook from GitHub into a workspace on local dev box and vagrant up

With the dependencies installed, and once the ubuntu/trusty box was downloaded (I had already used it before on several projects), it only took a few minutes to bring up our local dev box:

$ git clone git@github.com:DurableDrupal/ansible-vm-platformsh.git platformsh-vk-sandbox
$ cd platformsh-vk-sandbox
$ vagrant up

The box is brought up in a private network with ip 192.168.19.46 (editable in Vagrantfile). So to be able to bring up the website in a local browser by name, I edited my dev box's /etc/hosts file by including the following line in accordance with the Virtual Host template parameters (editable in provisioning/vars.yml):

$ grep platformsh /etc/hosts
192.168.19.46 platformshvk.dev

ssh into the local dev box, log into platform.sh and manage ssh keys

platformsh-vk is a newly provisioned development box, so the user vagrant hasn't yet got a set of public and private keys for the purpose of securely authenticating onto the platform.sh account using ssh.

We first log in to our newly provisioned local dev box (already configured by playbook operations), also using ssh:

$ vagrant ssh
vagrant@vagrant-ubuntu-trusty-64:~$
vagrant@vagrant-ubuntu-trusty-64:~$ ssh-keygen -t rsa -C "[email protected]"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.

The Ansible Playbook has already installed the platform.sh CLI (command-line interface), called, appropriately enough, platform, so we're all set!

vagrant@vagrant-ubuntu-trusty-64:~$ cd /var/www
vagrant@vagrant-ubuntu-trusty-64:/var/www$ ls -l
total 4
drwxr-xr-x 2 root root 4096 Mar 22 16:10 html

The first time you execute the CLI you will be asked to login with your platform.sh account.

vagrant@vagrant-ubuntu-trusty-64:/var/www$ platform
Welcome to Platform.sh!
Please log in using your Platform.sh account
Your email address: [email protected]
Your password:
Thank you, you are all set.
Your projects are:
+---------------+---------------------+-------------------------------------------------+
| ID            | Name                | URL                                             |
+---------------+---------------------+-------------------------------------------------+
| myproject | Victor Kane Sandbox | https://us.platform.sh/#/projects/myproject |
+---------------+---------------------+-------------------------------------------------+
Get a project by running platform get [id].
List a project's environments by running platform environments.
Manage your SSH keys by running platform ssh-keys.
Type platform list to see all available command

Now, you can manage your keys from your online sandbox on platform.sh. But by using the platform CLI you can do anything from your local dev box that can be done online. To upload your public key, we did:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev$ platform ssh-keys
Add a new SSH key by running platform ssh-key:add [path]
Delete an SSH key by running platform ssh-key:delete [id]
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev$ platform ssh-key:add ~/.ssh/id_rsa.pub
Enter a name for the key: vkvm

In a single line:

$ platform ssh-key:add --name=vkvm0323 ~/.ssh/id_rsa.pub
The SSH key id_rsa.pub has been successfully added to your Platform.sh account

Get the project and sync the platform server and local databases

The Ansible playbook has already set up the virtual host to be edited in the file /etc/apache2/sites-available/platformshvk.dev.conf and will expect the project to be cloned under /var/www. So to get the project from the platform server, we first locate ourselves at /var/www/platformsh-vk-dev and then get the project. Permissions have already been taken care of in this directory anticipating that we are operating locally as the user vagrant in the dev box.

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev$ platform get myproject
  Cloning into 'myproject/repository'...
  Warning: Permanently added the RSA host key for IP address '54.210.49.244' to the list of known hosts.
Downloaded myproject to myproject
Building application php using the toolstack php:drupal
  Beginning to build                                                   [ok]
  /var/www/platformsh-vk-dev/myproject/repository/project.make.
  drupal-7.34 downloaded.                                              [ok]
  drupal patched with                                                  [ok]
  install-redirect-on-empty-database-728702-36.patch.
  Generated PATCHES.txt file for drupal                                [ok]
  platform-7.x-1.3 downloaded.                                         [ok]
Saving build archive...
Creating file: /var/www/platformsh-vk-dev/myproject/shared/settings.local.php
Edit this file to add your database credentials and other Drupal configuration.
Creating directory: /var/www/platformsh-vk-dev/myproject/shared/files
This is where Drupal can store public files.
Symlinking files from the 'shared' directory to sites/default
Build complete for application php

In order to sync the local database (already created with the Ansible playbook, you guessed it!) from the platform server, we must first enter the credentials (user root and name taken from the domain variable on line 105 of provisioning/playbook.yml) into the settings.local.php file created in the shared sub-directory of the project build.

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/shared$ cat settings.local.php
<?php

// Database configuration.
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'host' => 'localhost',
  'username' => 'root',
  'password' => '',
  'database' => 'platformshvk',
  'prefix' => '',
);

Now let's grab the drush aliases and sync the databases! First I did a remote drush status on the platform:

$ platform drush status

Then I grabbed the aliases:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$ platform drush-aliases
Aliases for Victor Kane Sandbox (myproject):
    @myproject._local
    @myproject.master

And used them to sync the local dev box database with what is on the platform server:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$ drush sql-sync @myproject.master @myproject._local
WARNING:  Using temporary files to store and transfer sql-dump.  It is recommended that you specify --source-dump and --target-dump options on the command line, or set '%dump' or '%dump-dir' in the path-aliases section of your site alias records. This facilitates fast file transfer via rsync.
You will destroy data in platformshvk and replace with data from ssh.us.platform.sh/main.
You might want to make a backup first, using the sql-dump command.
Do you really want to continue? (y/n): y
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$

Configure local dev virtual host and bring up website

We can now confirm that our local instance is alive and well via a drush status:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$ cd www
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/www$ drush status
 Drupal version                  :  7.34
 Site URI                        :  http://default
 Database driver                 :  mysql
 Database username               :  root
 Database name                   :  platformshvk
 Database                        :  Connected
 Drupal bootstrap                :  Successful
 Drupal user                     :  Anonymous
 Default theme                   :  bartik
 Administration theme            :  seven
 PHP executable                  :  /usr/bin/php
 PHP configuration               :  /etc/php5/cli/php.ini
 PHP OS                          :  Linux
 Drush version                   :  6.6-dev
 Drush configuration             :
 Drush alias files               :  /home/vagrant/.drush/myproject.aliases.drushrc.php
 Drupal root                     :  /var/www/platformsh-vk-dev/myproject/builds/2015-03-23--11-27-44--master
 Site path                       :  sites/default
 File directory path             :  sites/default/files
 Temporary file directory path   :  /tmp

We now configure the virtual host on the dev box to take into account the project name. After editing:

$ sudo vi /etc/apache2/sites-available/platformshvk.dev.conf
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName platformshvk.dev
    ServerAlias www.platformshvk.dev
    DocumentRoot /var/www/platformsh-vk-dev/myproject/www
    <Directory "/var/www/platformsh-vk-dev/myproject/www">
        Options FollowSymLinks Indexes
        AllowOverride All
    </Directory>
</VirtualHost> 

After reloading the Apache server configuration we can point our browser at the local web server

Local website instance

Exercise CI workflow by doing an upgrade and pushing via repo to platform

First we find out what needs updating

$ drush pm-update
Update information last refreshed: Sun, 03/22/2015 - 19:21
 Name    Installed Version  Proposed version  Message
 Drupal  7.34               7.35              SECURITY UPDATE available

Don't update with drush! Instead, edit the drush make file and rebuild locally to test.

We edit the drush make file with the new Drupal core version:

$ cd repository
$ vi project.make
api = 2
core = 7.x
; Drupal core.
projects[drupal][type] = core
projects[drupal][version] = 7.35

We build locally to test

$ platform project:build
Building application php using the toolstack php:drupal
  Beginning to build                                                   [ok]
  /var/www/platformsh-vk-dev/myproject/repository/project.make.
  drupal-7.35 downloaded.                                              [ok]
  drupal patched with                                                  [ok]
  install-redirect-on-empty-database-728702-36.patch.
  Generated PATCHES.txt file for drupal                                [ok]
  platform-7.x-1.3 downloaded.                                         [ok]
Saving build archive...
Symlinking files from the 'shared' directory to sites/default
Build complete for application php

We confirm that the version has indeed been updated interactively in the browser.

Push to platform (automatically rebuilds everything on master!)

Wow! Talk about “everything in code”:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git config --global user.email [email protected]
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git config --global user.name jimsmith
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git commit -am "Updated Drupal core to 7-35"
[master 6a0f997] Updated Drupal core to 7-35
 1 file changed, 1 insertion(+), 1 deletion(-)
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git push origin master
Counting objects: 5, done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 295 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
Validating submodules.
Validating configuration files.
Processing activity: **Victor Kane** pushed to **Master**
    Found 1 new commit.
    Building application 'php' with toolstack 'php:drupal' (tree: b53af8a)
      Installing build dependencies...
        Installing php build dependencies: drush/drush
      Making project using Drush make...
        Executing `drush -y make --cache-duration-releasexml=300 --concurrency=8 project.make /app/out/public`...
          Beginning to build project.make.                                            [ok]
          drupal-7.35 downloaded.                                                     [ok]
          drupal patched with                                                         [ok]
          install-redirect-on-empty-database-728702-36.patch.
          Generated PATCHES.txt file for drupal                                       [ok]
          platform-7.x-1.3 downloaded.                                                [ok]
      Moving checkout directory to `/sites/default`.
      Detected a `/sites/default` directory, initializing Drupal-specific files.
      Creating a default `/sites/default/settings.php` file.
      Creating the environment-specific `/sites/default/settings.local.php` file.
      Executing pre-flight checks...
      Compressing application.
      Beaming package to its final destination.
    W: Route 'www.{default}' doesn't map to a domain of the project, mangling the route.
    W: Route '{default}' doesn't map to a domain of the project, mangling the route.
    Re-deploying environment myproject-master.
      Environment configuration:
        php: size M
        mysql: size M
        redis: size M
        solr: size M
      Environment routes:
        http://master-myproject.us.platform.sh/ is served by application `php`
        http://www---master-myproject.us.platform.sh/ redirects to http://master-myproject.us.platform.sh/
        https://master-myproject.us.platform.sh/ is served by application `php`
        https://www---master-myproject.us.platform.sh/ redirects to http://master-myproject.us.platform.sh/
To [email protected]:myproject.git
   f2e12d8..6a0f997  master -> master

We confirm via browser pointed to platform server that the update has been effectuated.

Next time we'll drill down into some serious development workflow by implementing the startup landing page for the website.  


Mar 29 2015

Drupal's Form API has everything that we love about the Drupal framework. It's powerful, flexible, and easily extendable with our custom modules and themes. But let's face it: it's boooorrrrriinnnnggg. Users these days are used to their browsers doing the heavy lifting. Page reloads are becoming fewer and fewer, especially when we are expecting our users to take action on our websites. If we are asking our visitors to take time out of their day to fill out a form on our website, that form should be intuitive, easy to use, and not distracting.

Lucky for us, the Form API has the ability to magically transform our forms into silky smooth ajax-enabled interfaces using the #ajax property in the form array. The #ajax property allows us to jump in at any point in the form's render array and add some javascript goodness to improve the user experience.

Progressive Enhancement

The beautiful thing about utilizing the #ajax property in our Drupal forms is that, if done correctly, it degrades gracefully, allowing users with javascript disabled to use our form without issue. "If done correctly" being the operative phrase. The principles of progressive enhancement dictate that the widget must work for everyone before you can begin taking advantage of all of the cool new features available to us as developers. And yes, javascript is still in the "cool new feature" bucket as far as progressive enhancement goes, but that's an argument for a different time :). So this means that the form submission, validation, and messaging must be functional and consistent regardless of whether javascript is enabled or not. As soon as we throw that #ajax property on the render array, the onus is on us to maintain that consistency.

As an example, let's take a look at the email signup widget from the Listserv module. The form is very basic: it consists of a text field and a submit button. It also has a validation hook to make sure the user is entering a valid email, as well as a submit handler that calls a Listserv function to subscribe the user's email to a listserv.


/**
 * Listserv Subscribe form.
 */
function listserv_subscribe_form($form, &$form_state) {
  $form['email'] = array(
    '#type' => 'textfield',
    '#title' => t('Email'),
    '#required' => TRUE,
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Subscribe'),
  );
  return $form;
}

/**
 * Listserv Subscribe form validate handler.
 */
function listserv_subscribe_form_validate($form, &$form_state) {
  $email = $form_state['values']['email'];
  // Verify that the email address is valid.
  if (!valid_email_address($email)) {
    form_set_error('email', t('You must provide a valid email address.'));
  }
}

/**
 * Listserv Subscribe form submit handler.
 */
function listserv_subscribe_form_submit($form, &$form_state) {
  $email = $form_state['values']['email'];
  listserv_listserv_subscription($email, 'subscribe');
}

On paper, this form does everything we want, but in the wild it isn't all that we had hoped. This is not the type of form that requires its own page. It's meant to be placed in sidebars and footers throughout the site. If we place this in the footer of our homepage, our users aren't going to be happy when we take their email, then refresh the page and change their scroll position.

Let's imagine a user who accidentally leaves a ".com" off the end of their address. When the page is refreshed and they are returned to the top of the page, totally removed from the flow of content, only to see a message that says "Please enter a valid email address", what are the chances that they actually scroll back down the page to fulfill that action? I'm no UX expert, but I'm guessing not good.

AJAX It Up

We can definitely do better. In our theme's template.php file, let's add a form alter.


/**
 * Implements hook_form_FORM_ID_alter().
 */
function heymp_form_listserv_subscribe_form_alter(&$form, &$form_state, $form_id) {
  $form['submit']['#ajax'] = array(
    'callback' => 'heymp_ajax_listserv_subscribe_callback',
    'wrapper' => 'listserv-subscribe-form',
  );
}

In this form alter, we are tapping into the specific signup form that we want to enhance. We then add the '#ajax' property to the submit button, which tells the form that when the user clicks the "submit" button, we are going to jump in and handle things with ajax. We define which function is going to handle our logic under 'callback', and we tell the form which DOM element on the page is going to display our results with 'wrapper'.
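The #ajax array accepts a few other useful keys too. For instance, a hedged variant using documented Form API options to customize the progress indicator and transition effect:

$form['submit']['#ajax'] = array(
  'callback' => 'heymp_ajax_listserv_subscribe_callback',
  'wrapper' => 'listserv-subscribe-form',
  // Show a throbber with a message while the request is in flight.
  'progress' => array('type' => 'throbber', 'message' => t('Subscribing...')),
  // Fade the returned content in instead of swapping it instantly.
  'effect' => 'fade',
);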

Now let's add our callback.


/**
 * Callback for heymp_form_listserv_subscribe_form_alter().
 */
function heymp_ajax_listserv_subscribe_callback($form, &$form_state) {
  if (form_get_errors()) {
    $form_state['rebuild'] = TRUE;
    $commands = array();
    $commands[] = ajax_command_prepend(NULL, theme('status_messages'));
    return array('#type' => 'ajax', '#commands' => $commands);
  }
  else {
    $system_message = drupal_get_messages();
    return t('Thank you for your submission!');
  }
}

Validation

In this callback we are doing a few things. First, we check whether there are any form validation errors. The form submission still uses the validation handler from our listserv_subscribe_form_validate() function above. If the submission doesn't pass validation, an error is raised, which we can detect by calling form_get_errors() in our ajax callback. If there is an error, we need to tell the form to rebuild itself by setting $form_state['rebuild'] = TRUE. This allows the form to accept further submissions.

Error messaging

Next we need to handle printing the error message. Drupal's Ajax framework has some built-in commands for interacting with the DOM without needing to write any javascript at all. By adding operations to the $commands array, we can display our error messaging as well as add and remove elements as needed. In our case we use the ajax_command_prepend() function to print a message, then return the commands in render array fashion. See the Ajax framework documentation for a full list of command options.
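For example, a couple more documented Drupal 7 commands could clear and highlight the email field in the error case (a sketch; '#edit-email' assumes the default form element ID):

// Empty the email field and flag it with an error class via jQuery methods.
$commands[] = ajax_command_invoke('#edit-email', 'val', array(''));
$commands[] = ajax_command_invoke('#edit-email', 'addClass', array('error'));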

Override system message

We also need to prevent the system message from printing our success/error messages when the user does eventually reload the page. Since the messages are sitting in a queue, we can empty that queue by calling drupal_get_messages(). In the error branch, theme('status_messages') takes care of calling it for us, so we only need to call it explicitly in the success branch.

Success

If there are no errors, we just return a message that lets our user know they have successfully completed the operation. The message is rendered into whatever element we specified in our ajax 'wrapper' property above, which was the #listserv-subscribe-form element.

Summary

So there you go! With a minimal amount of code we've drastically improved our user experience using the #ajax property in the Forms API.

In Part 2 we'll take a look at ditching the Form API's commands array and writing our own javascript.

Mar 29 2015

In the last episode, we learned about the Drupal Subuser module. In this episode, we continue where we left off but take a look under the hood at the module code of the Drupal Subuser module.

By following along with this episode you will learn some things such as:

  • How to open up a Drupal module file and what to expect
  • How to find and locate an issue within a Drupal module file
  • How modules modify forms with hook_form_alter()
  • How to debug PHP variables in a Drupal module
  • How to test our fix to ensure it works correctly

If you have never seen a Drupal module before, this might be a little intimidating and I might go a little fast, but you will still learn a lot. You should start to see patterns in how different Drupal modules are structured. Good luck and happy module investigating!

Mar 28 2015

After my #epicfail that was BADCamp, to say that I was entering MidCamp with trepidation would be the understatement of the year. Two full days of sessions and a 1-and-1 track record were weighing heavily upon my soul. Add to the mix that I was coming directly off a 5-day con my company runs, and was responsible for MidCamp venue and catering logistics. Oh right, and I ran out of time to make instructions and train anyone else on setup, which only added to my on-site burden.

Testing is good.

After BADCamp, I added a powered 4-port USB hub to the kits, as well as an accessory pack for the H2N voice recorder, mainly for the powered A/C adapter and remote. All total, these two items bring the current cost of the kit to about $425.

In addition, at one of our venue walk-throughs, I was able to actually test the kits with the projectors UIC would be using. The units in two of the rooms had an unexplainable random few-second blackout of the screens, but the records were good and the rest of the rooms checked out.

Success.

After the mad scramble setting up three breakout rooms and the main stage leading up to the opening keynote, I can't begin to describe the feeling in the pit of my stomach after I pulled the USB stick after stopping the keynote recording. I can’t begin to describe the elation I felt after seeing a full record, complete with audio.

We hit a few snags with presenters not starting their records (fixable) and older PCs not connecting (possibly fixable), and a couple sessions that didn’t have audio (hello redundancy from the voice recorder). Aside from that, froboy and I were able to trim and upload all the successful records during the Sunday sprint.

A huge shout out also goes to jason.bell for helping me on-site with setups and capture. He helped me during Fox Valley’s camp, so I deputized him as soon as I saw him Friday morning.

Learnings.

With the addition of the powered USB hub, we no longer need to steal any ports from the presenter laptop. For all of the first day, we were unnecessarily hooking up the hub’s USB cable to the presenter laptop. Doing this caused a restart of the record kit. We did lose a session to a presenter laptop going to sleep, and I have to wonder whether we would have still captured it if the hub hadn’t been attached.

The VGA to HDMI dongle is too unreliable to be part of the kit. When used, either there was no connection, or it would cycle between on and off. Most, if not all, machines that didn’t have mini display port or direct HDMI out had full display port. I will be testing a display port to HDMI dongle for a more reliable option.

Redundant audio is essential. The default record format for the voice recorders is a WAV file. These are best quality, but enormous, which is why I failed at capturing most of BADCamp’s audio (RTFM, right?). By changing the settings to 192kbps MP3, two days of session audio barely made a dent in the 2GB cards that are included with the recorders. Thankfully, this saved three session records: two with no audio at all (still a mystery) and one with blown-out audio.

Trimming and combining in YouTube is a thing. Kudos again to froboy for pointing me to YouTube’s editing capabilities. A couple sessions had split records (also a mystery), which we then stitched together after upload, and several sessions needed some pre- or post-record trimming. This can all be done in YouTube instead of using a video editor and re-encoding. Granted, YouTube takes what seems like forever to process, but it works and once you do the editing, you can forget about it.

There is a known issue with mini display port to HDMI where a green tint is added to the output. Setting the external PVR to 720p generally fixed this. There were a couple times where it didn’t, but switching either between direct HDMI or mini display port to HDMI seemed to resolve most of the issues. Sorry for the few presenters that opted for funky colors before we learned this during the camp. The recording is always fine, but the on-site experience is borked.

Finally, we need to tell presenters to adjust their energy saver settings. I take this for granted, because the con my company runs is for marketing people who present frequently, and this is basically just assumed to be set correctly. We are a more casual bunch and don’t fret when the laptop sleeps or the screen saver comes up during a presentation. Just move the cursor and roll with it. But that can kill a record...even with the Drupal Association kits. I do plan to test this, now that I’ve learned we don’t need any power at all from the presenter laptop, but it’s still an easy fix with documentation.

Next steps.

Documentation. I need to make simple instruction sheets to include with the kits. Overall, they are really easy to use and connect, but it’s completely unfamiliar territory. With foolproof instructions, presenters can be at ease and room monitors can be tasked with assisting without fear.

Packaging. With the mad dash to set these up — combined with hourly hookups — these were a hot mess on the podium. I’ll be working to tighten these up so they look less intimidating and take up less space. No idea what this entails yet, so I’ll gladly accept ideas.

Testing. As mentioned, I will test regular display port to HDMI, as well as various sleep states while recording.

Shipping. Because these kits are so lightweight, part of the plan is to be able to share them with regional camps. There was a lot of interest from other organizers in these kits during the camp. Someone from Twin Cities even offered to purchase a kit to add to the mix, as long as they could borrow the others. A Pelican box with adjustable inserts would be just the ticket.

Sponsors. If you are willing to help finance this project, please contact me at [email protected]. While Fox Valley Camp owns three kits and MidCamp owns one, wouldn’t it be great to have your branding on these as they make their way around the camp circuit? The equipment costs have (mostly) been reimbursed, but I’ve devoted a lot of time to testing and documenting the process, and will be spending more time with the next steps listed above.

Mar 28 2015
Mar 28

Today I had the pleasure to attend Johannesburg's Drupal Camp 2015.

The event was organized by DASA that is doing a stunning job in gathering and energizing South Africa's Drupal Community. From community subjects to Drupal 8, we got to see a lekker variety of talks including those by Michael and me on "Drupal 8" and "How to run a successful Drupal Shop".

Special thanks to the organizers Riaan, Renate, Adam, Greg and Robin. Up next will be Drupal Camp Cape Town in September 2015.

Mar 28 2015
Mar 28

I recently read a very interesting article by a long-time .Net developer comparing the MEAN stack (Mongo-Express-Angular-Node) with traditional .Net application design. (I can't post the link because I'm unable to find the article!!).

Among other things, he compared the tedious process of adding a new field in the RDBM model (modifying the database, then the data layer, then the views and controllers) with the MEAN stack, where it was as simple as adding two lines of code to the UI.

In this article we will see how to get native JSON document support in your Drupal entities (mixing traditional RDBM storage with NoSQL in the same database) and explore the benefits of having such a combination.

Native JSON support in your current database engine

There's a chance your traditional database engine already offers NoSQL/document-storage-like capabilities.

In MS SQL Server we have had the XML field type for a long time now. This field type allows you to store XML documents inside an MS SQL Server field and query its contents using XPath. And it's been around since 2005.

This datatype was a great candidate to store data for our experiment, until I found out that PHP has crap XML support. I could not find a properly working (in a reasonable amount of time, such as 5 minutes or less) serialization/deserialization method to convert PHP objects to XML and the other way round.

So what about native JSON support? PostgreSQL already offers that. MS SQL Server users have been demanding support for JSON since 2011, but MS has still not made any progress on this. I guess they are too focused on making SQL Server work properly (and scale) to support Azure, rather than improving or adding new features.

Thank God MS SQL Server is extremely well designed and pluggable, and someone came out with a CLR port of the PostgreSQL JSON data type here. In the same way, MS has never supported GROUP_CONCAT, and we had to use more CLR to get it working.

Adding JSON support to Drupal's database abstraction layer

Now that we have the capability of storing and querying JSON documents in our RDBM database, let's see how we can support this from Drupal 7 in the least disruptive way possible.

We would have loved to be able to use our JSON property just as a 'serialized' field specification, but to do so we would have had to modify the serialization/deserialization logic scattered all over Drupal (and contrib) and add another setting to tell Drupal to use json_decode()/json_encode() instead of serialize()/unserialize().

No problem: we will tell Drupal that our JSON-based property is a text field and do the encoding/decoding ourselves when we need it.

The database engine is internally storing JSON fields as binary data.

For INSERTs and UPDATEs there is no issue, as you can pass the string-based document in your statement and the database engine will convert it to the binary internal representation. But when you try to retrieve a JSON field without modifying the database abstraction layer, you won't get the JSON document as expected but the binary data, so we will make a simple fix to the SelectQuery::__toString() code to retrieve these fields as JSON strings:

    foreach ($this->fields as $alias => $field) {
      $table_name = isset($this->tables[$field['table']]['table']) ? $this->tables[$field['table']]['table'] : $field['table'];
      $field_prefix =  (isset($field['table']) ? $this->connection->escapeTable($field['table']) . '.' : '');
      $field_suffix = '';
      $field_name = $field['field'];
      $field_alias = $field['alias'];
      if (isset($field['table'])) {
        $info = $this->connection->schema()->queryColumnInformation($table_name);
        // If we are retrieving a JSON type column, make sure we bring back
        // the string representation and not the binary data!
        if(isset($info['columns'][$field_name]['type']) && $info['columns'][$field_name]['type'] == 'JSON') {
          $field_suffix = '.ToString()';
        }
      }
      $fields[] = $field_prefix . $this->connection->escapeField($field_name) . $field_suffix . ' AS ' . $this->connection->escapeField($field_alias);
    }

Using JSON-based fields from Drupal

First of all we are going to add a storage property (field) to the node entity with the name extend. Our aim is to be able to store new "fields" inside this JSON field without having to alter the database structure.

To do so, implement an update to alter the database structure (you can use hook_update_N() for this):

<?php

namespace Drupal\cerpie\plugins\updates;

use \Drupal\fdf\Update\UpdateGeneric;
use \Drupal\fdf\Update\IUpdate;

class update002 extends UpdateGeneric implements IUpdate {

  /**
   * {@inheritdoc}
   */
  public static function Run(&$sandbox) {
    if (!db_field_exists('node', 'extend')) {
      db_add_field('node', 'extend', array(
        'type' => 'text',
        'length' => 255,
        'not null' => FALSE,
        // Tell the native type!
        'sqlsrv_type' => 'JSON'
      ));
    }
  }
}
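
If your project does not use a custom update framework like the one above, a plain hook_update_N() implementation in the module's .install file is the equivalent (a sketch reusing the same field specification; the update number is arbitrary):

/**
 * Add the JSON-backed extend column to the node table.
 */
function cerpie_update_7001() {
  if (!db_field_exists('node', 'extend')) {
    db_add_field('node', 'extend', array(
      'type' => 'text',
      'length' => 255,
      'not null' => FALSE,
      // Tell the native type!
      'sqlsrv_type' => 'JSON',
    ));
  }
}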

Then with a few hooks let's expose this new property:

/**
 * Implements hook_schema_alter();
 */
function cerpie_schema_alter(&$schema) {
  $schema['node']['fields']['extend'] = array(
    'type' => 'text',
    'length' => 255,
    'not null' => FALSE,
  );
}

/**
 * Implements hook_entity_property_info_alter(&$info);
 */
function cerpie_entity_property_info_alter(&$info) {
  $info['node']['properties']['extend'] = array(
    'label' => 'JSON Extend',
    'description' => 'Store additional JSON based data',
    'type' => 'text',
    'sanitize' => 'check_plain'
  );
}

That's basically everything you need to start storing and retrieving data inside a node in JSON format.

To store something inside the JSON document:

      // Fields to store inside the JSON document.
      $mydata = array('field1' => 'data1', 'field2' => 'data2');
      $e = entity_create('node', array ('type' => 'curso'));
      // Specify the author
      $e->uid = $user->uid;
      $e->extend = json_encode($mydata, JSON_UNESCAPED_UNICODE);
      // Create an Entity Wrapper for that new Entity
      $entity = entity_metadata_wrapper('node', $e);
      $entity->save();

Imagine we now want to add some additional data to the node storage (field3 in the example), with no need to alter the database schema:

      $e = node_load($nid);
      // Decode as an associative array so we can add new keys to it.
      $mydata = json_decode($e->extend, TRUE);
      // Fields to store inside the JSON document.
      $mydata['field3'] = 'data3';
      $e->extend = json_encode($mydata, JSON_UNESCAPED_UNICODE);
      // Create an Entity Wrapper for the updated Entity
      $entity = entity_metadata_wrapper('node', $e);
      $entity->save();

Up to now there's nothing new we could not have done with a string-based storage field. The power is unleashed when we are able to query/retrieve the JSON-based data atomically.

What we have now is just something that looks like text-based storage, but is natively backed by a JSON data type.

Exposing JSON-based fields to Views

We are going to create a Views field handler that will allow us to expose the JSON-based fields as regular fields (sort of - remember this is a proof of concept).

Tell Views that our module will be using its API:

/**
 * Implements hook_views_api().
 */
function cerpie_views_api() {
  return array('api' => 3);
}

Create a *.views.inc file that consumes hook_views_data_alter():

function cerpie_views_data_alter(&$data) {
  $data['node']['extend'] = array(
    'title' => t('Entity Json Extend'),
    'help' => t('Entity Json Stored Extended Properties'),
    'field' => array(
      'help' => t(''),
      'handler' => '\\Drupal\\cerpie\\views\\JsonExtendDataFieldHandler',
    )
  );
}

Now implement our JsonExtendDataFieldHandler class:

<?php

namespace Drupal\cerpie\views;

class JsonExtendDataFieldHandler extends \views_handler_field {
  /**
   * Implements views_handler_field#query().
   *
   * @see views_php_views_pre_execute()
   */
  function query() {
    $this->field_alias = 'extend_json_' . $this->position;
    $this->query->fields[$this->field_alias] = array(
      // TODO: This is user input directly in the query!! find something else...
      'field' => '[extend].Get(\'' . $this->options['selector'] . '\')', 
      'table' => NULL,
      'alias' => $this->field_alias);
  }
  
  /**
   * Implements views_handler_field#pre_render().
   */
  function pre_render(&$values) {
  }

  /**
   * Default options form.
   */
  function option_definition() {
    $options = parent::option_definition();
    $options['selector'] = array('default' => '$');
    return $options;
  }
  
  /**
   * Creates the form item for the options added.
   */
  function options_form(&$form, &$form_state) {
    parent::options_form($form, $form_state);
    
    $form['selector'] = array(
      '#type' => 'textfield',
      '#title' => t('Json Selector'),
      '#default_value' => $this->options['selector'],
      '#description' => t('Use a JSON path to select your data.'),
      '#weight' => -10,
    );
  }

  /**
   * Implements views_handler_field#render().
   */
  function render($values) {
    return $values->{$this->field_alias};
  }
}

We are done. Now you can easily retrieve any of the properties in the JSON document stored in the Extend field from the Views UI using a JsonPath selector.

A JsonPath selector is similar to XPath and will allow you to retrieve any piece of data from within the JSON document.

Final Words

This was just a proof of concept, but a promising experiment. With the given sample code you can easily implement a Views filter or sort handler based on any of the JSON-stored fields. The database abstraction layer probably needs some more love to support querying JSON fields in a more user-friendly way, so that users do not need to learn the JsonPath notation.

You can of course directly use the JsonPath notation inside your queries to filter, sort or retrieve information from the database.
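
For instance, here is a hedged sketch of such a direct query, reusing the Get() accessor and ToString() conversion from the snippets above (the exact accessor syntax depends on the CLR JSON implementation):

// Retrieve entity identifiers only, then perform a regular load.
$nids = db_query("SELECT nid FROM {node} WHERE [extend].Get('$.field3').ToString() = :value",
  array(':value' => 'data3'))->fetchCol();
$nodes = node_load_multiple($nids);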

What are the benefits of storing data in this way?

Flexibility and speed when storing new fields (reduced time to market and application disruption). If a customer asks you to store some additional information in one of your entities, just drop it into the JSON document. If in the future you need to sort or filter on this field, there is no need to convert it to a real field or property, because you can operate directly on the JSON document. Depending on how heavily you use the fields stored in the JSON, it can get to a point where performance dictates promoting them to real database fields.

Improved support for flexible (unforecasted) data needs. Imagine how much the Webform module's database usage could be improved if it stored form submissions in JSON format instead of its current table layout (which is, in fairness, a smart approach).


Going back to the original comparison of adding a new field to a MEAN stack application vs a traditional RDBM-based stack, you can use the technique explained here as a base to easily support new fields by simply touching the UI layer.

I think a very interesting contrib module could come out of this experiment (after solving some current design flaws and implementing the missing functionality) that would allow users to attach a JSON document to any entity and have this information exposed to Views. This potentially means no more hook_update implementations to alter the database schema for small/medium projects. And this module has the potential to be cross-database portable (PostgreSQL and MS SQL Server will work for sure; I don't know about MySQL).

Knowing that donations simply don't work, I wish there were a more robust commercial ecosystem for Drupal contrib modules that would provide the motivation to get this sort of idea off the ground and into the contrib scene faster and with better support. This would also help Drupal have better site-builder-oriented quality modules like Wordpress has (fostering Drupal usage for quick site builds) and reduce the number of abandoned or half-baked projects.

Mar 27 2015
Mar 27

The Drupalize.Me team typically gets together each quarter to go over how we did with our goals and to plan out what we want to accomplish and prioritize in the upcoming quarter. These goals range from site upgrades to our next content sprints. A few weeks ago we all flew into Atlanta and did just that. We feel it is important to communicate to our members, and the Drupal community at large, what we've been doing in the world of Drupal training and what our plans are for the near future. What better way to do this than our own podcast? Kyle Hofmeyer is joined by Joe Shindelar, Amber Matz, Blake Hall, and Will Hetherington to talk about everything from our Q1 successes to our Drupal 8 curriculum plans. Take a listen, celebrate with us, and hear about what we are working on next.

Mar 27 2015
Mar 27

If you are new to Drupal, take a look at our previous blog post, New To Drupal? These Videos Will Help You Get Started. If you've just gotten started in Drupal, how about we provide you with these short but thorough tutorial videos on Working with Content?

Introduction to nodes tutorial

In this tutorial, we take a 20,000-foot look at what nodes are and how they get embedded into a site. To get you creating content as quickly as possible, we're only going to look at the basics, and leave the more complex elements to later videos. This foundation is vital for orienting yourself for doing common adding and editing tasks.

[embedded content]

Adding and editing content in Drupal 7

Drupal makes it easy to add and edit content on your website. In this tutorial, we'll cover the fundamentals of how to add basic pages and articles, and how to go back later and edit them. We'll conclude by reviewing Drupal's built-in tools for finding content, which comes in pretty handy, particularly in larger websites.

[embedded content]

 

Drupal 7 node displays tutorial

Nodes on a Drupal site can be displayed in several different modes, each useful in a different context. For each mode, we can configure what content gets displayed and how it is formatted. In this tutorial we look at the different node display modes and how to manage them.

[embedded content]

Drupal Node publishing controls tutorial

Drupal can handle some pretty advanced publishing scenarios. The core installation provides us with a handful of settings that enable us to control many aspects of how nodes are published. In this video we dig deeper into node publishing and the revision options.

[embedded content]

We'll explore advanced content work flows in a later blog post, but for now, what do you think about our videos? Helpful? Have something to add? Leave them in the comments below!

Mar 27 2015
Mar 27

If you’re working on a site that needs subscriptions, take a look at Recurly. Recurly’s biggest strength is its simple handling of subscriptions, billing, invoices, and all that goes along with it. But how do you get that integrated into your Drupal site? Let’s walk through it.

There are a handful of pieces that work to connect your Recurly account and your Drupal site.

  1. The Recurly PHP library.
  2. The recurly.js library (optional, but recommended).
  3. The Recurly module for Drupal.

The first thing you need to do is bookmark the Recurly API documentation.
Note: The Drupal Recurly module is still using v2 of the API. A re-write of the module to support v3 is in the works, but we have few active maintainers right now (few meaning one, and you’re looking at her). If you find this module of use or potential interest, pop into the issue queue and lend a hand writing or reviewing patches!

Okay, now that I’ve gotten that pitch out of the way, let’s get started.

I’ll be using a new Recurly account and a fresh install of Drupal 7.35 on a local MAMP environment. I’ll also be using drush as I go along (Not using drush?! Stop reading this and get it set up, then come back. Your life will be easier and you’ll thank us.)

  1. The first step is to sign up at https://recurly.com/ and get your account set up with your subscription plan(s). Your account will start out in a sandbox mode, and once you have everything set up with Recurly (it’s a paid service), you can switch to production mode. For our production site, we have a separate account that’s entirely in sandbox mode just for dev and QA, which is nice for testing, knowing we can’t break anything.
  2. Recurly is dependent on the Libraries module, so make sure you’ve got that installed (7.x-2.x version). drush dl libraries && drush en libraries
  3. You’ll need the Recurly Client PHP library, which you’ll need to put into sites/all/libraries/recurly. This is also an open-source, community-supported library, using v2 of the Recurly API. If you’re using composer, you can set this as a dependency. You will probably have to make the libraries directory. From the root of your installation, run mkdir sites/all/libraries.
  4. You need the Recurly module, which comes with two sub-modules: Recurly Hosted Pages and Recurly.js. drush dl recurly && drush en recurly
  5. If you are using Recurly.js, you will need that library, v2 of which can be found here. This will need to be placed into sites/all/libraries/recurly-js.
    Your /libraries/ directory should look something like this now:

Which integration option is best for my site?

There are three different ways to use Recurly with Drupal.

You can just use the library and the module, which include some built-in pages and basic functionality. If you need a great deal of customization and your own functionality, this might be the option for you.

Recurly offers hosted pages, for which there is also a Drupal sub-module. This is the least amount of integration with Drupal; your site won’t be handling any of the account management. If you are low on dev hours or availability, this may be a good option.

Thirdly, and this is the option we are using for one of our clients and demonstrating in this tutorial, you can use the recurly.js library (there is a sub-module to integrate this). Recurly.js is a client-side credit-card authorization service which keeps credit card data from ever touching your server. Users can then make payments directly from your site, but with much less responsibility on your end. You can still do a great deal of customization around the forms – this is what we do, as well as customized versions of the built-in pages.

Please note: Whichever of these options you choose, your site will still need a level of PCI-DSS Compliance (Payment Card Industry Data Security Standard). You can read more about PCI Compliance here. This is not prohibitively complex or difficult, and just requires a self-assessment questionnaire.

Settings

You should now have everything in the right place. Let’s get set up.

  1. Go to yoursite.dev/admin/config (just click Configuration at the top) and you’ll see Recurly under Web Services.
  2. You’ll now see a form with a handful of settings. Here’s where to find the values in your Recurly account. Once you set up a subscription plan in Recurly, you’ll find yourself on this page. On the right hand side, go to API Credentials. You may have to scroll down or collapse some menus in order to see it.
  3. Your Private API Key is the first key found on this page (I’ve blocked mine out):
  4. Next, you’ll need to go to Manage Transparent Post Keys on the right. You will not need the public key, as it’s not used in Recurly.js v2.
  5. Click to Enable Transparent Post and Recurly.js v2 API.
  6. Now you’ll see your key. This is the value you’ll enter into the Transparent Post Private Key field.
  7. The last basic setup step is to enter your subdomain. The help text for this field is currently incorrect as of 3/26/2015 and will be corrected in the next release. It is correct in the README file, and on the project page. There is no longer a -test suffix for sandbox mode. Copy your subdomain either from the address bar or from the Site Settings. You don’t need the entire url, so in my case, the subdomain is alanna-demo.
  8. With these settings, you can accept the rest of the default values and be ready to go. The rest of the configuration is specific to how you’d like to set up your account, how your subscription is configured, what fields you want to record in Recurly, how much custom configuration you want to do, and what functionality you need. The next step, if you are using Recurly’s built-in pages, is to enable your subscription plans. In Drupal, head over to the Subscription Plans tab and enable the plans you want to use on your site. Here I’ve just created one test plan in Recurly. Check the boxes next to the plan(s) you want enabled, and click Update Plans.

Getting Ready for Customers

So you have Recurly integrated, but how are people going to use it on your Drupal site? Good question. For this tutorial, we’ll use Recurly.js. Make sure you enable the submodule if you haven’t already: drush en recurlyjs. Now you’ll see some new options on the Recurly admin setting page.

I’m going to keep the defaults for this example. Now when you go to a user account page, you’ll see a Subscription tab with the option to sign up for a plan.

Clicking Sign up will bring you to the signup page provided by Recurly.js.

After filling out the fields and clicking Purchase, you’ll see a handful of brand new tabs. I set this subscription plan to have a trial period, which is reflected here.

Keep in mind, this is the default Drupal theme with no styling applied at all. If you head over to your Recurly account, you’ll see this new subscription.

There are a lot of configuration options, but your site is now integrated with Recurly. You can sign up, change, view, and cancel accounts. If you choose to use coupons, you can do that as well, and we’ve done all of this without any custom code.

If you have any questions, please read the documentation, or head over to the Recurly project page on Drupal.org and see if it’s answered in the issue queue. If not, make sure to submit your issue so that we can address it!

Mar 27 2015
Mar 27

Keep Calm and Clear Cache!

This is an often used phrase in Drupal land. Clearing cache fixes many issues that can occur in Drupal, usually after a change is made and then isn't being reflected on the site.

But sometimes, clearing cache isn't enough and a registry rebuild is in order.

The Drupal 7 registry contains an inventory of all classes and interfaces for all enabled modules and Drupal's core files. The registry stores the path to the file that a given class or interface is defined in, and loads the file when necessary. On occasion a class may be moved or renamed, and then Drupal doesn't know where to find it, and what appear to be unrecoverable problems occur.

One such example might be if you move the location of a module. This can happen if you have taken over a site where all the contrib and custom modules are stored in the sites/all/modules folder and you want to separate that out into sites/all/modules/contrib and sites/all/modules/custom. After moving the modules into your neat subfolders, things stop working and clearing caches doesn't seem to help.

Enter registry rebuild. This isn't a module, it's a drush command. After downloading it from drupal.org, the registry_rebuild folder should be placed into the directory sites/all/drush.

You should then clear the drush cache so drush knows about the new command

drush cc drush

Then you are ready to rebuild the registry

drush rr

Registry rebuild is a standard tool we use on all projects now and forms part of our deployment scripts when new code is deployed to an environment.

So the next time you feel yourself about to tear your hair out and you've run clear cache ten times, keep calm and give registry rebuild a try.

Mar 27 2015
Mar 27

I have a standard format for patchnames: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, it involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request to that gives us data about the issue node including the comment count.
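
For the curious, a minimal sketch of that request in PHP (the api-d7 endpoint is drupal.org's public API; treating comment_count as the relevant response field is an assumption on my part):

// Hypothetical helper: derive the expected comment number for the patch name.
$issue = json_decode(file_get_contents('https://www.drupal.org/api-d7/node/1234.json'));
$comment_number = $issue->comment_count + 1;
echo "1234-{$comment_number}.myproject.brief-description.patch\n";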

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (i.e., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.
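
The branch matching boils down to something like this one-liner (a sketch; the actual pattern in the script may differ):

// Does the branch name look like a Drupal development branch, e.g. 8.x-4.x or 8.0.x?
$is_dev_branch = (bool) preg_match('/^(\d+\.x-\d+\.x|\d+\.\d+\.x)$/', $branch_name);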

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

Mar 27 2015
Mar 27

We proudly present our first official drupal.org project, which we released about two months ago: the Outdated Browser module :-)

So what is it about?

This module integrates the Outdated Browser library in Drupal. It detects outdated browsers and advises users to upgrade to a new version - in a very pretty looking way. The library ships with various languages. Its look and feel is configurable, and the targeted browsers can be configured either by specifying a CSS property or an Internet Explorer version.

Yet another browser deprecation library?

You may ask why you would need it when there are already other modules/libraries doing the same thing, like Browser update or jReject, and other solutions with the help of simple conditional comments. I would, so here's our motivation behind the project:

In the past, we've always used a "browse happy" bar on top of the page, simply embedded with conditional comments for IE lower than or equal to version 8. But that was not very pretty looking, and not multilingual either. As mentioned above, there are several initiatives and scripts available, like browser-update.org, which even already has a Drupal integration, but most of them are pretty ugly without customizing them.

A few months ago, we found the Outdated Browser library on GitHub, where both the update prompt on the website as well as the platform itself have a very nice look and feel. It also ships with a broad variety of language files, which makes this script perfectly suitable for any multilingual Drupal project. So we tried it, were absolutely satisfied with it, and decided to use it as our default browser deprecation warning tool. Furthermore, we decided to build a Drupal module for easy integration and share it with the community.

Benefits

Here are the main benefits of using Outdated Browser in comparison to other alternatives:

  • It looks pretty out of the box.
  • The look and feel can be easily customized via configuration.
  • The targeted browsers can be configured either by specifying an Internet Explorer version or a CSS property, so you are not limited to older IE versions.
  • The library ships with various language files (and they are constantly growing). Our module automatically serves the correct language, including fallbacks to the default language and English.

So come on and try it out yourself! Feedback welcome :-) In the meantime, I'll already be working on the Drupal 8 version...

Last but not least, many thanks to Bürocratik for developing and sharing this great library!

Mar 27 2015
Mar 27

In Drupal 7 the Field API introduced the concept of swappable field storage. This means that field data can live in any kind of storage, for instance a NoSQL database like MongoDB, provided that a corresponding backend is enabled in the system. This feature allows support of some nice use cases, like remotely-stored entity data or exploit storage backends that perform better in specific scenarios. However it also introduces some problems with entity querying, because a query involving conditions on two fields might end up needing to query two different storage backends, which may become impractical or simply unfeasible.

That's the main reason why in Drupal 8, we switched from field-based storage to entity-based storage, which means that all fields attached to an entity type share the same storage backend. This nicely resolves the querying issue without imposing any practical limitation, because to obtain a truly working system you were basically forced to configure all fields attached to the same entity type to share the same storage engine. The main feature that was dropped in the process, was the ability to share a field between different entity types, which was another design choice that introduced quite a few troubles on its own and had no compelling reason to exist.

With this change each entity type has a dedicated storage handler, that for fieldable entity types is responsible for loading, storing, and deleting field data. The storage handler is defined in the handlers section of the entity type definition, through the storage key (surprise!) and can be swapped by modules implementing hook_entity_type_alter().
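
For instance, a minimal sketch of such a swap (the MongoDB-backed storage class is hypothetical):

/**
 * Implements hook_entity_type_alter().
 */
function mymodule_entity_type_alter(array &$entity_types) {
  /** @var \Drupal\Core\Entity\EntityTypeInterface[] $entity_types */
  // Swap the node storage handler with our own implementation.
  $entity_types['node']->setHandlerClass('storage', 'Drupal\mymodule\MongodbNodeStorage');
}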

Querying Entity Data

Since we now support pluggable storage backends, we need to write storage-agnostic contrib code. This means we cannot assume entities of any type will be stored in a SQL database, hence we need to rely more than ever on the Entity Query API, which is the successor of the Entity Field Query system available in Drupal 7. This API allows you to write complex queries involving relationships between entity types (implemented via entity reference fields) and aggregation, without making any assumption on the underlying storage. Each storage backend requires a corresponding entity query backend, translating the generic query into a storage-specific one. For instance, the default SQL query backend translates entity relationships to JOINs between entity data tables.

Entity identifiers can be obtained via an entity query or any other viable means, but existing entity (field) data should always be obtained from the storage handler via a load operation. Contrib module authors should be aware that retrieving partial entity data via direct DB queries is a deprecated approach and is strongly discouraged. In fact by doing this you are actually completely bypassing many layers of the Entity API, including the entity cache system, which is likely to make your code less performant than the recommended approach. Aside from that, your code will break as soon as the storage backend is changed, and may not work as intended with modules correctly exploiting the API. The only legal usage of backend-specific queries is when they cannot be expressed through the Entity Query API. However also in this case only entity identifiers should be retrieved and used to perform a regular (multiple) load operation.

Storage Schema

Probably one of the biggest changes introduced with the Entity Storage API, is that now the storage backend is responsible for managing its own schema, if it uses any. Entity type and field definitions are used to derive the information required to generate the storage schema. For instance the core SQL storage creates (and deletes) all the tables required to store data for the entity types it manages. An entity type can define a storage schema handler via the aptly-named storage_schema key in the handlers section of the entity type definition. However it does not need to define one if it has no use for it.

Updates are also supported, and they are managed via the regular DB updates UI, which means that the schema will be adapted when the entity type and field definitions change or are added or removed. The definition update manager also triggers some events for entity type and field definitions, that can be useful to react to the related changes. It is important to note that not all kind of changes are allowed: if a change implies a data migration, Drupal will refuse to apply it and a migration (or a manual adjustment) will be required to proceed.

This means that if a module requires an additional field on a particular entity type to implement its business logic, it just needs to provide a field definition and apply changes (there is also an API available to do this) and the system will do the rest. The schema will be created, if needed, and field data will be natively loaded and stored. This is definitely a good reason to define every piece of data attached to an entity type as a field. However if for any reason the system-provided storage were not a good fit, a field definition can specify that it has custom storage, which means the field provider will handle storage on its own. A typical example are computed fields, which may need no storage at all.
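
As a sketch, a computed field with custom storage could be declared like this (the field name is hypothetical):

// No schema is created for this field and the storage handler will never
// try to load or store values for it: the field provider computes them.
$fields['full_name'] = BaseFieldDefinition::create('string')
  ->setLabel(t('Full name'))
  ->setComputed(TRUE)
  ->setCustomStorage(TRUE);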

Core SQL Storage

The default storage backend provided by core is obviously SQL-based. It distinguishes between shared field tables and dedicated field tables: the former are used to store data for all the single-value base fields, that is fields attached to every bundle like the node title, while the latter are used to store data for multiple-value base fields and bundle fields, which are attached only to certain bundles. As the name suggests, dedicated tables store data for just one field.

The default storage supports four different shared table layouts depending on whether the entity type is translatable and/or revisionable:

  • Simple entity types use only a single table, the base table, to store all base field data.
    | entity_id | uuid | bundle_name | label | … |
    
  • Translatable entity types use two shared tables: the base table stores entity keys and metadata only, while the data table stores base field data per language.
    | entity_id | uuid | bundle_name | langcode |
    
    | entity_id | bundle_name | langcode | default_langcode | label | … |
    
  • Revisionable entity types also use two shared tables: the base table stores all base field data, while the revision table stores revision data for revisionable base fields and revision metadata.
    | entity_id | revision_id | uuid | bundle_name | label | … |
    
    | entity_id | revision_id | label | revision_timestamp | revision_uid | revision_log | … |
    
  • Translatable and revisionable entity types use four shared tables, combining the types described above: the base table stores entity keys and metadata only, the data table stores base field data per language, the revision table stores basic entity key revisions and revision metadata, and finally the revision data table stores base field revision data per language for revisionable fields.
    | entity_id | revision_id | uuid | bundle_name | langcode |
    
    | entity_id | revision_id | bundle_name | langcode | default_langcode | label | … |
    
    | entity_id | revision_id | langcode | revision_timestamp | revision_uid | revision_log |
    
    | entity_id | revision_id | langcode | default_langcode | label | … |
    

The SQL storage schema handler supports switching between these different table layouts, if the entity type definition changes and no data is stored yet.

Core SQL storage aims to support any table layout, hence modules explicitly targeting a SQL storage backend, like for instance Views, should rely on the Table Mapping API to build their queries. This API allows retrieval of information about where field data is stored and thus is helpful to build queries without hard-coding assumptions about a particular table layout. At least this is the theory; however, core currently does not fully support this use case, as some required changes have not been implemented yet (more on this below). Core SQL implementations currently rely on the specialized DefaultTableMapping class, which assumes one of the four table layouts described above.
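
As a hedged sketch, asking the storage handler for its table mapping instead of hard-coding table names looks roughly like this (assuming a SQL-based storage):

// Where does the SQL backend store the node title?
$storage = \Drupal::entityManager()->getStorage('node');
$table = $storage->getTableMapping()->getFieldTableName('title');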

A Real Life Example

We will now have a look at a simple module exemplifying a typical use case: we want to display a list of active users having created at least one published node, along with the total number of nodes created by each user and the title of the most recent node. Basically a simple tracker.

User activity tracker

Displaying such data with a single query can be complex and will usually lead to very poor performance, unless the number of users on the site is quite small. A typical solution in these cases is to rely on denormalized data that is calculated and stored in a way that makes it easy to query efficiently. In our case we will add two fields to the User entity type to track the last node and the total number of nodes created by each user:

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Implements hook_entity_base_field_info().
 */
function active_users_entity_base_field_info(EntityTypeInterface $entity_type) {
 $fields = [];

 if ($entity_type->id() == 'user') {
   $fields['last_created_node'] = BaseFieldDefinition::create('entity_reference')
     ->setLabel('Last created node')
     ->setRevisionable(TRUE)
     ->setSetting('target_type', 'node')
     ->setSetting('handler', 'default');

   $fields['node_count'] = BaseFieldDefinition::create('integer')
     ->setLabel('Number of created nodes')
     ->setRevisionable(TRUE)
     ->setDefaultValue(0);
 }

 return $fields;
}

Note that fields above are marked as revisionable so that if the User entity type itself is marked as revisionable, our fields will also be revisioned. The revisionable flag is ignored on non-revisionable entity types.

After enabling the module, the status report will warn us that there are DB updates to be applied. Once complete, we will have two new columns in our user_field_data table, ready to store our data. We will now create a new ActiveUsersManager service responsible for encapsulating all our business logic. Let's add an ActiveUsersManager::onNodeCreated() method that will be called from a hook_node_insert() implementation:

 public function onNodeCreated(NodeInterface $node) {
   $user = $node->getOwner();
   $user->last_created_node = $node;
   $user->node_count = $this->getNodeCount($user);
   $user->save();
 }

 protected function getNodeCount(UserInterface $user) {
   $result = $this->nodeStorage->getAggregateQuery()
     ->aggregate('nid', 'COUNT')
     ->condition('uid', $user->id())
     ->execute();

   return $result[0]['nid_count'];
 }

As you can see this will track exactly the data we need, using an aggregated entity query to compute the number of created nodes.

Since we also need to act on node deletion (hook_node_delete()), we need to add a few more methods:

 public function onNodeDeleted(NodeInterface $node) {
   $user = $node->getOwner();
   if ($user->last_created_node->target_id == $node->id()) {
     $user->last_created_node = $this->getLastCreatedNode($user);
   }
   $user->node_count = $this->getNodeCount($user);
   $user->save();
 }

 protected function getLastCreatedNode(UserInterface $user) {
   $result = $this->nodeStorage->getQuery()
     ->condition('uid', $user->id())
     ->sort('created', 'DESC')
     ->range(0, 1)
     ->execute();

   return reset($result);
 }

In the case where the user's last created node is the one being deleted, we use a regular entity query to retrieve an updated identifier for the user's last created node.

Nice, but we still need to display our list. To accomplish this we add one last method to our manager service to retrieve the list of active users:

 public function getActiveUsers() {
   $ids = $this->userStorage->getQuery()
     ->condition('status', 1)
     ->condition('node_count', 0, '>')
     ->condition('last_created_node.entity.status', 1)
     ->sort('login', 'DESC')
     ->execute();

    // Reuse the injected storage handler instead of a static call.
    return $this->userStorage->loadMultiple($ids);
 }

As you can see, in the entity query above we effectively expressed a relationship between the User entity and the Node entity, imposing a condition using the entity syntax, that is implemented through a JOIN by the SQL entity query backend.

Finally we can invoke this method in a separate controller class responsible for building the list markup:

 public function view() {
   $rows = [];

   foreach ($this->manager->getActiveUsers() as $user) {
     $rows[]['data'] = [
       String::checkPlain($user->label()),
       intval($user->node_count->value),
       String::checkPlain($user->last_created_node->entity->label()),
     ];
   }

   return [
     '#theme' => 'table',
     '#header' => [$this->t('User'), $this->t('Node count'), $this->t('Last created node')],
     '#rows' => $rows,
   ];
 }

This approach is way more performant when numbers get big, as we are running a very fast query involving only a single JOIN on indexed columns. We could even skip it by adding more denormalized fields to our User entity, but I wanted to outline the power of the entity syntax. A possible further optimization would be collecting all the identifiers of the nodes whose titles are going to be displayed and preload them in a single multiple load operation preceding the loop.

Aside from the performance considerations, you should note that this code is fully portable: as long as the alternative backend complies with the Entity Storage and Query APIs, the result you will get will be the same. Pretty neat, huh?

What's Left?

What I have shown above is working code, you can use it right now in Drupal 8. However there are still quite some open issues before we can consider the Entity Storage API polished enough:

  • Switching between table layouts is supported by the API, but storage handlers for core entity types still assume the default table layouts, so they need to be adapted to rely on table mappings before we can actually change translatability or revisionability for their entity types. See https://www.drupal.org/node/2274017 and follow-ups.
  • In the example above we might have needed to add indexes to make our query more performant, for example, if we wanted to sort on the total number of nodes created. This is not supported yet, but of course «there's an issue for that!» See https://www.drupal.org/node/2258347.
  • There are cases when you need to provide an initial value for new fields, when entity data already exists. Think for instance to the File entity module, that needs to add a bundle column to the core File entity. Work is also in progress on this: https://www.drupal.org/node/2346019.
  • Last but not least, most of the time we don't want our users to go and run updates after enabling a module, that's bad UX! Instead a friendlier approach would be automatically applying updates under the hood. Guess what? You can join us at https://www.drupal.org/node/2346013.

Your help is welcome :)

So What?

We have seen the recommended ways to store and retrieve entity field data in Drupal 8, along with (just a few of) the advantages of relying on field definitions to write simple, powerful and portable code. Now, Drupal people, go and have fun!

Mar 27 2015
Mar 27

People love maps. People love being able to visually understand how locations relate to each other. And since the advent of Google Maps, people love to pan and zoom, to click and swipe. But what people hate is a shoddy mapping experience. Mapping can be hard, but fortunately, Drupal takes a lot of the pain away.

Why map?

You might want a map if you:

  • have a list of location-based data, e.g. venues, events, objects, people, groups
  • want a slick way of giving directions
  • need to search for locations

What map?

There's not just the ubiquitous Google Maps, but rather a host of mapping providers to choose from. Maps are made up of 'tiles' (the images that are downloaded and which go into making up the map), a coordinate grid upon which to place the tiles, and the actual location data you wish to display. These can come from different providers too, e.g. you might want to rely on Google's grid data, but come up with your own custom tile solution via TileMill or the like. Really, it's down to personal preference and your particular use case. We are rather fans of OpenStreetMap, for example, but many of our clients favour Google Maps' look and feel because that's what their users expect in a mapping interface.

Remember that a map can be more than dropping a pin and having done with it. Depending upon the technical solution chosen, you can

  • layer your data and allow users to switch layers on and off, thereby filtering data in a structured manner
  • cluster your data points to display large data sets in a meaningful way
  • draw features on your map, e.g. to outline an area or mark an area as special in some way
  • include popup information windows filled with rich, interactive media
  • have location search (when integrating with a search back-end such as Solr), or even proximity search
  • use custom map tiles, custom icons, a customized look & feel

There are many, many options, solutions and Drupal modules available, with choices to fit every need.

Map how?

There are a lot of ways to get mapping data onto a Drupal web page, ranging from quick & easy to really complex.

  • Cut & paste embed code from Google Maps
  • Gmap
  • Leaflet
  • Open Layers

The thing is, if all you need is a pin on a map to show the location of your office, then a straight embedded map will do perfectly. However, once you get into dynamic listings and the display of custom data and fancy-dan functionality, then you need to put in a bit of effort. Open Layers is definitely the weapon of choice for complex mapping projects. The Open Layers module provides everything you need to get going building maps through Views (yes, Views!) and with its ecosystem of plugins and add-ons, it is a really powerful tool.

With the what now?

Adding maps to your website is a great way to increase interactivity, hold users' attention, and encourage return visitors to the site, the latter of which is good for increasing your search engine optimisation. Many people think mapping is difficult, but with some practice it becomes a very enjoyable part of the website-building experience. Annertech have a lot of experience creating maps for our clients. Have a look at some examples of mapping projects that we've done to see what is practical and readily available in the real world.

Simple Google Map

On the website for the National Adult Literacy Agency (NALA), we created a map of Ireland which shows where every NALA course in the country is being run. It is quite a simple feature, where users can click on a pin to get some more information about the course. In the screenshot shown, for example, a user has clicked on a map marker to see details about an adult literacy course in Wexford.

Basic mapping in Drupal with Google Maps

Cluster Google Map

A more complex map was created for the Health Information and Quality Authority (HIQA), where we used a “clustered” approach to mapping. This means that if there are a number of pins/map markers placed in very close proximity, instead of everything getting bunched on top of each other, we create “clusters”. Each cluster looks like a broadcasting icon with a number. The number shows how many pins are within that cluster/area, and clicking on it allows you to see those pins much more easily. This is a little tricky to explain in words, so we will post two screenshots to illustrate.

1) The small pins are individual markers, the broadcast signals show how many individual pins are within that "cluster"

Clustering with mapping in Drupal

2) When a cluster is clicked on, the individual pins for that cluster are revealed. When an individual pin is clicked on, information relating to that pin is revealed.

Zooming in to a cluster Google map in Drupal

OpenStreetMap and Open Layers in Drupal

When working with the Local Government Management Agency (LGMA), we created a “filter” map. This map allows users to select which items they would like to see by ticking boxes. As many filter groups as you wish can be created in this manner. This screenshot shows the map with all council buildings in Ireland visible. Options are available here to also show where every library and/or fire station in the country is.

You'll also note that it does not have the default Google Maps style that you see on most sites. For this particular site, we used OpenStreetMap with different coloured pins for the different types of buildings being mapped. This approach can support other map types too if requested, including, but not limited to, Bing Maps, MapBox and XYZ map types.

OpenStreetMaps and Open Layers in Drupal for Ireland's Local Government sector

So that's just a short introduction to some of the fun we've had mapping with Drupal recently. In a follow-up blog post we'll show you how we created some of these maps and how you can integrate them with your project.

If you would like Annertech to help you add maps to your website, please feel free to contact us by phone on 01 524 0312, by email at [email protected], or using our contact form.

Mar 27 2015
Mar 27

I’ve just listened to the latest episode of the Modules Unraveled podcast by Bryan Lewis, which talked about The current job market in Drupal. And it made me think about my own journey as a Drupal developer, from zero to reasonably competent (I hope). The thing about this industry is that everything seems to move faster and faster. There’s a new best tool or framework released every other day. Developers are creating cool things all the time. And I feel like I’m constantly playing catch-up. But looking back to Day 1, I realised that I did make quite a bit of progress since then.

Learning on the job

I’ve been gainfully employed as a Drupal architect (the job title printed on my name cards) for 542 days as of the time of writing. That’s about a year and a half. I’m pretty sure I didn’t qualify for the job when I was hired. My experience up to that point was building a Drupal 6 site a couple of years before. I barely had a good grasp of HTML and didn’t even know the difference between CSS and Sass.

When I got the job, I felt that I didn’t deserve my job title. And I definitely did not want to feel that way for long. I spent a lot of time reading newbie web development and Drupal articles early on. There’s a lot you can learn from Google, and trust me, even now, I still google my way out of a lot of challenges on the job. However, I do recognise the fact that I seriously lucked out with this job. Let me explain.

I’m the type of person who learns best if I’m doing something for real. Meaning, as nice as courses from Codecademy and Dash are for learning the basics of web development, for me, they don’t really stick in my little brain as well as if I was building a real live website. Two weeks into the job, I was handed an assignment to build an entire website. It was like learning to swim by being tossed into the ocean, just the way I like it.

Having a mentor

But of course, googling alone is far from enough. Here’s where the lucked out bit comes in. I didn’t know it at the time, but the fact is, I joined a development team that was very strong in Drupal fundamentals and best practices. Our technical lead was a stickler for detail, and insistent on doing things the Drupal way. Since I didn’t know a class from a method, he didn’t have to re-educate any programming habits out of me for Drupal, because I had none to begin with. Sometimes it’s easier to deal with a blank slate.

First thing I learnt was version control, using Git. Now that was a steep learning curve. Oh, the amount of time I spent resolving git conflicts, undoing damage done to git repositories, the list goes on. Though it really feels like learning to ride a bicycle, once you get it, you can’t unlearn it.

I also learnt about the multitude of environments needed for web development very early on, as my technical lead patiently explained to me why we needed to follow the proper git workflow.

All the time.
No, you can’t just make the change on the live server.
Yes, you have to commit even if the change is tiny.

So even though I asked a lot of questions (I probably questioned everything I was told to do) I always got clear, justifiable answers. Which shut me up pretty quickly. In the rare event that I caught something that was arbitrary, my technical lead was gracious enough to let me “win”. My fellow developer was also extremely generous with her knowledge and patience. Let’s just say, with all the support I was getting, it would be odd if I didn’t pick things up quickly.

With such a small team, I got a lot of opportunities to take responsibility for entire projects. I wasn’t micromanaged and was pretty much given free rein to implement things so long as I followed best practices. Right, so that’s the day job bit.

Make stuff for fun

I think I’m somewhat of a control freak. I’ve tried using CSS frameworks on my projects. Didn’t like them, because I didn’t use a lot of the stuff that came in the whole package. I started off using the Zen theme as my starter, but 3 projects in, I decided to just write my own starter theme. Nothing fancy: it’s practically a blank slate; even the grid isn’t pre-written in. Partly because I wanted to know exactly how everything worked, and partly because I didn’t want anything in my code that I didn’t use.

I gravitated toward front-end development largely because HTML and CSS were very easy for me to pick up and understand. I also liked the fact that I could make things look pretty (yes, I'm shallow like that). Brilliant developers create amazing things almost every day.

Maybe one day I’ll be able to create something mind-blowing as well. For now, I’m content with just writing my own themes, playing around with Jekyll, exploring the Drupal APIs (can’t be a Drupal developer without this) and mucking around on CodePen.

Mingle with people who are better than you

Another thing I did was to attend meet-ups. Even though I felt like a total noob, I showed up anyway (though sometimes I dragged a friend with me). For the more technical meet-ups, it was quite intimidating, and still is even now sometimes. But I realised that as time went by, I understood more of what was happening. People didn’t seem like they were speaking Greek anymore. It’s still cool to get my mind blown by all the creative stuff people present at the meet-ups. They remind me of how much more there is to learn.

I found it really beneficial to attend creative meet-ups as well. The perspectives shared by all those speakers prompt me to examine my own thought processes. Because at the end of the day, it’s not just about writing code, it’s about how the artefacts we create are used by others.

Rounding things off

I’d like to hope that as I get better at this, I’ll be able to create more useful stuff. Despite the fact that I grumble and rant about fixing UAT bugs (can’t we just tell them it’s a feature?), inheriting legacy code, hitting deadlines and so on, the truth is, I love what I do. Nobody can predict the future, but I do hope that 5420 days in, I’ll still be doing this and loving it the same.

Mar 26 2015
Mar 26

Drupal is a great choice for your enterprise level web application project, and maybe even your website. Like every other framework under the sun, it’s also a terrible choice if you’re cavalier about the management of its implementation. Things go awry, scope creeps, budgets get drained, and hellfire rains down from the sky… You get it.  The good news is that there are steps you can take to prevent this from happening.

1. Consider Paid Discovery

Software projects go over budget. A lot. It would be disingenuous for me to tell you not to be prepared for that.

There’s good news on this front though. You, as the client, have complete control over this. Most software developers, or at least this one, don’t sit around thinking of ways to increase scope and add to cost, though that’s sometimes the image clients get.

The fact is we need good information to give you good estimates. You wouldn’t tell a real estate broker you’re looking for a log cabin in the $20,000 price range, then get mad when they don’t deliver the Taj Mahal. The more transparent you are about your needs early on, the better chance you have of hitting your budget and getting what you need.

The basic truth is: you get what you pay for. Investing in paid discovery will increase the time and effort that is put into the estimate, giving you a much better chance of hitting your cost / value sweet spot.

2. Work Closely with the Development Team, But not too Closely

It’s important to bring at least one member of the development team into the conversation early, even if you are weeks away from any actual development work. This exposes the developer to your thought process and allows him to begin thinking about how to work effectively with you to accomplish your goals for the project. Open communication between the development team and Project Owner should be a regular process.

But, there’s always a but. At the end of the day the developer’s job is to find technical solutions to your business goals. Communication is an important part of this process, but it can become a hindrance if not performed in a controlled manner. Short daily stand-ups or longer weekly check-ins should be sufficient. Any other communications should go through the Project Manager. Constantly peppering the development team with requests for Just-In-Time reaction is a surefire way to send your budget down the drain and drive down your derived value with it.

3. Be Available

The absolute worst thing you can do, that will lead to the certain demise of your project, is to neglect to nurture it. Nothing is more frustrating as a developer than a client who does not answer your questions. We’re not trying to waste your money with needless communications, honest. If you’re being asked a question by a developer, it’s probably a blocker. If the developer is blocked long enough, it leads to an assumption. More assumptions means less ROI.

4. Demos are Dangerous

Demos have an important place in the development world. They give clients a first-hand experience of what’s going on, and that’s great. There’s a flip side of the coin though.

Demoing incomplete work helps neither the client nor the development firm. This is usually because clients have unrealistic expectations about what a “Demo” is for. We are showing baseline functionality here. Not bells and whistles.

To avoid frustrating clients (we do want to make you happy after all) developers will soak time and energy into temporary solutions created strictly for the demo, then replace them with long term solutions later.

Ultimately you’d save money by reducing the number of demos and allowing developers to build things correctly in the first place.

5. Be Assertive About Your Goals, Be Flexible About Your Needs

Wait, shouldn’t I be assertive about both my goals and my needs? Well, no, at least not if you want to get the most out of your investment.

Clients understand their business goals. They don’t usually understand their technical needs to accomplish those goals. That’s where we, or whoever you end up hiring, come in.

Unfortunately clients usually lead with what they believe their technical needs are. If you advocate fervently enough, we’ll believe you. That’s where everyone gets into trouble. You’re far better off communicating the business goals you want to accomplish so we can work with you to determine the most cost-effective solution for your budget.

6. Be Afraid, Very Afraid of “Deliverables”

Lawyers love deliverables. They make contracts simple. They also tie you to a specific technical need, long before it has been determined that the technical need will achieve your business goals.

To be clear, web projects should have deliverables. They can be defined during the discovery process (the longer and more thorough discovery is, the better we can define the deliverables, see item 1), but you’re much better off defining the actual deliverable closer to the middle of the project. That is the only way to assess if the technical solution we’ve agreed upon provides you with the ROI you expect.

Along those lines, deliverables can also give a false sense of progress. If we meet 100% of the deliverables defined in the contract but fail to meet your business goals, I call that a waste of my time and your money.

The last piece of advice borrows from the Agile methodology. Ultimately the ball is in your court to determine what the best implementation process is for your organization, but remember that you, as the client, have a significant level of control over the amount you pay and the value you get.

Mar 26 2015
Mar 26

Palantir CEO Tiffany Farriss recently keynoted MidCamp here in Chicago where she spoke about the economics of Drupal contribution. In it, she explored some of the challenges of open-source contribution, showing how similar projects like Linux have managed growth and releases, and what the Drupal Association might do to help push things along toward a Drupal 8 release. You can give her presentation a watch here.

With this post, we want to highlight one of the important takeaways from the keynote: the Drupal 8 Accelerate Fund.

Drupal 8 Accelerate

Some of you are clients and use Drupal for your sites, others build those sites for clients around the world, and still others provide technology that enhances Drupal greatly. We also know that Drupal 8 is going to be a game changer for us and our clients for a lot of reasons.

While we use a number of tools and technologies to drive success for our clients, Drupal is in our DNA. In addition to being a premium supporting partner of the Drupal Association, we also count amongst our team members prominent Drupal core and contributed module maintainers, initiative leads, and Drupal Association Board, Advisory Board, and Working Group members.

We've all done our part, but despite years of support and contributions from countless companies and individuals, we need to take a new approach to incentivize contributors to get Drupal 8 done. That’s where the Drupal 8 Accelerate Fund comes in.

Palantir is one of seven anchor donors who are raising funds alongside the Drupal Association to support Drupal 8 development. These efforts relate directly to the Drupal Association's mission of uniting a global open source community to build and promote Drupal, and will support (and already have supported) contributors directly through additional dollars for grants.

The fund breaks down like this:

  • The Drupal Association has contributed $62,500
  • The Drupal Association Board has raised another $62,500 from Anchor Donors
  • Now, the Drupal Association’s goal is to raise contributions from the Drupal community. This is the chance for everyone from end users to independents to Drupal shops to show your support for Drupal 8. Every dollar donated by the community has already been matched, doubling your impact. That means the total pool could be as much as $250,000 with your help.

 
Drupal 8 Accelerate is first and foremost about getting Drupal 8 to release. However, it’s also a pilot program for the Drupal Association to obtain and provide financial support for the project. This is a recognition that, as a community, Drupal must find (and fund) a sustainable model for core development.

This is huge on a lot of levels, and those in the community have already seen the benefits with awards for sprints and other specific progress in D8. Now it’s our turn to rally. Give today and spread the word so we can all help move Drupal 8 a little closer to release.

[embedded content]

Mar 26 2015
Mar 26

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015; here are some of the possibilities that are achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call

Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Mar 26 2015
Mar 26

Looking back on 2014, it was a great year of events and conversations with people in and around Acquia, open source, government, and business. I think I could happily repost at least 75% of the podcasts I published in 2014 as "greatest hits," but then we'd never get on to all the cool stuff I have been up to so far in 2015!

Nonetheless, here's one of my favorite recordings from 2014: a terrific session that will help you wrap your head around developing for Drupal 8 and a great conversation with Fredric Mitchell that covered the use of Drupal and open source in government, government decision-making versus corporate decision-making, designing Drupal 7 sites with Drupal 8 in mind, designing sites for the end users and where the maximum business value comes from in your organization, and more!

---Original post from October, 2014---

Drupal in Government

"We were part of the original Whitehouse.gov [Drupal] build, and the We the People petition system that they use for the democratization of ideas. I feel it opened up this conversation about what open source was, the security of open source, and what it really meant in terms of democratic principles. Of course, in the land of politics, perception is important."

"I was the lead developer at one point on the Energy.gov project; that was one of the first Drupal 7 sites. That checked all of the political checkboxes:

  1. it was going to save money
  2. it brought various offices under the Department of Energy under the same branding
  3. they didn't have the recurring licenses
  4. and because it is open source, because we can manipulate it, we can custom tailor those content authoring experiences and those tools to the needs of the various offices while still having that kind of super administrator and allowing that person to control what needed to be controlled."

"Being able to enable government officials to get their message ... It's a public service ... We're continuing to work with the Senate, with the House of Representatives; it's been absolutely great because everyone understands at this point in 2014, that open, transparency, open source, democratization – they all have a single, underlying thread."

Presenter Dossier: Fredric Mitchell

  • Senior Engineer, Phase 2
  • Drupal.org profile: fmitchell
  • Website: http://brightplum.com/
  • Twitter: fredricmitchell
  • 1st Drupal memory: Meeting Drupal while working with Larry Garfield at Palantir ... "My first Drupal memory was trying to absorb all of Larry's eagerness and enthusiasm about Drupal 5."

Session Description

Now that you know how to build sites, it's time to take the next step and jump into the Drupal 8 API. This session reviews the 30 API functions that you should know to begin your journey.

This is an updated version of my popular talk at Drupalcamp Chicago and Drupalcamp Costa Rica that now covers Drupal 8!

We'll jump through common API examples with some derived examples from the excellent Examples module and others that will carry over from Drupal 7.

Attendees will learn the behind-the-scenes functions that power common UI elements with the idea of being able to build or customize them for your projects. Some of these include:

  • drupal_render()
  • entity_load_multiple() (node_load_multiple in D7)
  • entity_view_multiple()
  • menu_get_tree()
  • taxonomy_get_tree()
  • Field::fieldInfo()->getField() (field_info_field in D7)
  • QueryBase class (EntityFieldQuery in D7)
  • Request->attributes->get(‘entity’) (menu_get_object in D7)
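
As a quick taste of the kind of D7-to-D8 shift the session covers, here's a minimal sketch for loading several nodes (illustrative only, not taken from the slides):

// Drupal 7.
$nodes = node_load_multiple(array(1, 2, 3));

// Drupal 8: the generic entity equivalent.
$nodes = entity_load_multiple('node', array(1, 2, 3));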

Session Video

[embedded content]

Session slides

You can grab a copy of Fredric's slides he updated for DrupalCon Austin from his GitHub repository.

Interview video

[embedded content]

Mar 26 2015
Mar 26

Introduction

After upgrading this site to a nice shiny Beta, I was itching to try theming on Drupal 8. I had left it off up to now, as a few simple experiments showed me that even a simple sub-theme broke quickly under the pace of Drupal change; now, though, I should be able to upgrade any efforts and improvements without too much difficulty.

I theme Drupal every now and again, but spend more time doing back-end and server-related work. I usually have to have a good understanding of the mechanics of theming, though, even when not actively doing it.

Often in the past I have been at odds with the theming philosophy of teams I am working with (and have had to capitulate when outnumbered ;)), as I am more in the camp that would rather strip out most of the guff that Drupal inserts and break away from the 'rails' that make many Drupal sites turn out kind of samey (apparently the 33% camp).

Also, when working with talented front-end developers who don't necessarily deal mostly with Drupal, it seems such a shame to clip their wings; I would rather try and start with a theme like Mothership.

The challenge

The assumption I had was that Drupal 8 would be much easier to customise and "go your own way" with than Drupal has ever been before. The mini-challenge I set myself was to re-implement the look from another site, chris-david-hall.info, which runs on ExpressionEngine, and to use the same CSS stylesheet verbatim (in the end I changed one line).

The theme is pretty basic, based on Bootstrap 3, but even so it has a few elements of structure that are not very Drupally, which made for an interesting experiment.

More than enough for my first attempt.

The result

Well this site no longer looks like a purple Bartik, and does bear more than a passing resemblance to the site I ripped the CSS from.

It was pretty easy to restructure things, and Twig theming in Drupal is a massive improvement. I am now convinced that Drupal 8 wins hands down over Drupal 7 for themeability.

There is still a lot more stuff I could strip out; this was a first pass, so I am going to take a breather and come back to it. I have a couple of stylesheets left from Drupal to keep the in-line editing and admin stuff (mostly) working. I would prefer to target those bits more selectively.

The theme is on Github, just for interest and comparison at the moment, but depending on later experiments might turn into something more generically useful. 

Still a few glitches

It is a bit difficult working out if I have done something wrong or whether I am encountering bugs in the Beta; I will take the time to find out if issues have been raised when I get the chance. There are problems: for example, for an anonymous user the home link is always active, and some blocks seem to leave a trace even when turned off for a page (which messes with detecting whether a sidebar is active, for example). Both of these problems also exhibit in Bartik, though.

I plucked the theme from my site at chris-david-hall.info and it needs a lot of work anyway; I am hoping to improve both sites in tandem now.

Mar 26 2015
Mar 26

How to change the Solr search query for one facet using these modules: Search API, Search API Solr, Facet API, Facet API Bonus, and some custom code.

I have configured these modules to have a search page showing content that you search for, and a list of facet items that you can filter the search result on. In my case the facet items were a representation of node types that you could filter the search result with. There are tons of blog posts on how to do that; see the Search API Solr documentation.

The facet item list can look like this (list of node types to filter search result on):

- Foo (22)
- Bar (18)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

What I wanted to achieve was to combine two facet items into one, so the list would look like this:

- Foo and Bar (40)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

The solution was the Search API hook hook_search_api_solr_query_alter(). I needed to change the query only for the facet item (node type) "Foo", and try to include (node type) "Bar" in the search query. So I fetched the facet item name by digging deep into the "$query" argument.

<?php
/**
 * Implements hook_search_api_solr_query_alter().
 */
function YOUR_CUSTOM_MODULE_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  // Fetch the facet item name to change the Solr query on.
  $facet_item = $query->getFilter()->getFilters();
  if (!empty($facet_item)) {
    $facet_item = $facet_item[0]->getFilters();
    if (!empty($facet_item[0])) {
      if (!empty($facet_item[0][1])) {
        $facet_item = $facet_item[0][1];
        // This is the facet item to change the Solr query on ("Foo");
        // also add node type "Bar" to the filter.
        if ($facet_item === 'foo') {
          $call_args['params']['fq'][0] = $call_args['params']['fq'][0] . ' OR ss_type:"bar"';
        }
      }
    }
  }
}
?>

We have now altered the Solr query, but the list looks the same; the only difference now is that if you click on the "Foo" facet you will get both "Foo" and "Bar" (node type) nodes in the search result.

To change the facet item list, I used the Drupal hook hook_facet_items_alter() provided by the contrib module Facet API Bonus.

<?php
/**
 * Implements hook_facet_items_alter().
 */
function YOUR_CUSTOM_MODULE_facet_items_alter(&$build, &$settings) {
  if ($settings->facet == "type") {
    // Save this number to add to the combined facet item.
    $number_of_bar = $build['bar']['#count'];
    foreach ($build as $key => $item) {
      switch ($key) {
        case 'foo':
          // Change the title of the facet item to represent two facet items
          // (Foo & Bar).
          $build['foo']['#markup'] = t('Foo and Bar');
          // Smash the counts of hits together.
          if ($build['foo']['#count'] > 0) {
            $build['foo']['#count'] = $build['foo']['#count'] + $number_of_bar;
          }
          break;

        // Remove this facet item now that the Foo item includes this node
        // type in the search result.
        case 'bar':
          unset($build['bar']);
          break;
      }
    }
  }
}
?>

After this, the list should look the way we want:

- Foo and Bar (40)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

I also have text printed out by the Facet API submodule Current Search. This module lets you add blocks with text and tokens. In my case I added text when you searched and filtered, to inform the user what they had just searched for and/or filtered on. This can be done by adding existing tokens on the Current Search module configuration page "admin/config/search/current_search". The problem for me was that the token provided used the facet item that Facet API created, not the one I changed. So I needed to change the token text "Foo" to "Foo & Bar". This can be accomplished with hook_tokens_alter().

<?php
/**
 * Implements hook_tokens_alter().
 */
function YOUR_CUSTOM_MODULE_tokens_alter(array &$replacements, array $context) {
  if (isset($replacements['[facetapi_active:active-value]'])) {
    switch ($replacements['[facetapi_active:active-value]']) {
      case 'Foo':
        $replacements['[facetapi_active:active-value]'] = 'Foo and Bar';
        break;
    }
  }
}
?>

And that's it.
Link to all code

Mar 26 2015
Mar 26

Hopefully you've heard the news about the Drupal Association's new D8 Accelerate grants program, and the fundraising drive we have currently going on. If not, the gist is that the Drupal Association has created a central fund, managed by the Drupal core committers, to fund both "bottom-up" community grants for things like targeted sprints or "bug bounties," as well as "top-down" spending driven from the core committers on larger strategic initiatives that help accelerate Drupal 8's release. All D8 Accelerate grants that are provided are tracked centrally at https://assoc.drupal.org/d8accelerate/awarded, including what the money was used for, how much was spent, to whom it went, and a report from the grant recipient(s) that outlines the work that was accomplished.

However, it can be a little hard to parse from that format the larger meaning/context of this work, especially if you don't spend upwards of 30 hours per week in the core queue like most of these folks. :) As Chief Wearer of Many Hats™, I sit on both the Drupal Association board and the committee that manages these funds. This puts me in a good position to provide a bit of "behind the scenes" info on how the funding process works, as well as provide some of the larger story of how these funds are benefitting not only Drupal 8 core, but the larger Drupal ecosystem.

Drupal 8 Acceleration

Performance

A flamegraph of what functions take the longest on the user/password page in Drupal 8
Source: https://www.drupal.org/node/2370667

As noted in my post-DrupalCon Bogotá critical issue run-down, performance improvements are a large chunk of the work remaining in Drupal 8. We had deliberately postponed most of this work until post-beta to avoid premature optimization and to allow all the major architectural chunks to be in place. However, we definitely can't release Drupal 8 while it is much slower than Drupal 7.

The D8 Accelerate fund has been instrumental in not only helping to address critical performance regressions, but also in accelerating the development of Drupal 8's next-generation cache system.

Or, to sum it up in a cheesy catch-phrase:

For less than $10,000, we're making Drupal 8 twice as fast! :D

Win!

Unblocking the beta-to-beta upgrade path

The second large focus of D8 Accelerate grants has been around D8 upgrade path blockers. These are extremely strategic, because they unblock a beta-to-beta upgrade path between Drupal 8 releases, which is vital for early Drupal 8 adopters.

For example, one large chunk of work D8 Accelerate has funded is in making sure Entity Field API and Views work well together. This is critical for features such as Multilingual, so content shows up in the right languages when expected, and it's necessary to complete this work prior to providing an upgrade path since it would be horrendous to write hook_update_N() functions for some of the necessary changes.

We're also exploring other alternatives to provide a beta-to-beta upgrade path to early adopters sooner, which is viable now that the hardest data-model-changing issues are done.

Security

Drupal drop on a padlock
Source: http://www.codepositive.com/code-positive/about

Obviously, we do not want to release Drupal 8 with known security vulnerabilities, but neither do we want to release an "upgrade path beta" that we encourage early adopters to use with known security vulnerabilities. Hence, we are trying to get any critical security issues taken care of sooner than later.

For example, one chunk of work that D8 Accelerate is funding in this area is tightening security around Drupal 8 entity API. Numerous form validation functions in core contain entity-level validation, to the wild dismay of anyone who's ever tried to implement web services on top of Drupal. The reason that's bad is because if you attempt to save something using just the Entity API (as you will in a REST API scenario where there are no forms), you will end up skipping validation routines and could end up with invalid and/or insecure data entry.

Targeted Sprints to Crush Criticals

Menu link critical sprint at DrupalCamp New Jersey
Source: https://groups.drupal.org/node/456353

As demonstrated at the Ghent critical issue sprint last year, nothing is better for D8's velocity than getting a bunch of awesome contributors in the same place to pound on the critical queue together. D8 Accelerate funded a fantastic Menu link critical sprint at DrupalCamp New Jersey which, in addition to turning this area of criticals from "OMGWTFBBQ" to an actionable plan, resulted directly or indirectly in all of the related critical issues in this area being closed, within a couple of weeks after the sprint.

We aim to do more of these same types of targeted sprints throughout the year, the next one being the DrupalCI sprint in a couple of weeks. More on that in a sec.

Drupal Community acceleration

Testbot modernization

Architectural diagram of new testbot
Source: https://www.previousnext.com.au/blog/architecting-drupalci-drupalcon-ams...

Our beloved Drupal.org testbot has been showing its age. https://qa.drupal.org/ is still on Drupal 6, and largely on life-support these days. Testbot doesn't support testing on multiple PHP versions and databases, both of which are Drupal.org (websites/infra) blockers to a Drupal 8 release.

The DrupalCI: Modernizing Testbot Initiative is being driven by numerous devops-inclined community members from around the world. It aims to rebuild testbot from a collection of Drupal modules to a more standard CI stack (using big fancy words like Jenkins and Docker and Puppet and Travis and Silex) that your average PHP/devops folk can both understand and help maintain.

There's been a lot of work on DrupalCI already, and the upcoming D8 Accelerate sprint on DrupalCI: Modernizing Testbot Initiative will bring all the various contributors together to form DrupalCI Voltron to get an MVP of all the various pieces working together. Actual deployment will happen some time later, and both the new and old testbot will run alongside each other for a good while so any kinks can be worked out while D8 development stays stable.

This particular improvement not only allows Drupal 8 to ship, it also will provide great new functionality to all projects on Drupal.org! The architecture allows for ample room for later expansion as well, so we could start doing things like automated code reviews, performance testing, front-end testing, etc.

People power!

Here are some of the awesome Drupal contributors who've benefited from these funds:

Not just code!

D8 Accelerate funds are being used not only to fund development work, but also to fund patch reviews as well as more "project management"-y tasks like "triaging" a set of issues to find the truly critical ones, research on different approaches, etc. Wherever possible, the core committers explicitly look for opportunities to fund two people, usually a developer and a patch reviewer, in order to maintain the sanctity of Drupal core's peer review process.

I think this is great, because it highlights that it's not just raw PHP that's going to get Drupal 8 out the door; it's a joint effort of many complementary skills coming together.

Show me the money!

Skeptical cat is fraught with skepticism
Source: https://blackincense.files.wordpress.com/2008/10/skeptical-cat-is-fraugh...

Why $250k to accelerate D8?

This is a very reasonable question to ask, particularly in light of the widely-cited statistic of 2,750+ contributors to Drupal 8, and various Drupal companies employing major contributors to Drupal core. Here are a few points:

  1. Drupal 8 will be a truly revolutionary release, not only by providing tons more useful functionality out of the box for site builders and content authors (WYSIWYG, mobile support, Views, configuration management, etc.), but by modernizing the underlying code base to address years of technical debt, and help "future-proof" Drupal for the next 10+ years. Unsurprisingly, this means that the total amount of work that has already gone into D8, and that remains needed to move D8 from a late beta to a release, is larger than it was for earlier versions of Drupal. Most technology maturations follow that pattern.

    Drupal 8's release also unlocks the move to a new release cycle that introduces backwards-compatible feature releases every 6 months. This allows us to "release early, release often," as opposed to "release every 4+ years, coupled with lots and lots of API breaks." ;)

  2. For these reasons, as well as many others, there's significant community benefit for 8.0.0 to be released as soon as possible, both so that sites can be built on it, and so that the 8.1.x branch can be opened for development for everyone with a feature idea itch to scratch. Additionally, many organizations and individuals who would benefit from D8 getting released sooner than later don't have the expertise or time to solve the remaining critical issues. These organizations/people might be willing to contribute money, but don't know who to best send it to or don't want to deal with the administration of contracting directly with individual core contributors. This fund is an opportunity for those organizations/people to make a difference without dealing with that administration.

    Make no mistake: Drupal 8 will get done, with or without this money. The goal of the fund is not about saying that our current awesome core contributor base is incapable of completing the work; it's only a recognition that funding work can make it happen faster.

  3. Why is this so? It's a common misconception that most core developers are paid for their work, either by Drupal companies who employ them, or by their customers. In reality, those directly financially compensated for their contributions to core (and especially to Drupal 8, which is not yet commercially viable for the masses) are a tiny fraction of the overall number of contributors.

    While there are numerous contributors who have already spent literally years contributing to core during their nights and weekends, and as a result have developed the kind of expertise needed to finish some of the remaining hard critical issues, relying on their ongoing availability of free time is not sustainable. These include contributors who work as freelance developers for clients, and it's certainly unfair to expect these people to turn down paid client work in order to have free time to work on core, or to quit being freelancers and become employees of forward-thinking Drupal companies who provide company time for core contribution. One of my favorite aspects of D8 Accelerate is that it is helping to "level the playing field" by making it possible for these people to have time to work on core regardless of their current employment situation.

  4. It's also important to emphasize here that injecting funding into the "bug fix slog" phase of major Drupal releases, when all the fun stuff that tends to motivate volunteers is long exhausted, is nothing new. That should come as no surprise, given that there have always been companies with financial interests in having a given version of Drupal ready sooner. For example, in Drupal 6, Acquia funded release manager Gábor Hojtsy full-time to help get that release done. In Drupal 7, in addition to employing core contributors full-time, Examiner.com paid numerous "bug bounties" out to folks to help slay specific critical issues. The difference here is that the DA as a non-profit organization needs to be extremely transparent in anything it's doing with the community's money, so there is greater visibility on things this time around.

If you don't want to donate, that's totally okay. You'll still be able to use Drupal 8 all you want, for free, when it's ready. Donating to this fund is only an opportunity to help make that happen sooner, if that's sufficiently valuable to you.

For a lot more "deep thinking" around these topics, see:

Many thanks to effulgentsia for his extensive help on this part!

How do you decide on how money gets spent?

The core committers have a well-documented process that explains how we decide what to fund. The TL;DR version is we look at criteria like:

  • Is a proposal genuinely a release blocker to Drupal 8, or something that will otherwise directly lead to an accelerated Drupal 8 release? (That's a biggie.)
  • Is a proposal resolving a blocker to other work, especially other release blockers?
  • Is a proposal resolving an "ecosystem" blocker? (For example "D8 upgrade path" issues that block early D8 adoption, blockers to a major portion of contributed modules/themes porting)
  • Is this a place where we can inject funding to take an issue the "last 20%" and get it across the finish line quickly?
  • Is momentum in this area slow, making it unlikely to be fixed "organically" by D8 contributors?
  • Are the people working in this area not directly funded (by an employer or client) to fix it already?
  • Do we have some confidence that funding will lead to a successful outcome?

Proposals that answer "yes" to more of these questions than not are more likely to get funded. And the D8 Accelerate team is constantly on the lookout for things that meet this criteria and proactively reaching out to contributors to help get things started.

In short, we take our responsibility with the community's money very seriously, and have turned down multiple community proposals that were fantastic ideas, but did not fit this criteria. (Where appropriate, we refer folks over to the Community Cultivation Grants instead.)

Also please note that a previous restriction around people asking for funding for their own time was lifted a month or so back (Thanks, DA!). So if you are a contributor who knows a lot about critical issue #12345, you can request a stipend (initially capped at $500 for five hours) to help push it forward.

If that sounds like you, or you have other creative ideas on how we can get Drupal 8 out faster, apply for a grant today!

Thank you for your support!

I wanted to take the opportunity to give a huge shout-out to the "anchor donors" of the D8 Accelerate campaign:

Thanks to their efforts, every dollar you contribute is already matched by the Drupal Association and these anchor donors, doubling your impact. If you'd like, you can make a donation to my fundraising drive (I've set a very ambitious goal of $20,000 since that's 8% of $250,000 — get it? ;)):

[embedded content]

...or, find your favourite Drupal person at https://www.crowdrise.com/d8accelerate/fundraiser and donate to theirs instead, or create your own! :)

Thanks as well to the folks who somehow stumbled across it and donated to my fundraiser already—Andreas Radloff, Douglas Reith, and Ian Dunn. I thought Ian's note was particularly awesome! :D

As a WordPress developer, I think we're all on the same side and wish you the best :)

And finally, thank YOU for any and all support you can provide that will help us make Drupal 8 the most successful release of Drupal yet! :D If you have any other questions, please feel free to ask!

Mar 26 2015
Mar 26

We’ve already discussed why accessibility matters and whetted your appetite on accessibility by providing a few simple things you can do to make your site more accessible. Now it’s time to look at accessibility in forms, a common component of just about every site. Inaccessible forms can affect users with a wide array of disabilities - screen reader users, keyboard-only users, and those with cognitive or mobility impairments - and create unnecessary barriers for what could be loyal website visitors and customers. Here are 5 items to get you started:

1. Thou shalt use the <label> tag.

The label tag associates a single label with a single form element (text fields, radio buttons, checklists, etc.) and gets read by the screen reader when focus is placed on the form element, so that a user knows what type of input is expected. The “for” attribute is used to associate the label with the form element by matching the value of the “for” attribute to the value of the element’s “id” attribute. This association should be unique - one label per form element. Using the id attribute fosters this, since id’s should be unique on a page as well. This also has the side effect of being able to select an option or put focus on an element by clicking on the label, which is great for someone who has difficulty being accurate with a mouse. And remember, the placeholder attribute should not be used as a replacement for a label.

Example:

<label for="myinput">Label text</label>
<input id="myinput" name="textfield" type="text">

2. Thou shalt use fieldsets and legends for grouping.

Use fieldset and legend tags for grouping related input elements, particularly with sets of radio buttons and checkboxes. Use the legend tag with fieldsets to associate a description with the group of elements. The text of the legend element will be read by the screen reader when a person tabs to an input element in the group, whereas text placed outside of the legend would be completely skipped by screen readers when tabbing from form element to form element.

Example:

<fieldset>
    <legend>Choose your favorite sport:</legend>
    <input id="soccer" type="checkbox" name="sports" value="soccer">
    <label for="soccer">Soccer</label><br>
    <input id="basketball" type="checkbox" name="sports" value="basketball">
    <label for="basketball">Basketball</label><br>
    <input id="quidditch" type="checkbox" name="sports" value="quidditch">
    <label for="quidditch">Quidditch</label><br>
</fieldset>

3. Thou shalt clearly indicate required fields as part of the label text.  

Optimally, required fields would have “(required)” in the label text of each required field as not all screen readers read an asterisk by default.  Alternatively, if most fields are required, instead of polluting every label with “(required)”, non-required fields could have “(optional)” as part of the label. Either way, required fields should be clearly indicated.

Example:

<label for="myinput">Label text (required)</label>
<input id="myinput" name="textfield" type="text">

4. Thou shalt keep all form elements accessible by a keyboard and in logical tab order.  

Not everyone uses a mouse.  Some users navigate a form by tabbing from element to element. By default, form elements can receive focus by the keyboard but it is possible to break this behavior with javascript.  Be sure it stays intact.  In addition, the tab order of the form elements should be logical.  Bouncing from First Name to Address to Last Name is likely to cause a non-sighted user to question whether they’re missing something.  Generally, staying away from tables for form layout and allowing forms to keep their linear display is all that’s required since tab order is set according to the order of the form elements in the source code.  But there are times when tables are needed for forms and tab order needs to be addressed with the tabindex attribute. However, use the tabindex attribute with caution.    
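
For instance, here's a minimal sketch where the source order alone produces a logical tab order, with no tabindex attribute needed:

<!-- Tab order follows source order: First Name, Last Name, then Address. -->
<label for="first-name">First Name (required)</label>
<input id="first-name" name="first_name" type="text">
<label for="last-name">Last Name (required)</label>
<input id="last-name" name="last_name" type="text">
<label for="address">Address</label>
<input id="address" name="address" type="text">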

5. Thou shalt clearly provide feedback to the user.

Whether it’s a validation error or successful form submission, users should be clearly notified.  Validation errors should include what field was rejected and why.  W3C provides an excellent overview of common approaches for placement of these success and error notifications.
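
As a rough sketch of one common approach (illustrative markup, not taken from the W3C overview itself), an error summary wrapped in role="alert" will be announced by screen readers as soon as it appears, with links taking the user straight to each rejected field:

<div role="alert">
  <p>There were 2 errors in your submission:</p>
  <ul>
    <li><a href="#email">Email (required): please enter a valid email address.</a></li>
    <li><a href="#name">Name (required): this field cannot be empty.</a></li>
  </ul>
</div>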

For basic forms, following these commandments will create a form usable by a wider audience.  However, forms can get vastly more complicated.  Sometimes there IS a need to associate one label with multiple form elements.  Sometimes a label needs to be hidden.  Sometimes we need to use custom controls like star ratings and share buttons.  A couple great resources for more in-depth form accessibility are:

Go forth and accessify!

Additional Resources

5 Simple Things You Can Do To Make Your Site More Accessible | Mediacurrent Blog Post
Why Accessibility Matters | Mediacurrent Blog Post
Web Accessibility Terminology | Mediacurrent Blog Post

Mar 26 2015
Mar 26

The worst time to read software documentation is when you're trying to fix something that is broken and you have no idea why. I'd say it's like shopping when you're hungry, but it's actually the opposite. When stuff breaks for no apparent reason and you're on edge it's easy to notice every little issue with the docs, and you instantly form a very strong opinion on documentation.

The good news is that as a Drupal developer, you have tons of awesome documentation just a click away on api.drupal.org or drupal.org/documentation/develop and contributed module documentation is better than ever. Even though almost everything you could ever want to know about Drupal internals is available on api.drupal.org (even the source code!) sometimes you need to combine that with contributed docs, or dig in a little deeper.

That's exactly what I had to do while working on the first few chapters of Model Your Data with Drupal. Hooks like hook_entity_info() are well documented on api.drupal.org, but the Entity API module adds its own options to the mix. Entity API has great docs on its options as well, but there isn't a lot out there that thoroughly documents them together, or the interaction of their options.
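
As a taste of that interaction, here's a minimal sketch of hook_entity_info() mixing core keys with keys the Entity API module adds ('entity class' and 'module'); the 'mymodule_item' entity type is hypothetical:

<?php
/**
 * Implements hook_entity_info().
 */
function mymodule_entity_info() {
  return array(
    'mymodule_item' => array(
      'label' => t('My item'),
      // Core key; Entity API ships this ready-made controller.
      'controller class' => 'EntityAPIController',
      // Entity API key: the class entities are instantiated from.
      'entity class' => 'Entity',
      // Entity API key: the module defining the entity type.
      'module' => 'mymodule',
      'base table' => 'mymodule_item',
      'entity keys' => array(
        'id' => 'id',
        'label' => 'title',
      ),
    ),
  );
}
?>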

Because of that, I decided to add an appendix to Model Your Data with Drupal that details the most common and popular keys for hook_entity_info() and what the recommended defaults are not just when using Drupal core by itself, but when using Entity API as well. After looking at this, I realized that this would make a pretty awesome cheat sheet, which is why you're receiving this email.

I didn't just copy and paste docs from multiple sources to compile this data. With this cheat sheet you'll save yourself time reading deep into common.inc trying to figure out just what properties are actually required. You'll also be able to see where Entity API module takes care of callbacks, and where it doesn't along with which interfaces and classes to implement and extend for various controllers.

The cheat sheet is available via email by signing up at modelyourdatawithdrupal.com.

Along with this cheat sheet, I have more good news: pre-sales of Model Your Data with Drupal start tomorrow. Sales will open with a special launch discount, so keep an eye out for more emails with information on getting the best deal on the book.

I hope this cheat sheet saves you time and helps you gain a better understanding of not only one of the most powerful Drupal core subsystems, but also one of the most useful contributed modules for advanced site building.

To get your copy of the cheat sheet, head over to modelyourdatawithdrupal.com and fill out the email sign up form.

Mar 26 2015
Mar 26

Dynamic forms without the custom JavaScript

Drupal's Form API helps developers build complex, extendable user input forms with minimal code. One of its most powerful features, though, isn't very well known: the #states system. Form API #states allow us to create form elements that change state (show, hide, enable, disable, etc.) depending on certain conditions—for example, disabling one field based on the contents of another. For most common Form UI tasks, the #states system eliminates the need to write custom JavaScript. It eliminates inconsistencies by translating simple form element properties into standardized JavaScript code.

The syntax of the #states property is:

'#states' => array(
  'STATE' => array(
    JQUERY_SELECTOR => REMOTE_CONDITIONS,
    JQUERY_SELECTOR => REMOTE_CONDITIONS,
    ...
  ),
),

Each form field consists of states and remote conditions that define the properties of the field. A state is a property that can be applied to a form element (e.g. enabled, disabled, checked, unchecked), while a remote condition is the state of an element that triggers a change in a different element. All the available states and remote conditions are defined in drupal_process_states(). Wunderkraut also has a great article quoting our Co-Founder and CEO Jeff Robbins, breaking the states down into two categories: "the ones that trigger change in others" and "the ones that get applied onto elements".

Starting with a simple form, let's look at a few examples.

Here is a simple form to collect some basic personal info from a user.

We would like the name field and anonymous checkbox to work together. We can simply use the invisible state to hide the name field when the anonymous checkbox is checked, and conversely keep the anonymous checkbox unchecked if the name field is filled.

// Hide name field when the anonymous checkbox is checked.
$form['name'] = array(
  '#type' => 'textfield',
  '#title' => t('Name'),
  '#states' => array(
    'invisible' => array(
      ':input[name="anonymous"]' => array('checked' => TRUE),
    ),
  ),
);

// Uncheck anonymous field when the name field is filled.
$form['anonymous'] = array(
  '#type' => 'checkbox',
  '#title' => t('I prefer to remain anonymous'),
  '#states' => array(
    'unchecked' => array(
      ':input[name="name"]' => array('filled' => TRUE),
    ),
  ),
);

We can also define multiple conditions to show the email field only when the name field is filled and email is the preferred method of contact.

// Show the email field when the name field is filled
// and 'email' is selected for the preferred method of contact field
$form['email'] = array(
  '#type' => 'textfield',
  '#title' => t('Email'),
  '#states' => array(
    'visible' => array(
      ':input[name="name"]' => array('filled' => TRUE),
      ':select[name="method"]' => array('value' => 'email'),
    ),
  ),
);

For OR conditions, use a non-associative array by wrapping each group of conditions in the array() function.

// Show the email field when either
// * the name is filled and the method is email,
// * or anonymous is checked and method is email.
$form['email'] = array(
  '#type' => 'textfield',
  '#title' => t('Email'),
  '#states' => array(
    'visible' => array(
      array(
        ':input[name="name"]' => array('filled' => TRUE),
        ':select[name="method"]' => array('value' => 'email'),
      ),
      array(
        ':input[name="anonymous"]' => array('checked' => TRUE),
        ':select[name="method"]' => array('value' => 'email'),
      ),
    ),
  ),
);

There is also a special syntax for the exclusive or (XOR) operator, in which one or the other condition is true, but not both. While the OR operator is implied by using a non-associative array, the XOR operator needs to be explicitly defined with an array item containing the string 'xor'. Let's change the email field to accommodate users that prefer to be anonymous.

// Show the email field when either condition is true, but not both
// * the name is filled and the method is email,
// * anonymous is checked and method is email.
$form['email'] = array(
  '#type' => 'textfield',
  '#title' => t('Email'),
  '#states' => array(
    'visible' => array(
      array(
        ':input[name="name"]' => array('filled' => TRUE),
        ':select[name="method"]' => array('value' => 'email'),
      ),
      'xor',
      array(
        ':input[name="anonymous"]' => array('checked' => TRUE),
        ':select[name="method"]' => array('value' => 'email'),
      ),
    ),
  ),
);

Conclusion

Form API #states is a great way to generate consistent JavaScript code for simple form interactions. Although custom JavaScript might be necessary for more complex requirements, the #states system gives us a very good start in creating centralized and standardized frontend code. I find the #states system generally much cleaner in terms of code maintenance. Compared to custom JavaScript, it's also less prone to bugs and accessibility issues.

Further reading


Mar 26 2015
Mar 26

Dialogs and Modals are an important UX pattern and can be used effectively both to provide information and to handle user interaction.

A key use for Dialogs and Modals in Drupal is to present a new user interaction without losing the original context. For example, when editing Views settings the modal allows the user to be presented with a new interface without navigating away from their original location.

Displaying Modals in Drupal 7

In Drupal 7, there are a number of approaches and modules for displaying and working with modals and dialogs. Views UI is probably the most common place where sitebuilders interact with modals in Drupal 7, closely followed by Panels/Page Manager. Both of these use modals for simplifying the user interface and the lazy-loading of elements when needed, keeping the interface uncluttered until a specific user interaction is required.

In Drupal 6, there were a number of dialog/modal API modules – with varying popularity – including Modal Frame API, Dialog API, and Popups API, but none have even reached an alpha release for Drupal 7, leaving Ctools Modal as the de facto API for Drupal 7.

Common Use, Different Approach

While each Drupal 6 and 7 modal/dialog module has a common use-case and set of requirements, each implements the functionality in its own way. Additionally, many of these use a Not Invented Here paradigm to roll custom solutions to a problem that's already been solved in the wider web community. As a result, many of these solutions are lacking in certain areas, such as accessibility. Also, given the range of different solutions and APIs, DX and consistency suffer.

Drupal 7 already includes the jQuery.UI library which itself contains a Dialog component. The Views modal uses the jQuery.UI Dialog while the Ctools module doesn't – further emphasizing the disconnect in approaches.

With Views coming into core in Drupal 8, we needed a Dialog/Modal API for it to use; this led us to develop the current solution, meaning that core now has an API for this functionality.

In addition, because accessibility is one of the core gates, we needed to solve the problem in a way that didn't exclude screen-reader users, those who prefer a keyboard, and those with JavaScript disabled.

Rather than continue the “not-invented-here” approach, we reached out to the jQuery.UI team and worked with them to solve some accessibility short-comings in the then stable-release. These made it into the jQuery.UI 1.10 release, cross-project collaboration for the win!

Handling non-js Fallbacks

One of the shortcomings of Drupal 7's routing system was that you had to juggle whether the user has JavaScript enabled when serving dialogs/modals. It was common to see URLs containing a nojs slug. For example, in Views UI there were two versions of each URL for JavaScript and non-JavaScript. The markup would render the URLs in the nojs form (e.g., 'http://example.com/admin/structure/views/nojs/display/myview/default/style_plugin'), then the JavaScript would handle fetching the content from 'http://example.com/admin/structure/views/ajax/display/myview/default/style_plugin', with the menu callback at the ajax path returning Ajax commands to display a modal, and the nojs path returning a normal form via a page callback for those with JavaScript disabled.
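
To illustrate the pattern (a rough Drupal 7 sketch using Ctools, with hypothetical module and path names rather than the actual Views UI code):

<?php
/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  // %ctools_js resolves to TRUE for an 'ajax' slug, FALSE for 'nojs'.
  $items['mymodule/%ctools_js/greeting'] = array(
    'page callback' => 'mymodule_greeting_callback',
    'page arguments' => array(1),
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

function mymodule_greeting_callback($js = FALSE) {
  $content = t('Hello from the modal!');
  if (!$js) {
    // No JavaScript: degrade gracefully to a normal page.
    return $content;
  }
  // JavaScript: print the Ajax commands that display a Ctools modal.
  ctools_include('modal');
  ctools_modal_render(t('Greeting'), $content);
}
?>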

Drupal 8's Routing System

Drupal 8's routing system, based on that of Symfony 2, has support for the Accept request header baked into it. This means you can serve two different versions of the same content at any URL, depending on the Accept headers of the incoming request. For example, you could serve an HTML version of a node at node/1 as well as a JSON version, with only the Accept header varying.

This is achieved with a _format entry in your route's requirements. For example:

mymodule.route_html:
  path: '/admin/config/mymodule'
  defaults:
    _title: 'My module'
    _content: '\Drupal\mymodule\Controller\MyModuleController::somePage'
  requirements:
    _format: 'html'
    _access: 'TRUE'

mymodule.route_json:
  path: '/admin/config/mymodule'
  defaults:
    _controller: '\Drupal\mymodule\Controller\MyModuleController::jsonCallback'
  requirements:
    _format: 'json'
    _access: 'TRUE'
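
A minimal sketch of a controller that could back these routes (the class name is the one the YAML above assumes; the method bodies here are purely illustrative):

<?php
namespace Drupal\mymodule\Controller;

use Symfony\Component\HttpFoundation\JsonResponse;

class MyModuleController {

  /**
   * Returns a render array for the HTML version of the page.
   */
  public function somePage() {
    return array('#markup' => t('Hello from mymodule.'));
  }

  /**
   * Returns the same data as JSON when the json _format matches.
   */
  public function jsonCallback() {
    return new JsonResponse(array('message' => 'Hello from mymodule.'));
  }

}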

RouteEnhancers

Another key element of the new Drupal 8 routing system is the concept of RouteEnhancers, which come from the Symfony CMF routing component. They are similar to Drupal 7's hook_menu_alter(), but because they run at request time, rather than only when the router cache is rebuilt, they have the opportunity to essentially re-route an incoming request.

One such enhancer is the ContentControllerEnhancer which handles incoming requests for Ajax, HTML, and dialogs/modals. In the case of Ajax requests, it makes sure the response is routed via the AjaxController. In the case of HTML requests, it sends the request via the HtmlPageController, which is responsible for wrapping the inner-page content in blocks etc. But the behavior we're interested in here is when it routes incoming requests with an Accept header of either application/vnd.drupal-modal or application/vnd.drupal-dialog to the DialogController.

The DialogController

This is the guts of the PHP side of the Dialog API. It handles incoming Dialog requests and returns the response in a format that the JavaScript code running client side then uses to display the dialog or modal.

So how does it work? The ContentControllerEnhancer sends the request via the DialogController in a manner which allows the DialogController to ascertain where the original request would have ended up if it were a standard (HTML) page request. The DialogController then uses this information to get the original content that would have been seen on that page (minus the blocks, etc.) that wrap the inner content on a Drupal page.

The DialogController then creates the necessary AjaxCommand objects for displaying the dialog/modal and returns an AjaxResponse object in a similar fashion to any other AjaxCommand/AjaxResponse. The JavaScript in the client-side code that made the request then executes these commands and the dialog/modal is displayed.

Using the Dialog API

There are two main ways to use the Dialog API: either with a link, or with a form-button.

Simple Link Example

To make a link return the content in a Dialog, all you need to do is add two attributes: the use-ajax class and the appropriate data-accepts attribute, depending on whether you want a modal or a plain dialog. To request a modal, use data-accepts='application/vnd.drupal-modal'. To request a dialog, use data-accepts='application/vnd.drupal-dialog'.

<a class="use-ajax" data-accepts="application/vnd.drupal-modal" href="http://drupalwatchdog.com/volume-4/issue-1/make-mine-modal/some/path">Make mine a modal</a>

Form Example

To use a form button to trigger a dialog, just set up an #ajax property like any other Ajax behavior, add an 'accepts' entry, and provide a callback method that returns a new AjaxResponse containing an OpenDialogCommand or OpenModalDialogCommand.

<?php
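// Assumed use statements for this snippet:
// use Drupal\Core\Ajax\AjaxResponse;
// use Drupal\Core\Ajax\OpenModalDialogCommand;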
/**
 * {@inheritdoc}
 */
public function buildForm(array $form, array &$form_state) {
  // Make the button return results as a modal.
  $form['foo'] = array(
    '#type' => 'submit',
    '#value' => t('Make it a modal!'),
    '#ajax' => array(
      'accepts' => 'application/vnd.drupal-modal',
      'callback' => array($this, 'foo'),
    ),
  );
  return parent::buildForm($form, $form_state);
}
 
/**
 * Ajax callback to display a modal.
 */
public function foo(array &$form, array &$form_state) {
  $content = array(
    'content' => array(
      '#markup' => 'My return',
    ),
  );
  $response = new AjaxResponse();
  $html = drupal_render($content);
  $response->addCommand(new OpenModalDialogCommand('Hi', $html));
  return $response;
}

The resulting modal looks like so:

Modal - before.

And when this issue lands, it will look like so:

Modal - after.

Summary

So that's a quick overview of the Dialog API. I'm looking forward to the possibilities this will open up for Drupal 8 contrib. Particularly for themers, the ability to quickly add two attributes to a link and get the result in a modal is going to make adding dynamic interactions far simpler.

One place where this will make a huge UX improvement is for confirmation forms: clicking the 'delete' link for a piece of content could load the confirmation form in a modal, with no need to redirect the user to a new location.

Bring on Drupal 8!

Image: "222/365 - book in bloom" by orangesparrow is licensed under CC BY-NC-ND 2.0

Mar 26 2015
Mar 26

Submitted by ansondparker on Thu, 03/26/2015 - 12:09

I recently had to add a new content type to an existing tabled view... the thing was that one content type had an extra field to define a link... fine... I figured I could go in and use Views PHP... it's a hacky solution, but a few lines and I'd be out the door... curiosity got the better of me... I know Views PHP is a bit déclassé, so for good measure I googled for a solution and pretty quickly stumbled on Views Conditional... the documentation is clear, but I figure a few screenshots won't hurt, and may encourage the next nerd looking for a better way to set either-ors in their views...

Steps to get some variance in your views

  • Add the conditional view field after adding the fields that you want to show.
  • The UI for the conditional view field is really straightforward – and tokens from previous rows are available... just exclude them from view and use them here... it's absolutely obvious :)
  • Conditional field outputs... Ya! And that's about it... sure – we could've done that a bunch of ways... but with a clean UI like that there's no reason to... and if you really want to, you can have conditions based on the conditions... bonus.
Mar 26 2015
Mar 26

Warning: mildly graphic medical descriptions referenced below...

The Good:
7 years and 6 months in the making, I very much anticipated my 50th Drupalcamp. As seems to happen at least one time each year, I flew straight from one event to another. I had my 2nd speaking gig at SxSw in Austin then flew directly to Chicago for MADcamp. I love the new name: Midwest Area Drupal Camp!

My Drupal 8 Live Demo at SxSw was well received. When I proposed the session, I thought for sure we’d be in a Release Candidate. As it turned out, Drupal 8 was only at Beta 7. I walked through demonstrations of creating content types and views and placing blocks with the new layout system. Matt Cheney joined in for a short but powerful demo of Configuration Management. Over 20 attendees followed along with their Pantheon instances, which were built directly from Drupal 8 HEAD. This gave us FAR FEWER bugs than Beta 7 contained.

My Pantheon training class at MADCamp in Chicago went very well. We were converting configuration to code via Features and committing the code to the repository, then pushing it up the deployment chain. As is always the case, the attendees marveled at how a modern Development Operations workflow could be achieved without being a sysadmin or paying for multiple servers and gluing them together with scripts.

The Bad:
About 9:30 on Saturday 3/14, The Only Real Pi Day Of Our Lives, I was walking in downtown Austin on 6th street and got my left foot hooked into a storm drain as I navigated the curb back up to the sidewalk. I went down FAST and planted on my side with my left arm fully extended. Those nearby heard the THUD and rushed to help. I got right back up. Brushed off some sidewalk debris and marveled that nothing hurt and there was no blood! End of story… Not quite…

The Ugly: [VERY UGLY]
As the goose egg swelled up near my left elbow, I applied ice to it throughout the night. It took on some gnarly colors and would swell back up if the ice wasn’t directly on it.

The next day, still no pain, but it looks horrible. On the phone, my wife insists that I go to an urgent-care facility, so off I go. 3 x-rays later, no breaks or fractures are visible. End of story… Not quite…

Wednesday [4 days after the fall] I wake up to fly out to Chicago. My entire left forearm has taken on every sickly shade of blue, black, and burgundy! By that evening, my hand is swelling up as well.

Thursday I teach the Pantheon class all day and don’t think much about my arm. After all, IT NEVER HURT! But by Thursday night I have TWO purple fingers and a third on the way. I’m afraid to go to bed at 1am, so I head for the Cook County ER in Chicago. They draw blood to test for an infection and x-ray me again. No breaks or fractures. It is explained that I have a lot of blood pooling up beneath my skin and it's going to take some time to heal. They send me home.

Arriving at the hotel about 5:30, I clean up, take 2 Ibuprofen PM, and fall asleep at 6am. The phone wakes me up at 8am. Somehow I left the ringer on AND was able to fumble around with it and answer it. It’s the same Doctor that saw me in the ER. He says my platelet count is slightly elevated and he’s afraid I may actually have an infection. He tells me to get back to the ER pronto! I ask if I can sleep it off first. He says NO.

I’m back at ER within the hour. More blood work, more x-rays, this time including chest, shoulder, arm, hand, and leg. Many hours later [sleeping on and off] the x-rays and blood work look fine. I’m back at the hotel. My wife is in a panic over it all and asks that I catch the next flight home to Indy. They’re all booked so I grab one the next day.

My 50th Drupal Camp included an awesome dinner for the out-of-towners, one full day of training, an amazing pre-camp dinner, 2 ER visits, and only a couple of hours at the actual camp. I stopped in to say HI and BYE. Oh well.. At least dinner that night was amazing Greek food with good friends.

And now…

Hand still swells up badly if not iced. I haven’t been able to use my left hand much for 36 hours. I can’t even make a fist.

The alien coloring of my left forearm is fading slowly. The goose egg by my elbow is as goose as it has ever been. TWELVE DAYS after the fall?!?!?

Doc says, wait it out. The body will heal itself over time.

Here’s to hoping that my next MADCamp is WAY MORE BORING than my 50th DrupalCamp!

Mar 26 2015
Mar 26

Drupal 8 is coming (cue suspenseful music)! One of the HUGE things with Drupal 8 is this term called "headless Drupal". Don't worry, the Drupal drop isn't in any danger of losing precious head real estate; rather, headless Drupal refers to using a different client-side (front-end) framework than the back-end Drupal framework. From what I have read, the goal is to have Drupal become the preferred back-end content management system.

Our curiosity has been piqued, and we have started looking into the various JavaScript frameworks that are available. As of now, the one that is piquing our interest is Angular.js. If you are experimenting, what frameworks do you like? If you don't know where to start, check out the infographic below. Special thanks to Anna Mininkova from Web Design Degree Center for sharing this with us.

Infographic: Choosing a JavaScript Framework. Source: WebDesignDegreeCenter.org

Mar 26 2015
Mar 26

Drupal has all the elements to build a custom content model that can mirror a DITA specialisation. The really exciting thing is that this data model can be built from the UI, without a single line of code. While there are considerable drawbacks to the usability of the resulting interface, the fact that this is a free and open source implementation means that those who have more time than money could use this implementation as a starting point to build a DITA CCMS that accommodates an arbitrary specialisation.

I am really excited about this because for the first time it is now feasible to build a completely free and open source CCMS for DITA, or for any other XML language for that matter. Such a starting point would enable a community of DIY technical writers to bootstrap a full featured open source CCMS. Through small incremental improvements the community could further improve the implementation so that it becomes useful for a broader audience.

Drupal modules

To achieve this I’ve used the following modules:

  • Paragraphs: normally meant to enable rich structured web pages, that don’t rely on a WYSIWYG editor. The Paragraph field lets you define a set of predefined field bundles that can be added at a location in the page. Because you can add a paragraph field to a paragraph bundle, this allows you to nest bundles as deep as you need them to go (this is hard to explain, just watch the screencast if this wasn’t clear).

  • Asset: the Asset module was originally conceived to enable reuse of media assets. But because it is possible to create custom assets with any fields you want, this enables you to define a series of elements that can be added into a CKEditor field, with an arbitrary content architecture. Currently assets display as block-level elements, but I think it should be possible to define inline elements. As a bonus, the Asset module includes assets by reference, which means you can use this to create arbitrary reusable elements. So Drupal can also do transclusion...

This is fairly complex to express in words, so I’ve included a screencast that shows you how the modules work and what you can do with them.

Video demo

[embedded content]

The video shows:

The paragraphs module and how you can use it to:

  • add a paragraph
  • define what bundles to use
  • create a paragraph bundle
  • add fields to a paragraph bundle

The asset module and how you can use it to:

  • define what assets can be used by a filter type
  • create a new asset
  • add fields to an asset

A prototype for the DITA task content model and how you can use it for:

  • obligatory elements
  • optional elements
  • grouping elements under a top level group
  • choosing between 2 optional elements
  • adding any number of optional elements
  • infinite nesting

Output

The content model, while closely resembling the DITA model, will have some redundancy: sometimes you need an additional level in the form to achieve the proper behavior (e.g. when you can choose between 2 optional elements). Also, in some cases it might not be possible to name the label in conformance with DITA standards, because of the label format Drupal expects.

To get to valid output, it would be enough to build an output function that generates an XML file in which each field is encapsulated in a tag pair named after the field. An XSLT transformation is then applied to remove redundant elements and to replace fields that have an incorrect tag with their proper element tag.

Since we don’t need to move elements around, a simple replacement feature should be enough to generate valid DITA XML. It is easy to imagine a visual interface that would enable someone who is not a developer to build this transformation, through a series of replacement pairs, so that you wouldn’t need a developer to build the content model and to generate the proper output.
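
As a rough illustration, an output function along those lines could look something like this in Drupal 7 (the field names, module name, and XSLT stylesheet are all hypothetical):

<?php
/**
 * Renders a node's fields as field-named XML, then cleans it up via XSLT.
 */
function mymodule_render_dita($node) {
  $doc = new DOMDocument('1.0', 'UTF-8');
  $root = $doc->createElement('task');
  $doc->appendChild($root);
  // Wrap each field value in a tag pair named after the field.
  foreach (array('field_title', 'field_context', 'field_steps') as $field_name) {
    $items = field_get_items('node', $node, $field_name);
    if ($items) {
      foreach ($items as $item) {
        $root->appendChild($doc->createElement($field_name, check_plain($item['value'])));
      }
    }
  }
  // Remove redundant wrappers and rename incorrect tags via XSLT.
  $stylesheet = new DOMDocument();
  $stylesheet->load(drupal_get_path('module', 'mymodule') . '/dita-cleanup.xsl');
  $xslt = new XSLTProcessor();
  $xslt->importStylesheet($stylesheet);
  return $xslt->transformToXML($doc);
}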

Future

So what is next? Should we build such a CCMS? Maybe we could make an example implementation for Lightweight DITA? The only part that is currently missing is the output function and its interface. If there is sufficient interest, I will consider asking one of our developers to build the module, and we could build the Lightweight DITA implementation at a code sprint at one of the coming DITA conferences.

Interested in an open source DITA CCMS?

If you would like to use and/or contribute to a free and open source DITA CCMS, fill in our form to help us decide how much time to invest in this project:

Mar 25 2015
Mar 25

Next week, an international conglomeration of awesome folks will convene in Portland, Oregon for a D8 Accelerate-funded sprint on DrupalCI: Modernizing Testbot Initiative.

The main aim of DrupalCI is to rebuild Drupal's testbot infrastructure (currently powered by an aging Drupal 6 site) on industry-standard components such as Jenkins, Docker, and so on, architected to be generally useful outside of Drupal.org. See Nick Schuch's Architecting DrupalCI at DrupalCon Amsterdam blog post for more details.

The goal is to end the sprint with an "MVP" product on production hardware, with integration to Drupal.org, which can be used to demonstrate a full end-to-end D8 core ‘simpletest’ test run request from Drupal.org through to results appearing on the results server.

You can read and subscribe to the sprint hit-list issue to get an idea of who's going to be working on what, and the places where you too can jump in (see the much longer version for more details).

This is a particularly important initiative to help with, since it not only unblocks Drupal 8 from shipping, it also makes available awesome new testing tools for all Drupal.org projects!

Go Team! :)

Mar 25 2015
Mar 25

“Your journey to heaven or hell or oblivion or reincarnation or whatever it is that death holds. … This is the Antechamber of the Mystery…”
Brent Weeks, The Way of Shadows

I suppose this will be the last entry in Aaron Winborn’s blog. This is his dad, Victor Winborn, filling in for him. Aaron entered his final sleep of this life on March 24, 2015. His transition was peaceful, and he was at peace with himself.
Aaron was the oldest of six natural siblings and two step-brothers. He was convinced, of course, that I was doing the father thing all wrong and that he could do far better on his own. He finally got to try to prove it when he gained two daughters of his own. Unfortunately, he was diagnosed with ALS shortly after the youngest was born. We will never know how successful he would have been with teenage daughters, but if the first few years of life with the girls are any indication, he would have been an outstanding father.

His mom, Lynn, no doubt carries the most memories of his infancy and childhood, though there is the interesting story or two that Aaron carried with him to the end. Such as the time his friend, Buddy (remember the My Buddy dolls?) became so dirty that his mom insisted Buddy needed a bath. Aaron was having a panic attack over Buddy drowning in the washer, so his mom invited him to sit in front of our front loader and watch his playmate float past the window in the door of the machine. Things were going great and Aaron was pretty sure his friend must be enjoying the swim, when Buddy suddenly exploded and his innards filled the machine with cotton-like blood. This was such a vivid evisceration that Aaron carried that vision, burned into his retina, to the end.

At twelve, we let him fly to Houston to live for a few months with his aunt and uncle, Diana and Les. He learned lots of useful things there, such as that you can’t just get your hair wet to prove to your more than astute aunt that you washed it. You actually have to put a little of the shampoo on your head so she can smell the strawberry scented perfume in it. His Uncle Les was one of his favorite people in the world. Les is the most captivating professor that either of us ever took a class from. In college, Les’ classes on Vietnam history, Film history, and Black history gave Aaron a yearning to teach.

Aaron had innumerable interests and was talented in most all of them. He left home shortly after high school in order to spend time with Paul Solomon (Fellowship of the Inner Light) at Paul’s retreat, Hearthfire. A year or so later, we learned that he had become personal assistant for Elisabeth Kübler-Ross (On Death and Dying). Then we learned that he had moved to a commune in England, Mickleton House in Mickleton Village, Gloucestershire, UK. While there he worked in construction and kissed a girl for the first time.
When he finally returned home, Aaron and I were able to work together for several years. As we drove from job to job, we talked about books, poetry, religion, history, philosophy, technology, and science. We frequently didn’t agree on many topics, but we had eerily similar interests. That made it both fun and frustrating to carry on extended conversations. Of all my kids, Aaron was at the same time, both the most like me and the least like me. I often wondered how that worked. While he may have inherited a lot from his old dad, he always had a mind of his own.

One of those extended conversations had to do with both the purpose and destiny of the soul. I had homed in on the concept that a person was a soul, consisting of physical body and immaterial spirit. At the time, Aaron had no problem with that concept. Where we departed ways was in the purpose. He felt that we were in this physical realm in order to overcome physicality and to develop the spirit – at all odds with the physical pressures against the concept. My belief was that the spirit came to this world in order to learn to master the physical. The final take of our argument was that since he was the younger and had more years ahead of him than I did, I hoped that he would somehow find the real answer and share it with me. Now that Aaron has passed into or through the “Antechamber of the Mystery”, if he has any consciousness at this point, he knows the answer. Unfortunately, the veil separating us will prevent him from giving me the answer. I’ll have to go find it for myself.

The yearning to teach that his Uncle Les had instilled in him was finally fulfilled when he was offered the opportunity to teach at School in the Community in Carrboro, North Carolina. This is where he learned of his love for Web development. During this period, he attended a retreat at the Mid-Atlantic Buddhist Association outside St. Louis, and managed to fulfill a 30-day vow of silence. Any of you who knew Aaron while he had a voice, know that the experience must have been torture for him. Shortly after that, he met the love of his life, Gwen Pfeifer. They then moved to Willimantic, Connecticut. He taught at Sudbury School about 25 minutes away in Hampton. Hampton has the honor of being the only town in Connecticut without a stoplight. Their oldest daughter, Ashlin, was born while he worked there.

A few short years later, he was able to fulfill his next big dream of professional Web development. He became the first employee of Advomatic and ultimately also became an expert in the Drupal content management platform. After they moved to Harrisburg, Pennsylvania he wrote the book, Drupal Multimedia. In Harrisburg, Ashlin (and now Sabina) started attending The Circle School.

The Circle School community has been the core of the most amazing support group that I have ever heard of, much less experienced. For almost four years, they have offered food, help, work, laughs, and tears to the entire family. The co-captains in the Gwen & Aaron’s Helping Hands group have been relentless in their efforts to keep the little community vibrant, healthy, and ever-busy. Every time I have made the trip up to PA to visit, there have been multiple people dropping by to provide meals, take Ashlin to local events, help Aaron with his computer work, or just to offer hugs and hold hands with him.

I think the most important non-family member in their lives has been Michelle, who started out as a Mother’s Helper, and ended up being, besides Gwen, Aaron’s primary caregiver. She has become such an important part of the family, that little Sabina actually believes she is a family member. Michelle just finished her degree in psychology, though; and she and her husband will be moving closer to his work in Baltimore. She will always be a part of Ashlin’s and Sabina’s hearts, and I suspect will stay close to the girls for many years.

As many of you know, Aaron chose to take the path of coming back to this life in the same body that he inhabited here as Jeffrey Aaron Winborn. In other words, he chose to have his body frozen (cryonics) and hopes to have it restored when science has the knowhow to bring it back from the dead and to heal the affliction that ALS laid upon him.

While he was still at home, Aaron and I went through a period of reading many of Robert Heinlein’s books. One especially notable character of his was a curmudgeon named Lazarus Long. Lazarus was the main character of a number of books. I guess I didn’t realize at the time how much an impression this character made on Aaron. He ended up naming his first dog Lazarus Long Winborn. I just called him Grandpuppy. Aaron has been interested in cryonics for a long time, and had been planning on taking advantage of it even before he learned he had ALS. I have always wondered how much influence the Heinlein books had on this decision.

He leaves behind, his beautiful and loving partner, Gwen, and his two wonderful daughters, Ashlin and Sabina. Besides his core family, he leaves behind parents, siblings, aunts & uncles, cousins, a large local community of friends and caregivers, an extended family of coworkers, professional friends, activist friends, fellow ALS patients and survivors, and many more than I could ever know. There must be hundreds of people who have learned to love and honor him. And now comes the hardest moment a father can ever have when he has to say…

Goodbye, my Son. My friend.

Love,
Dad
