Sep 23 2016

Is creating a website really so hard? This is a question I get from people outside the industry after I tell them what we do at AGILEDROP. My reply is that web development is hard because it is just one form of programming. And programming is hard. Difficult to learn and even more difficult to master.

Why is programming so hard?

When you are developing, you are creating new things, spawning solutions from thin air. Programming is an art. Programming is also inventing, and manuals for inventing cannot exist. Instructions on how to program will never be complete, only indicative and guiding. What you do with programming has not been done before, not in the same way.

Many times, developing complicated software results in problems and frustration. There is a constant need to think in new ways. Problem-solving is the core of the art. As a programmer, you need to accept that you'll never know when and in what form a problem will appear. Reality is not elegant or simple; it is chaotic and unpredictable.

You will find that documentation is wrong. A weird hardware bug can cost you days before you realize it was not your fault. Many times you will spend hours looking for a spelling error that was staring you in the face all along.

To be a programmer, you need to be a person who gets excited about solving hard problems. For example, if you found math to be an interesting subject in school, you will likely have few issues with programming.

So, is web development hard?

Compared to mobile or desktop development, web development is not hard. Most web development today is done on top of open source which makes frameworks, tools, and documentation accessible to everyone.

You also need to learn the basic conceptual foundations. You shouldn't start with something like jQuery or Angular! Focus first on understanding the basic web development models: HTML, form processing, and back-end interaction such as with a database. Know those layers before you get into the advanced stuff. Without an understanding of the foundation, everything will take a lot longer than it needs to.
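As a minimal illustration of those layers (the field name and target URL here are made up), an HTML form hands data to a back-end script, which in turn can talk to a database:

```html
<!-- A hypothetical signup form: the browser POSTs the email field
     to a server-side script, which would then write it to a database. -->
<form action="/subscribe" method="post">
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button type="submit">Subscribe</button>
</form>
```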

Where to start learning?

If you are new to web development, I would advise starting with open source software that has an active community. When I started with Drupal, people in the community enabled me to grow very fast from a beginner to a fully independent developer.

How can my knowledge be tested?

To fully understand how deep an individual's knowledge of web development or programming is, the only route is to look at experience, references, and a test project. Recently I applied to join the Toptal web engineering group, where the candidate selection process is very sophisticated. If you want to understand where you stand with your current level of experience and knowledge, I recommend putting yourself out there.

May 04 2016

We at Liip AG believe that the Migrate API is the best and most efficient way to import data into Drupal. Here are some reasons why you should use Migrate instead of the Feeds module or other custom importer modules:

  • Since Drupal 8, the Migrate API is part of Drupal core.
  • Migrate will be maintained and supported as long as Drupal 8 exists, as it provides the upgrade path from older Drupal versions to Drupal 8.
  • Migrate is sponsored by Acquia and mainly supported by Mike Ryan, a well-known and skilled Drupal developer.
  • Migrate has out-of-the-box support for all important Drupal objects such as nodes, users, taxonomy terms, files, entities, and comments.
  • Migrate has Drush integration that allows you to run import tasks from the command line or via a cron job.
  • Migrate maintains a mapping table, has rollback functionality, and even supports a highwater field, which allows you to import only new or changed datasets.
  • Migrate is well documented and there is an example module.
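In Drupal 8, the highwater behavior from the last point is configured on the source plugin. A hedged sketch (the source plugin id and column name are illustrative; the key itself is `high_water_property`):

```yaml
source:
  plugin: my_csv_source        # illustrative source plugin id
  # On subsequent runs, only rows whose "changed" value is higher
  # than the stored highwater mark are imported.
  high_water_property:
    name: changed
```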

Getting started with Drupal 8 Migrate Module

The Migrate module in Drupal 8 core is only an API; there is no user interface. This makes it difficult for new developers to start with Migrate 8.

I suggest installing the extension modules listed below before you start developing if you want to realize the full potential of Migrate:

Migrate Plus (https://www.drupal.org/project/migrate_plus)

  • Extends the migration framework with groups
  • Delivers a well documented example module

Migrate Tools (https://www.drupal.org/project/migrate_tools)

  • Provides Drush commands for running and managing migrations in Drupal 8

Migration Source Plugins

Installing a fully working Drupal 8 Migrate setup using composer

Instead of downloading and installing all the modules mentioned above one by one, you can use my installation profile based on a composer.json file. Because a lot of modules with specific versions are involved, I have prepared a fully working Migrate example environment for a Liip hackday.

If you want to get started quickly with the Migrate module, head over to my GitHub repository and install Drupal 8 Migrate using Composer and Drush. You just need to follow the instructions in the README.md.
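If you prefer to assemble the environment yourself, a minimal composer.json sketch might look like the following. Note that the version constraints and repository URL are illustrative assumptions, not taken from the repository mentioned above:

```json
{
    "repositories": [
        { "type": "composer", "url": "https://packages.drupal.org/8" }
    ],
    "require": {
        "drupal/core": "~8.1",
        "drupal/migrate_plus": "^2.0",
        "drupal/migrate_tools": "^2.0"
    }
}
```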


Comparing Drupal Migrate 7 with Drupal Migrate 8

Some of you might already have used Migrate 7. A traditional mapping was done in the constructor of a Migration class:

public function __construct($arguments) {
  parent::__construct($arguments);

  $this->description = t('Page Placeholder import');

  // Set up our destination - nodes of type migrate_example_beer.
  $this->destination = new MigrateDestinationNode('page');

  $this->csvFile = DRUPAL_ROOT . '/docs/navigation.csv';

  $this->map = new MigrateSQLMap($this->machineName,
    array(
      'ID' => array(
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => TRUE,
      ),
    ),
    MigrateDestinationNode::getKeySchema()
  );

  $this->source = new MigrateSourceCSV($this->csvFile, array(), array('header_rows' => 1, 'delimiter' => ';'));

  // Force update.
  $this->highwaterField = array();

  // Mapped fields.
  $this->addFieldMapping('title', 'name');
  $this->addFieldMapping('uid')->defaultValue(1);
  $this->addFieldMapping('status')->defaultValue(1);
  $this->addFieldMapping('promote')->defaultValue(0);
  $this->addFieldMapping('sticky')->defaultValue(0);
  $this->addFieldMapping('language')->defaultValue('de');

  // Unmapped destination fields.
  $this->addUnmigratedDestinations(array(
    'body:format', 'changed', 'comment', 'created', 'is_new', 'log',
    'revision', 'revision_uid', 'tnid', 'translate',
  ));
}

In Migrate 8, this format has been replaced with YAML files. The same mapping as above looks like this in Drupal 8:

# Migration configuration
id: page_node
label: Dummy pages
migration_group: liip
source:
  plugin: page_node
  # Full path to the file. Is overridden in my plugin
  path: public://csv/navigation_small.csv
  # The number of rows at the beginning which are not data.
  header_row_count: 1
  # These are the field names from the source file representing the key
  # uniquely identifying each node - they will be stored in the migration
  # map table as columns sourceid1, sourceid2, and sourceid3.
  keys:
    - ID
destination:
  plugin: entity:node
process:
  type:
    plugin: default_value
    default_value: page
  title: name
  uid:
    plugin: default_value
    default_value: 1
  sticky:
    plugin: default_value
    default_value: 0
  status:
    plugin: default_value
    default_value: 1
  promote:
    plugin: default_value
    default_value: 0
  'body/value': body
  'body/summary': excerpt
# Absolutely necessary if you don't want an error
migration_dependencies: {}

Understanding the new mapping with yaml files in Migrate 8

The mapping is quite straightforward.

  • First you have to define your Migrate source. In the example we have used a CSV source plugin (https://www.drupal.org/node/2129649).
  • Then you can map the source fields to a Migrate destination. In our case, we use a node destination (https://www.drupal.org/node/2174881).
  • You can now map all source fields to destination fields; for example, you map a column of the CSV file to the node title field.
  • Every field can be processed and modified before it is passed to the final node field. There are a lot of useful process plugins like “default_value”, “callback” or “skip_on_empty”.
  • You can find a list of all process plugins here: https://www.drupal.org/node/2129651
  • Of course you can easily write your own plugins and use them while migrating data.
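Process plugins can also be chained. As a sketch (the source column name is illustrative), this pipeline skips the row when the name column is empty and trims whitespace before the value reaches the node title:

```yaml
process:
  title:
    # Skip the whole row when the source "name" column is empty.
    - plugin: skip_on_empty
      method: row
      source: name
    # Pass the value through PHP's trim() to strip whitespace.
    - plugin: callback
      callable: trim
```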

Example: Importing a menu tree and creating dummy nodes using Drupal Migrate 8

For demonstration purposes, I created a small module that reads a menu tree from a CSV file and imports it into Drupal.

The module is split into 2 tasks:

  1. Creating a page node for every row in the CSV file
  2. Creating a menu item for every row and attaching it to the correct node

Migrate handles these dependencies in the yaml file:





migration_dependencies:
  required:
    - page_node
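For the second task, the menu item migration has to find the node that was created for the same source row. In early Drupal 8 this is the job of the core `migration` process plugin (later renamed `migration_lookup`); a hedged sketch with illustrative field names:

```yaml
process:
  # Look up the node id that the page_node migration created
  # for this row's ID, using the stored migration map table.
  nid:
    plugin: migration
    migration: page_node
    source: ID
```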

See the full module code here:


Developing your own Drupal 8 migrate modules and fighting caching issues

You have learned that the whole migration mapping is now done in YAML files. But what about writing your own migration YAML files?

Unfortunately, there are some pitfalls for new Drupal 8 developers. Because of Drupal 8's Configuration Management Interface (https://www.drupal.org/documentation/administer/config), all YAML files in the “config/install” directory are only imported when the module is installed.

This is very impractical when you are developing new configuration files. To address this, you can install the “Configuration Development” module (https://www.drupal.org/project/config_devel), which resolves the caching issue by making it possible to import certain YAML files on every request. Unfortunately, Drush commands are not supported yet, so we need to add all YAML files we want to import to a new section in our module.info.yml:

config_devel:
  install:
    - migrate_plus.migration.page_node
    - migrate_plus.migration.menu_item
    - migrate_plus.migration_group.liip

Then we can run the following commands after updating the yml file. This will import the new configuration file into CMI.

drush cdi <module_name>

drush cr

In short:

Always remember: after changing the mapping, you have to import the YAML files and clear the cache before executing the migration again.

Testing and running your migration

If your module is set up correctly, you can run “drush ms” to see if the migration is ready:


Now you can run the migration using

drush mi <migration_name>




If you want to update already imported items, you can use the --update option:


Advanced example importing a JSON source into Drupal nodes

During the last hackday at Liip, I wrote a small module that consumes a JSON source from http://jsonplaceholder.typicode.com, importing some posts and comments into a Drupal 8 website.

The module is split into 3 tasks:

  1. A feature module called “migrate_posts” that installs the node type and fields
  2. A migration importing post nodes
  3. A migration importing comments and attaching them to the already imported nodes

You can find the feature module here:


The migration module itself is here:


Migration Process Plugin

Inside the migration module above you will find a simple process plugin. The subject field of a comment in Drupal only accepts a certain number of characters by default. Therefore I wrote a small process plugin that truncates the incoming subject string:

subject:
  plugin: truncate
  source: name

The process plugin needs an annotation (https://api.drupal.org/api/drupal/core%21core.api.php/group/annotation/8.2.x) to be picked up by the Migrate API. You can later refer to the plugin id in the YAML file:

<?php

namespace Drupal\migrate_json_hackday\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;
use Drupal\Component\Utility\Unicode;

/**
 * @MigrateProcessPlugin(
 *   id = "truncate"
 * )
 */
class Truncate extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    return Unicode::truncate($value, 64);
  }

}

You can see the whole module code under:


Final Words

At the moment, the migration system is under heavy development. A lot of changes were made for Drupal 8.1, and these changes broke a lot of helper modules. In particular, the Migrate UI has not been working under Drupal 8.1 for several months now. You can find more information at https://www.drupal.org/node/2677198.

Nevertheless, Migrate 8 is getting more and more mature and stable, and it's time to learn the new YAML syntax now! If you have any suggestions, just drop a comment.

Apr 05 2016

We’re going to be in New Orleans next month for DrupalCon, will you be? Heather White, Sandy Smith, and I will all be flying down the week of May 9th. Heather helped organize the PHP track for this year’s event and will be helping to make sure everything runs smoothly for the speakers. Sandy will be at our sponsor booth to chat with all of you and show off our magazine and some sample books. I’ll be at our booth with Sandy and also presenting Navigating the PHP Community.

DrupalCon New Orleans Logo

Want to join us? We’re giving away a free ticket to DrupalCon at random. We’ll draw names from all entries on Wednesday, April 13th.

Our Contest is closed.

Can’t make it to Drupalcon? Come meet and hang out with us at php[tek] in May or on php[cruise] this summer. Also, follow us on twitter, @phparch, or subscribe to our mailing list to stay updated.

Oscar still remembers downloading an early version of the Apache HTTP server at the end of 1995, and promptly asking "Ok, what's this good for?" He started learning PHP in 2000 and hasn't stopped since. He's worked with Drupal, WordPress, Zend Framework, and bespoke PHP, to name a few. Follow him on Google+.
Mar 17 2016

This week, we were proud to once again help launch County Health Rankings, a project we have been fortunate to support over seven annual releases.

A collaboration between the Robert Wood Johnson Foundation and the University of Wisconsin Population Health Institute, the Rankings compare counties within each state on more than 30 factors that impact health, including such social determinants as education, jobs, housing, exercise, commuting times, and more.

In honor of the seventh release, and in honor of St. Patrick’s Day – a day upon which Americans are prone to misfortune due to less-than-healthy behaviors – we provide seven lucky reasons we love this year’s Rankings:

1. You Can Compare Counties Across State Lines

Rankings fans have long desired to compare counties in different states. While it would never make sense to compare state-by-state ranks, you can now create head-to-head comparisons on specific measures between any county of your choosing. For example, here we’ve compared several counties named “Orange” including these counties in Florida and California that are home to Disney World and Disneyland. (Might this help answer the age-old question: Are the happiest counties on Earth also the healthiest?)


2. Easy-to-Use Additional Measures

The Rankings provide county-level information on a variety of interesting additional measures, such as residential segregation and health care costs, that do not factor into ranking calculations yet are helpful to gaining a better portrait of a county’s health. These measures can now be directly accessed within any county snapshot. Just browse to your favorite county (here’s mine) and click the plus (+) signs to reveal additional measures.


3. Improved Details About Measures that Affect Health

This year, we’ve created new pages for each ranked and additional measure to help audiences understand more about how factors such as adult obesity, drug overdose deaths, and insufficient sleep affect our health. Previously, these were only described in the context of a given state when viewing data in the application. In practical terms, this means you can now quickly find these measures via the site’s main search function. We also revamped the focus area pages that each measure is related to. For example, here is the overview of tobacco use, which includes a clear description, measurement strategy, references, and a list of relevant policies and programs that can lead to improvements.  


4. Key Findings Report

A beautiful new Key Findings report takes a broader, national perspective on the Rankings. It explains that rural counties have had the highest rates of premature death, lagging behind more urbanized counties.

The Drupal crowd might be interested to know that this was built using the Paragraphs module, which allowed our site editors a fair bit of flexibility in creating longform content on the fly, since their content wasn't approved/finalized until very close to our release date. They did this by adding any number of pre-built components (Paragraph types), like downloadable images, text fields, section headers, and callout boxes, to the report, then rearranging as needed via drag and drop. And because we created this as its own content type, it's also now very easy for editors to go back and create reports from the PDFs of key findings from previous years.


5. Health Gaps Reports

Revealing state-by-state Health Gaps reports explore the factors that cause different health outcomes across each state, and what can be done to address them.


6. Areas of Strength:

When viewing any one of the over 3,000 county snapshots – say, Wayne County, MI, for example – you can now highlight specific areas in which a county is performing well by clicking the checkboxes in the upper right. This complements the “Areas to Explore” toggle we introduced a few years ago.


7. Boozing Discouraged

Before you imbibe on St. Patrick’s Day, you should check out your county’s performance on Alcohol-impaired Driving Deaths and Excessive Drinking.

Seriously, nearly 90,000 deaths are attributed annually to excessive drinking. So wherever you live, take care this St. Patrick’s Day!


Jan 31 2016

Here is a complete guide to getting Drush working on OS X El Capitan.

1) Download the latest stable release using the command below, or browse to github.com/drush-ops/drush/releases.

wget http://files.drush.org/drush.phar

(Or use our upcoming release: wget http://files.drush.org/drush-unstable.phar)

2) Test your install.

php drush.phar core-status

3) Rename the file so you can type `drush` instead of `php drush.phar`. The destination can be anywhere on your $PATH.

chmod +x drush.phar
sudo mv drush.phar /usr/local/bin/drush

4) Enrich the bash startup file with completion and aliases.

drush init

5) Add the following lines to .bashrc. (Check which PHP version you are using!)

export MAMP_PHP=/Applications/MAMP/bin/php/php5.6.10/bin
export PATH=$PATH:/Applications/MAMP/Library/bin
export PHP_OPTIONS='-d memory_limit="512M"'

6) Add the following line to .bash_profile:

if [ -f ~/.bashrc ]; then . ~/.bashrc; fi

That's it. You now have a fully functional Drush on your Mac.

Nov 18 2015

This Thursday is the long-awaited release date for Drupal 8, the newest version of the leading open source CMS. It's packed with big improvements, and the new version represents a big leap forward for the future of the CMS. There is a better mobile experience, a new services layer for delivering “content-as-a-service,” easier deployment management, and, of course, Drupal 8 has made a big architectural shift under the hood to a more object-oriented model that professional developers will love.

But the biggest question everyone is asking about Drupal 8 right now *isn’t* “how great is Drupal 8 going to be?” but rather “should I use Drupal 7 or Drupal 8 for my project right now?”

Here's our big advice: first, don't panic. Organizations should not feel any urgency to move from Drupal 7 to Drupal 8 in the immediate term. Why?

What makes Drupal amazing is its ecosystem of thousands of open source modules that easily extend the “core” functionality of the platform. Looking to integrate your website deeply with Salesforce CRM? There is a module for that. Want sophisticated online community functionality? There is a module for that too. The release of any major version of Drupal requires the maintainers of those modules to update their code, and that will take some time.

During its first 6 to 9 months of existence, Drupal 8's ecosystem of contributed modules will be smaller than Drupal 7's, and those modules, because they are brand new, *will* have issues that require time and effort to patch, debug, and make function. Right now, the amount of time and energy required to do that work is almost impossible to determine accurately – other than that it is certain to be there. By early Q3 of 2016, we expect to have a far more solid understanding of that timeframe – and Drupal 8 and its contributed modules will have had the benefit of a few updates and some real-world use.

If you launch anything but the simplest of sites at any time in 2016 prior to Q3 –

  • It will be less expensive to create on Drupal 7 than on Drupal 8.
  • It will likely not need to be updated to Drupal 8 until 2019/2020 – which is in line with most site redesign cycles.

This is the pattern we've seen with previous major releases of Drupal – Drupal 5 to Drupal 6, Drupal 6 to Drupal 7: sites (of any meaningful complexity) built within the first 9 months of a major Drupal release require more effort to create, and carry higher risks of unforeseen issues, than sites simply built on the previous release.

One caveat: for “simpler” sites, Drupal 8 will be suitable immediately upon its release. Why? Because many of the “key modules” that were previously contributed modules are now part of Drupal 8 core.

So, what does that all mean for my project? Here’s our frank take on when to stick with Drupal 7 and when to make the move to Drupal 8 for your project:

If you need to deploy your site in the first half of 2016, use Drupal 7 unless:

  • You have no (or very limited) integration requirements with CRM, authentication, or other systems. Modules to watch: Salesforce, LDAP, and other mature integration modules.
  • You have no customized publishing workflows. Modules to watch: Workbench and other moderation modules.
  • You do not need sophisticated “community” functionality. Modules to watch: Organic Groups.
  • You do not need drag-and-drop or sophisticated layout manipulation for non-techies on your project. Modules to watch: Panels, IPE, Panelizer.
  • You have significant multilingual needs. Drupal 8 might be a better choice because of the improvements to internationalization and their inclusion in Drupal core.
  • You want to accelerate the development of Drupal 8 contributed modules in any of the areas above.
  • Your site meets the conditions above, and your goal is for your development team to gain experience and insight into the platform.

For more information on taking the stress out of upgrading your website to Drupal 8, check out our December 3rd webinar “Drupal 8 for Decision Makers.” Do you have a project that’s happening in this timeframe and still have questions? Drop us a line!

Don't forget to read our other featured blogs for this week's launch of Drupal 8, “What's Your Drupal Upgrade Path?” and “Upgrading to Drupal 8: A Planning Guide.”

And that’s our Drupal 8 cheat sheet!


Nov 17 2015

Co-written by: Jess Snyder, Senior Manager, Web Systems at WETA
A year ago, Andrew wrote that the Drupal 8 decision was coming and advised folks to plan. Around that time, when he talked to people about their plans, many replied, “my plan so far has been to put my fingers in my ears and say, ‘la, la, la, la…’” Well, it's time to remove your fingers from your ears and kick your strategic planning into high gear. Drupal 8 is officially released this week!

While there's no reason to panic (unless you are on Drupal 6, in which case it is time to get into gear), you do need to plan. Although Drupal 7 will be officially supported for quite some time, developer enthusiasm and attention are already shifting to Drupal 8. While security updates will continue, new features and innovation will be focused on Drupal 8, and in time, fewer and fewer developers will be adept at maintaining or improving a Drupal 7 site.

In short, unless you are an early adopter, our recommendation is to begin planning your upgrade now, so you are prepared to move to Drupal 8 as soon as late 2016. That means you should get your house in order and prepare your budget requests now.

What does a Drupal 8 plan look like?

A solid plan includes a consideration of the following:

  • Context
  • Content
  • Platform
  • Resources
  • Budget

Let’s consider each one in turn.

Step 1: Consider Your Context

Before you even think about software, consider where your organization or project exists in your world.

Is your organization considering a re-branding? Are you planning a new communications campaign? Are you about to seek a new round of funding? All of these questions affect how and when you should move to Drupal 8.

Too often, organizations consider a technical change outside of the broader context of their overall communications plans. Ask yourself: “What do I want my website to achieve?” – in real terms, not simply in terms of what will impress your boss or board. You should have goals that are more mission-focused than simply “make the site easier to use” or “work better on mobile.”

If you are considering updating your project or organization’s visual expression, you should make sure that this is established before you start the project. Drupal 8 has a new Twig-based templating engine, so you will need to re-theme the entire site even if you wish to retain your site’s existing design. This means that fonts, page headings, page footers, color assignments, form stylings, mobile-friendly styles – everything about your site’s overall “look and feel” – will need to be re-created as part of a Drupal 8 migration. Don’t make the mistake of coding in your old style only to have to re-theme it again later.
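To give a flavor of what re-theming means in practice, here is a minimal Twig sketch; the region and variable names are illustrative, not taken from any specific theme:

```twig
{# page.html.twig – markup that previously lived in PHPTemplate
   files must be re-created in Twig syntax like this. #}
<header>{{ page.header }}</header>
<main>
  <h1>{{ node.label }}</h1>
  {{ page.content }}
</main>
```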

Considering your organizational priorities will also make it easier to make the case that the upgrade is necessary. If you are a Drupalist or technical staffer, you already know in your heart that a Drupal 8 upgrade is the right move for your organization. But chances are, you will need to convince non-technical decision-makers who won’t be swayed by all of Drupal 8’s cool new features. Being able to align the technical upgrade with your organization’s overall strategy can help you avoid potential delays or derailments.  

If you do your homework, you will be in a good place to argue for an upgrade.  Here’s a checklist to jump-start your digital project.

Step 2: Catalog Your Content

An upgrade to Drupal 8 also represents an opportunity to re-think your content. When you move to a new house, it’s best practice to go through your belongings and separate them into piles: “keep,” “donate,” “trash,” and “repair.”

Take a similar approach when planning your upgrade project. As my colleague blogged, you need to ask yourself four questions: How much content do I have? How good is it? What’s missing? Who can fix it?

A content audit will provide you a clear view of which content types are in use, which ones are valuable, and which ones could be deprecated. Next, evaluate your content’s quality by checking the data. To determine what’s missing, talk to your stakeholders – department heads, content owners, and perhaps even your audience. Finally, assign missing or poorly-written content to the appropriate experts or copywriters.

Pruning your content  will save time and money during the upgrade. For instance, perhaps your communications department has decided that it no longer needs press releases listed on the site. By eliminating that type of content, that’s one less content type to migrate and one less view to be re-created.

Likewise, ask yourself,  “are my content types still getting the job done?” For example, a process or workflow that was required when you originally launched your site may no longer be current practice. Ask your content editors whether there are pain points when they edit and update your existing site. Is there a way to improve their authoring experience during the upgrade?

This is also an opportunity to break apart a large site into smaller, more manageable chunks. For instance, on Jess’s site, weta.org, she is considering whether she might rebuild the WETA Press Room separately from the main Drupal instance. This will isolate that subsite’s special functionality and give her practice building a Drupal 8 site before tackling the larger project, while at the same time making that larger project less complex.

This is also the time to consider whether there are pieces of functionality that had been incredibly important, but may no longer make sense. For example, there are incredibly complex rules that govern how playlists are displayed on weta.org. Before rebuilding that functionality in Drupal 8, Jess plans to confirm with her legal department  that those rules are still in effect.  If your site has similar bits of legacy functionality that are no longer relevant, put those in the recycle bin. There’s no sense in bringing low-value functionality into your new D8 site.

Step 3: Evaluate Your Platform

On modern smartphones, you can tap an “upgrade” button, set it down, and return to a fresh interface and new features in less than an hour.

Your Drupal 8 upgrade will be nothing like this.

Make no mistake, a move to Drupal 8 is more of a rebuild than a simple upgrade. While your content, users, and other elements can be systematically moved over cleanly, you will still need to re-create your themes, install and configure modules, reconfigure views, and much more. The complexity of an upgrade is determined by the complexity – and quality – of your existing Drupal instance.

In addition, if your site is using any custom programming, those pieces will need to be ported manually to be compatible with Drupal 8’s new API.

Therefore you need to evaluate the state of your specific technical configuration. This will help with your budgeting. Here are some questions to ask:

  • Is your CSS compiled?
  • Which modules are you using? Which ones are now in core? Which ones can be retired?
  • Do changes in best practices necessitate different modules or different approaches?
  • Are you using any custom programming?
  • Is your theme responsive, or will the design need to be adapted to work well within a responsive theme?
  • Are your analytics installed and configured correctly and usefully?
  • Are there any technical integrations between Drupal and other systems that will need to be re-established?

Consider whether your current Drupal solution is weighed down by complex functionality that serves just a portion of your site. You might, for example, have a large number of community-focused modules simply because the site needs to support community functions for a small slice of your audience. Jess’s site supports moderated user authentication due to the requirements of just two small, low-traffic features. As part of the Drupal 8 upgrade, it may make sense to break these features into their own mini-site.

And here’s a more frightening question: Are your security patches current? If not, there is a high likelihood that your site has been compromised. Not only should you take steps to plug those holes immediately, you will want to use the upgrade process to redouble your efforts to keep things locked down.

If you can’t answer these questions on your own, consider getting a Technical Audit by a qualified consultant. Here’s someone who can help audit your site!

Step 4: Consider Your Resources

As we’ve said, it’s important to frame this to your peers as a rebuild, not an upgrade. This is the first step to making sure you have the resources you need.

Depending on your situation, a move to D8 will require the following capabilities (among others):

  • Audience Research
  • Content Strategy
  • Design
  • Copywriting
  • Front-end Development
  • Back-end Development
  • Information Architecture
  • Analytics

Take an honest look at your staff. Consider how many of these capabilities are available to you in house. In addition, think hard about the weight of the task. A team that is capable of supporting and improving an existing solution may not be able to continue that support while taking on the sometimes substantial task of a major upgrade.

If your resources are not sufficient, you will need to find a partner in the form of skilled independent contractors or a trusted agency.

Step 5: Plan Your Budget

Budgets are prepared annually at most organizations, and this is not a small project that can be slipped into another line item. You should be thinking now about whether your D8 upgrade will hit your 2016 budget or your 2017 budget.

Budgeting for a technical project such as this is a challenging, multi-faceted exercise in the best of circumstances. Upgrade projects are even trickier because, at this writing, virtually no one has prior D8 upgrade projects to reference as a baseline. Still, budgets must be planned, so plan we must.

Here are a few things to consider in your budgeting:

  • How many outside consultants will you need to engage and at what levels? (See step 4.)
  • How much strategic work must precede the migration?
  • How much creative work must precede the migration? (Do you have brand and visual identity guidelines?)
  • Which custom applications must be migrated?
  • How many content types must be migrated? The types of content are often more critical than the volume of content. (That is, knowing your site has content types for, say, press releases, blogs, and videos is a more useful indicator than the actual quantity of each type.)
  • Which integrations are needed?
  • Have you budgeted for ongoing security patching?
  • How will the upgrade affect your hosting costs?

And with all these questions, the answers are easier if you are able to break larger projects into smaller ones.

Listening and Learning

Unfortunately, the fingers-in-ears strategy is not a viable solution. The tips in this article provide you a framework for your planning so you can get your organization in a good position to upgrade to Drupal 8 when it makes sense for you. In the meantime, there is a lot we can learn from each other. Keep your eyes and ears open!

If you have further questions about the Drupal 8 release and where your existing site fits into this, please get in touch and we’d be happy to talk it through with you. And for information on how to upgrade from Drupal 6, check out What’s Your Drupal Upgrade Path?

Oct 16 2015

We recently finished building our first Drupal 8 site for a client. So I wanted to ask our team, “What did you like about working in Drupal 8?” These were some of the topics they touched on.

Content editing

Amanda: The UI for editing content is very good, with contextual links much more useful and usable than they were in Drupal 7. Node edit pages are also improved – the sidebar and collapsed options keep everything easily accessible, without getting in the way or creating really long edit pages. And the WYSIWYG is in there by default now, which is a bonus for site owners.

Dave: The newly streamlined content editing form is going to make the lives of content creators much easier. It’s amazing how little things like that can have a big impact. Smaller, resource-constrained organizations don’t want to spend their time doing menial tasks to manage their content; they need to spend their energy on what’s important – crafting content that engages their constituents. Drupal 8 will help them do that.

Responsive Design

Amanda: While there are many contrib modules that are not ready yet, I’ve been impressed by what is now included in core and considered necessary—like breakpoints and responsive images.

Sarah: Also HTML5 markup out-of-the-box makes for a better experience on mobile and easier responsive theming.

Jack: Having Responsive Images in core is a no-brainer; every serious project these days deals with that aspect of site building. In the past, it could get quite complicated mashing a few contrib & custom solutions together to handle dynamic image resizing and resolution. After giving things a spin in D8 and working out the kinks of our first approach, it feels very intuitive and less of an afterthought.

The Developer Experience

Sarah: A fun thing about using Drupal 8 has been having something new to learn. We’ve been working with Drupal 7 for about five years now, and it’s exciting to get a push towards getting up-to-speed with more modern PHP practices.

And long-term maintenance will be made easier by having fewer contributed modules. Having better translation capabilities in core should make it easier to add multilingual support to a site, and result in a better experience for translators.

Amanda: So far I like how core, custom themes and modules, etc., have been restructured; putting core in its own folder discourages hacking, and making a developer’s custom files easier to access makes sense.

Jack: For clients who will be taking our work and building off of it internally post-launch, it will be easier to continue development using the clear implementation examples we’ve set up for them. Trying to dig around in the admin side of D7 to figure out where responsive images were handled was understandably frustrating (for devs and clients alike), and I think things are much clearer now.

Everyone on the team enjoyed working in Drupal 8, and we are really looking forward to our next D8 build. Have you started working in Drupal 8? Let us know in the comments!

Sep 29 2015

While I couldn’t make it to Drupalcon Barcelona last week, I insisted on giving a presentation somewhere. Forum One let me do it as a webinar: Planning and Building Salesforce Integration. There are some truly wonderful tools for integrating Salesforce with Drupal out there, but the tricky part is planning, documenting, and estimating the task.

My webinar presentation doesn’t cover the basics of Salesforce or Drupal. Rather, it is tailored for a certain (basic) level of knowledge about both systems: what they are, how to set them up, and the basic data models. It covers the toolset you need, its strengths and weaknesses, how to plan an integration, and what kind of amazing cool stuff you can do if you’re smart about it.


Salesforce is an extremely powerful CRM platform, and that includes options for external integrations. There are two different APIs available, but basically all you need to do is wrap requests in an OAuth2 session, and query the Salesforce data with their own query language, SOQL.
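To make that concrete, here is a rough sketch of the kind of SOQL string such a request carries. Contact and its fields are standard Salesforce objects, but the query itself is hypothetical – what you actually select depends on your org’s schema:

```php
<?php
// Build a simple SOQL query string by hand (illustrative only; the
// Salesforce Suite normally builds and sends these for you inside an
// OAuth2-authenticated REST request).
$fields = array('Id', 'FirstName', 'LastName', 'Email');
$soql = sprintf(
  'SELECT %s FROM Contact WHERE Email != null LIMIT 25',
  implode(', ', $fields)
);
print $soql . "\n";
```

If you know SQL, SOQL reads naturally; the differences mostly show up around joins and aggregates, which SOQL handles through relationship queries instead.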

That said, I would never send you off to go and write totally custom integration code. There’s already a fantastic suite of modules written for Drupal that provide this base functionality and more: the Salesforce Suite. The Suite is actually a set of 5 modules, providing an OAuth2 wrapper and SOQL query builder, two different ways to connect to Salesforce, an interface to map Salesforce objects to Drupal objects, and separate modules for both pushing data into Salesforce, and pulling data from it. It’s a pretty complete package, and it is an excellent base for even highly-customized integrations.

Out of the box, the Salesforce Suite also does an excellent job of direct, 1:1 mappings between Salesforce Objects and Drupal entities. It schedules and queues synchronizations well, and offers a lot of hooks to help with your custom development. When you’re planning your integration, you have to look out for “red flags” that will help you estimate:

  • Conceptual objects in Drupal that are actually made of two or more Entities, e.g., users with their profile2 objects.
  • Modules that don’t store their data in entities, e.g., the Webform module.
  • A wide variety of different types of fields. Many fields actually need some translation between Drupal and Salesforce, and you will have to provide that yourself.

Very often, custom code is the best way to augment the synchronization that you’ve built in the Drupal UI. The following are some of my favorite examples:

  • Salesforce stores country names in a non-ISO format. If you’re syncing to a Drupal location field, you will have to do a little bit of translating.
  • Salesforce booleans (checkboxes) are stored differently from Drupal booleans. Again, a little bit of translation is required.
  • Salesforce doesn’t have the same validation rules as Drupal, e.g., Salesforce Contacts are not required to have an email address, but Drupal users are.

All this by way of saying that custom code is almost always required for an integration to go smoothly. It doesn’t have to be complicated code, but it is there.
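To give a feel for how small that custom code can be, here is a minimal sketch of the two translations named above. The helper names and the country map are hypothetical – they are mine, not part of the Salesforce Suite – and in a real project you would call them from one of the Suite’s alter hooks during a sync:

```php
<?php
// Hypothetical helpers for the value translations described above; the
// function names and mapping are illustrative, not part of the Suite.

// Salesforce represents booleans as true/false; a Drupal 7 boolean field
// stores 1/0.
function mymodule_sf_bool_to_drupal($value) {
  return $value ? 1 : 0;
}

// Salesforce stores some country names in a non-ISO format; map only the
// values you actually sync, and pass anything unrecognized through.
function mymodule_sf_country_to_iso($name) {
  $map = array(
    'United States' => 'US',
    'United Kingdom' => 'GB',
  );
  return isset($map[$name]) ? $map[$name] : $name;
}

print mymodule_sf_country_to_iso('United Kingdom') . "\n"; // GB
```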

With all this in mind, I recommend planning your integration around use cases and user stories for both systems. We paint a picture of the people who will be using both the Drupal and Salesforce sides of this, and what information they want to have readily available. In most cases it boils down to a long list of objects and fields to integrate. You can use this list to estimate based on the number of different field types, number of 1:many object mappings, etc. You can also use it to consider large-scale architectural options, like the decision to install the Redhen Drupal CRM to map all your data into these entities, and then sync from there.

Another important question (brought up by a participant during the live broadcast) is whether to keep your data customizations in custom Salesforce fields, or in Drupal custom code. Salesforce allows you to create custom fields that are filled automatically according to rules that you configure. So that country name problem I mentioned above could be solved by syncing Drupal with a custom Salesforce field that translates the names for you. That saves you the trouble of coding in Drupal, which otherwise seems like it would be hard to maintain. As a general policy I prefer to keep “code-like” customizations in the site’s codebase, where I can then easily find them in case of a problem or change. It can save serious headaches down the road! For those of you who are more comfortable in Salesforce, however, the same reasoning might push you towards the opposite decision. You should take the route that seems the most maintainable to you, and not worry about what some guy on the Internet said in a blog post. :)

The webinar also includes a quick walkthrough of a simple integration between Drupal and Salesforce, synchronizing users and contacts. We right away run into some of those very predictable problems that our estimation process was designed to root out, and we talk about how to solve them.

My favorite part of the presentation (if I do say so myself!) is the “blue sky” section near the end. Salesforce and Drupal are each incredibly powerful, flexible platforms on their own, and so I really enjoy coming up with cool ways to have them magnify each others’ power. Some fun ones:

  • If you include Salesforce data in Drupal user objects, you can do A/B testing in Drupal based on peoples’ engagement in other channels.
  • You can tag users in your analytics package based on their Salesforce profiles. Imagine a special analytics report of how your donors behave on the site, or simply people who opened the last newsletter, or whatever other user group you can imagine.
  • You can track user engagements from your website in Salesforce. For example, including a user’s comments on blog posts as a part of their Salesforce history.
  • If your site is a user community with points, referrals, and other rewards for interaction, those should definitely be reflected in Salesforce profiles.
  • (My #1 favorite) You can integrate information from social media, email blasts, human contact, and the website to provide little surprises and personalizations for users. Imagine a message like, “we saw you shared our blog post on Facebook – thanks!,” or, “have you checked out the newest version of our product yet?”

We had some great questions come up in the Q&A section, as well. We talked a bit about using machine learning (and Salesforce’s new Predictive Decisions feature) to drive website content decisions, and potential future areas for development in the Salesforce suite around the Salesforce metadata API. Cool stuff, but you’ll have to watch the video to get it.

The Salesforce suite offers an excellent foundation for building Salesforce integrations. As always though, the really exciting stuff is on the cutting edge of what the industry offers. To take advantage of that, you’ll need more than just a small custom module, and you’ll also need serious strategic and developer leadership. I’m looking forward to having that conversation with you soon!

Aug 05 2015

One of the biggest challenges for companies in the Open Source space is making sure they contribute back to the community. Contribution is a core value at Forum One, but that doesn’t mean it’s been easy for us! This year we are testing a new structure for our community contribution. Once a month, we schedule a “sprint day.” All of our technical architects have the time blocked off on their calendars, and everyone who can make it spends a full day working on contributed code.

For our first sprint day we decided to focus on the Pane module, a module that Forum One maintains. This module allows administrators to create custom panes of translatable text or entity references, the content and/or placement of which can be easily moved between environments with the Features module. We spent a few hours in a Hangout, crunching through the active issues and pushing them forward.

Lucas Hedding and Andy Hieb added the ability to set permissions per pane. This is great for when you want to have some content that is editable by site admins, and some you need to lock down. These can be found on the permissions page.

Stephen Lucero got his patches for self-reported bugs reviewed and committed by William Hurley. 2473933 is a little obscure and was pretty tricky to track down! It broke the ability to use a display mode in a “read more” link, and it’s now resolved. The views display mode wasn’t working right either, but is now fixed by 2472553.

Andrew Morton worked on a new, easy-to-understand documentation page, including a full tutorial. We realize site builders can’t always comprehend the dev speak that explains the nuances of this module. The tutorial seeks to help builders breeze through configuration and understand Pane’s benefits.

At the end of the day, we were proud to mark a new release, version 2.7. It’s recommended for use in production sites right away.

We had a great time working together, and it was cool to see how quickly we could make progress when we all hammered on something together. I think we’ll try another “contrib day” like this next month. There’s plenty more to work on if you’re interested in this module. If you have any ideas or features we should add, please let us know!

Visit and use the Pane module today!

Jul 25 2015

I like to be technology/platform agnostic, but for the last couple of years I’ve built everything on top of Drupal. I get this question many times: “Why not use something else?” My answer is usually: “I became so good at using it that it only makes sense to me.”

I tried to come up with some objective reasons to rationalise my future decisions.

1. It’s open source

Software is the bricks and mortar of your business. If you don’t own it, then someone else has control over your startup. Open source also means no up-front cost for licenses. And since so many people know how to work with Drupal, you are not locked in with any one group of developers.

2. Integrates with 3rd party services

Do you really want to spend your time building custom payment, analytics, or notification systems? We live in a time when there is an API service for everything – and a Drupal module for every popular API service. This enables you to build your solutions quicker and cheaper, without developing the integrations yourself.

3. Safe and reliable code

With thousands of people working on the code, Drupal has become one of the biggest open source communities in the world. The biggest challenge has been to ensure the code in all 10,000 modules is secure and reliable. Drupal has a centralised system for modules, and it is very rare to download a module from GitHub or a private website. This enables moderators to control who releases what. There is also a special security team watching over the code.

4. Enterprise oriented software

When I first joined the community, Drupal was compared to WordPress more often than it is today. From one perspective, I would say WP has won the battle against Drupal. On the other hand, Drupal has won the battle of enterprise CMS platforms by far. Why is this important to your startup? It puts you shoulder to shoulder with Twitter, Cisco, Tesla, and many others.

5. It’s a safe investment

There is a 90% chance you will fail. So if you pick a technology that you will invest your time to learn, pick something you can use on your next project. There is also a big demand for Drupal developers out there, if you ever want to get a regular job and take some time off from the startup madness.

Would love to hear from you too, what platforms would you recommend to me, and why?

Jul 10 2015


The Forum One team is thrilled to be heading to the United Nations for this year’s NYC Camp. We’re looking forward to seeing both new and familiar faces, and sharing some of the newest things the team’s been working on in terms of data visualization, Drupal development, and automation.

Here is a look at the sessions we’ll be leading:

Building Realtime Applications with Drupal and Node.js

Friday, July 17th, 11:00am – 11:45am (Location: Conference Room 11)

Call it the “appification” of the web. Users’ expectations are being pushed in the direction of being able to interact with sites and content not by clicking a link or refreshing the page, but by having those changes come directly to them. We’ll be using Drupal, AngularJS, and Sails.js to demonstrate the capabilities of both JavaScript and Node to build interactive applications that synchronize with Drupal through the REST interface, with Drupal serving as the data repository.

Automating Deployments

Saturday, July 18th, 9:00am – 9:45am (Location: Conference Room 8)

That moment when new code makes its way out into the world is the most fragile part of the process. We take for granted all the steps that need to happen to make sure it goes correctly. In this session we’ll be demonstrating technologies to turn deployments from a nail-biting experience into a simple “one click and done.” You’ll see the power of Jenkins to manage the continuous integration process, Capistrano to deploy changes, and how to drive it all from changes in your git repository.

The Drupal 8 Decision: Budgets, Bosses, and [email protected]#$% Standing Between You and the Next World-Class CMS

Saturday, July 18th,  3:30pm – 4:15pm (Location: Conference Room C)

Drupal 8 is coming. When we reach “Issue Queue Zero,” your business or organization needs a sensible strategy for upgrading your sites to D8. There are a variety of questions to consider, including available talent, budgets, goals, and aspirations. This session will be of particular interest to executives, business owners, nonprofit professionals, communications staff, and site managers who face this increasingly pressing decision regarding D8.

D3 Data Visualization: Because your data should tell a story

Saturday, July 18th, 3:30pm – 4:15pm (Location: Conference Room 12)

There’s no escaping the fact that data visualization is hot right now. Everyone wants to tell their data’s story visually, whether it be through a map, chart, or more detailed presentation. The difficulty is that there are so many tools that solve this, each with its own benefits and limitations. We feel D3.js is the most awesome tool for handling this task – it’s the approach we’ve used for sites like the Nation’s Report Card, BlueCross BlueShield of North Carolina, GlobalChange, and others.

Drupal 8 Theming with Twig!

Sunday, July 19th, 9:00am to 3:00pm

Prepare yourself with the skills you’ll need to hit the ground running as a Drupal 8 themer. This training will be a hands-on, interactive workshop where we will build a Drupal 8 theme from the ground up using Drupal 8’s new template engine, Twig. This workshop is intended for Drupal themers and front-end developers. Knowing the basics of git will be helpful but is not necessary. No PHP knowledge is necessary, but a laptop is required. (Note: Twig is being improved in Drupal 8 Core @ http://drupaltwig.org/ and you are all welcome to join in and help make Drupal 8 theming awesome!)

Jun 29 2015

We’re excited for Drupal GovCon, hosted in the DC area July 22nd through the 24th! We can’t wait to spend time with the Drupal4Gov community and meet fellow Drupalers from all over! Forum One will be presenting sessions in all four tracks: Site Building, Business and Strategy, Code & DevOps, and Front-end, User Experience and Design! Check out our sessions to learn more about Drupal 8 and other topics!

Here are our sessions at a glance…

Nervous about providing support for a new Drupal site? A comprehensive audit will prepare you to take on Drupal sites that weren’t built by you. Join this session and learn from Forum One’s John Brandenburg as he reviews the audit checklist our team uses before we take over support work for any Drupal site.

Drupal 8’s getting close to launching – do you feel like you need a crash course in what this means? Join Forum One’s Chaz Chumley as he demystifies Drupal 8 for you and teaches you all that you need to know about the world of developers.

If you’re wondering how to prepare your organization for upgrading your sites to Drupal 8, join WETA’s Jess Snyder, along with Forum One’s Andrew Cohen and Chaz Chumley as they answer questions about the available talent, budgets, goals, and more in regards to Drupal 8.

The building blocks of Drupal have changed, and now is the time to rethink how to build themes in Drupal 8. Join Chaz Chumley as he dissects a theme and exposes the best practices we should all be adopting for Drupal 8.

Drupal 8’s first class REST interface opens up a world of opportunities to build interactive applications. Come learn how to connect a Node application to Drupal to create dynamic updates from Forum One’s William Hurley as he demonstrates the capabilities of both JavaScript and Node.js using Drupal, AngularJS, and Sails.js!

Are you excited to launch your new website, but getting held up by all the steps it takes for your code to make it online? On top of that, each change requires the same long process all over again – what a nail-biting experience! Join William Hurley as he demonstrates the power of Jenkins and Capistrano for managing continuous integration and deployment from your git repository.

If you’re a beginner who has found the Views module confusing, come check out this session and learn important features of this popular module from Leanne Duca and Forum One’s Onaje Johnston. They’ll also highlight some additional modules that extend the power of Views.

Have you ever felt that Panels, Panelizer, and Panopoly were a bit overwhelming? Then come to this session from Forum One’s Keenan Holloway. He will go over the best features of each and why they are invaluable tools. Keenan will also give out a handy cheat sheet to remember it all, so make sure to stop by!

Data visualization is the go-to right now! Maps, charts, interactive presentations – what tools do you use to build your visual data story? We feel that D3.js is the best tool, so come listen to Keenan Holloway explain why you should be using D3, how to use D3’s visualization techniques, and more.

Implementing modular design early on in any Drupal project will improve your team’s workflow and efficiency! Attend our session to learn from our very own Daniel Ferro on how to use styleguide/prototyping tools like Pattern Lab to increase collaboration between designers, themers, developers, and your organization on Drupal projects.

Are you hoping to mentor new contributors? Check out this session where Forum One’s Kalpana Goel and Cathy Theys from BlackMesh will talk about how to integrate mentoring into all the layers of an open source project and how to develop mentoring into a habit. They’ll be using the Drupal community as an example!

If you’re a beginner looking to set up an image gallery, attend this session! Leanne Duca and Onaje Johnston will guide you in how to set up a gallery in Drupal 8 and how to overcome any challenges you may encounter!

Attend this session and learn how to design and theme Drupal sites using Atomic Design and the Drupal 8 CSS architecture guidelines from our very own Dan Mouyard! He’ll go over our Gesso theme and our version of Pattern Lab and how they allow us to quickly design and prototype reusable design components, layouts, and pages.

Can’t make it to all of the sessions? Don’t worry, you’ll be able to catch us outside of our scheduled sessions! If you want to connect, stop by our table or check us out on Twitter (@ForumOne). We can’t wait to see you at DrupalGovCon!

Jun 24 2015

Have a requirement like “I want users to not be able to add, update, delete or view content if some condition is true?” If so, hook_node_access might be all you need. This hook simply allows you to alter access to a node. Now let’s learn how to use it.

How this hook works

Anytime a user accesses a node (to add, update, delete or view it) this hook is called and can be used to alter that access decision. It receives three parameters.

  • $node – Contains information about the node being accessed (or base information, if it is being created)
  • $op – What type of access it is (create, update, delete, view)
  • $account – Information about the user accessing the node

It expects a return value to alter the access: NODE_ACCESS_DENY, NODE_ACCESS_IGNORE, or NODE_ACCESS_ALLOW. Best practice is not to use allow; return either deny or ignore (ignore is the default if nothing is returned). You get far fewer permissions headaches this way.

I use this hook often for customized workflow requirements. The most recent use case was: “I want to deny update access to a node unless the current user is the one selected in a user select field on that node.”

This is all done with one simple hook in a custom module (see below).

NOTE: There are some instances this hook is not called/skipped. Refer to the hook’s link for those cases.

/**
 * Implements hook_node_access().
 *
 * Enforces our access rules for custom workflow target content to force
 * updates only if the user is targeted in the user select field.
 */
function mymodule_node_access($node, $op, $account) {
  // If a node is being updated.
  if ($op == 'update') {
    // If the user select field exists on this node and is not empty.
    if (isset($node->field_my_user_select) && !empty($node->field_my_user_select)) {
      // If the user id in the user select field does not match the current user.
      if ($node->field_my_user_select[LANGUAGE_NONE][0]['target_id'] != $account->uid) {
        // The users are not the same. Deny access to update in this case.
        return NODE_ACCESS_DENY;
      }
    }
  }
  // Else ignore altering access and let other modules decide.
  return NODE_ACCESS_IGNORE;
}

May 29 2015

Recently, I had the opportunity to present my core conversation, Pain Points of Learning and Contributing in the Drupal Community, at DrupalCon Los Angeles.

My co-presenter Frédéric G. Marand and I talked about the disconnect between Drupal core and api.drupal.org, and some of the pain points of contributing and learning in the Drupal community. We also spoke a little bit about the benefits of continuous contribution versus sporadic contribution.

The open mic discussion brought up some interesting issues, and so I have compiled some links to answer questions.

Audience Suggestions and Responses

  • A stable release of Drupal 8 will help people start on client work and support contribution. The Drupal community needs to recognize contribution not just in the form of patches, but also mentoring on IRC during core office hours, credit for code reviewers in the issue queue, and recognition for event organizers; people should also be able to edit their Drupal.org profiles to list their mentors at the end of a sprint.
  • We now have an issue on Drupal.org to allow crediting for code reviewers (and other non-coders) as first-class contributors.
  • Make profiles better on Drupal.org. Here is an issue for that – [Meta] new design for User Profiles.
  • Event organizers could get an icon on their profile page. You can read more on that – Make icons for the items in the list of user contributions to be included on user profiles.
  • Another issue to read – Reduce Novice Contribution differences and consolidate landing pages, content, blocks.
  • Explanations of what needs to be done could be a big time-saver. For Drupal 8 there are pretty clear outlines of what could be done for core.
  • There was a suggestion to provide video and audio documentation instead of just text, walking people through issues. There are four or five companies that make videos and we have core office hours for walking people through the issue.
  • A few people expressed that it’s hard to keep up with IRC and are looking for easier ways to communicate. I have created an issue for that and you can read more here – Evaluate whether to replace Drupal IRC channels with another communication medium.
  • Another audience member suggested that we need to make sure that communications that happen in IRC are summarized and documented on issues, so more people can get familiar with the discussion.
  • There were some suggestions for core mentoring that have been proposed but haven’t panned out, such as Twitter or Hangouts (privacy concerns, less office-friendly).
  • Someone suggested that those who don’t like to get on IRC, can get core updates via email (This week in Drupal Core) which is a weekly-to-monthly update on all the cool happenings in Drupal 8.
  • Users can also subscribe to issue notifications in email on the issues/components they want to follow on Drupal.org.

Overall it was an enlightening core conversation, and it was amazing to hear the community’s pain points and suggestions.

To see more of our discussion watch the presentation and view the slides.


May 21 2015

The Drupalcon song - with actions!

I am never missing the #DrupalCon #prenote again. So brilliant.

— Kelley Curry (@BrightBold) May 12, 2015

DrupalCon always leaves me full of energy, and Amsterdam 2014 was no exception. The three of us – Adam Juran, me, and my wife Bryn – sat together on the short train ride back home to Cologne. Some chit chat and reminiscing quickly led to anticipation of the next DrupalCon, in LA. We were excited about the possibilities of this world-class host city. The home of Hollywood, Venice Beach, and Disneyland sounded like a great destination, but after three years of co-writing the DrupalCon “opening ceremony” with Jam and Robert, we were more excited about the possibilities for the Prenote. We knew we had to up the ante, make something new and different from previous years, and LA seemed like a gold mine of possibilities.

Every DrupalCon, before the keynote from Dries, this small group has staged a “pre-note.” The goal of the prenote is to break the ice, to remind everyone present that Drupal is a friendly, fun, and above all, inclusive community. It’s often themed after the host city: in Munich, Jam and Robert taught everyone how to pour a good Bavarian beer, and brought in a yodeling instructor for a singalong (yodel-along?) at the end. In Portland we held a “weirdest talent” competition, featuring prominent community members juggling and beat boxing. Every year it gets more fun, more engaging, and more entertaining for the audience.

Learning how to pour beer at the Drupalcon Munich prenote, 2012

On that train ride home, we threw around a lot of possibilities. Maybe the prenote could be set on Muscle Beach, with Dries as the aspiring “98 pound weakling.” Or the whole thing could be a joke on a Hollywood party. We briefly considered a reality-TV style “Real coders of Drupalcon” theme, but nobody wanted to sink that low. That’s when the idea struck: we could do it as a Disney musical!

Part of Your World

The Prenote was Jam and Robert’s baby, though. We knew that we would have to have some absolutely knock-down material to convince them of our concept. With beer in hand, the three of us started reworking Part of Your World from The Little Mermaid into the song of a client who is excited about the worst website idea ever.

“I’ve got sliders and icons a-plenty,
I’ve got OG with breadcrumbs galore.
You want five-level dropdowns?
I’ve got twenty!
But who cares? No big deal.
I want more!”

We quickly moved on to the song for the coder who would save the day, You ain’t never had a friend like me from Aladdin. We got halfway through this fun number before we realized that the song titles alone could do a lot of the convincing. Another beer, and we had a list of potential songs. There was so much material just in the song titles, we knew that the music would take center stage.

Some of our favorite titles from this first list were ultimately cut. Maybe someday we’ll flesh them out into full songs for a Drupal party, but in the meantime you can let your imagination run wild. Hakuna Matata from The Lion King was to become We’ll Build it in Drupal! The Frozen parody, Do You Wanna Build a Website, was a big hit, and so was Aladdin’s A Whole New Theme.

We showed our idea to Jam and Robert the first chance we got. They took one look at our list of songs and said the three words we wanted to hear: “run with it.”

You Ain’t Never had a Friend Like Me

Forum One’s Adam Juran and Campbell Vertesi as “Themer” and “Coder” at the Drupalcon Austin prenote, 2014

We divided up responsibility for  the remainder of the songs and started to experiment with the script. What kind of story could we wrap around these crazy songs? How much time did we really have, and could we do all this music? We were all absorbed in our normal work, but every chance we got, the group of us would get together to throw ideas around. I don’t think I’ve ever laughed as much as while we wrote some of these songs.

Writing parody lyrics is entertaining on your own, but as a duo it’s a laugh riot.  More than once we checked the Drupal song lyrics project for inspiration. We riffed on ideas and tried different rhyme schemes until things seemed to just “fit.”

Heigh Ho, Heigh Ho

In the last few weeks leading up to DrupalCon, Adam and I met two or three times a week for long sessions, brainstorming new lyrics. We powered through writing the script around the whole thing, and started to address the logistical problems of backtracks, props, and costumes as well.

via Mendel at Drupalcon LA. Ronai Brumett as the perfect hipster Ariel

Finally we set about casting the different songs. Adam and I had always wanted to sing the Agony duet from Into the Woods, so that one was easy. We had a tentative list of who we wanted in the other songs, but we had no idea who would be willing. All of a sudden the whole endeavor looked tenuous again. Why did we think Dries would be OK with making a joke about Drupal 8 crashing all the time? Would Jeremy Thorson (maintainer of the test infrastructure on Drupal.org) even be interested in getting up on stage and singing about testing? We realized that we’d never heard these people sing karaoke, much less in front of thousands of people!

One by one we reached out to the performers and got their approval. Some of them were more enthusiastic than others. Dries replied with “OK, I trust you guys,” while Larry Garfield and Jeremy Thorson insisted on rewriting some of their lyrics and even adding verses! The day before the show, Larry was disappointed that we couldn’t find giant foam lobster claws for his version of Under the Sea from the Little Mermaid. Aaron Porter bought a genie costume and offered to douse himself in blue facepaint for his role, and Ronai Brumett spent a weekend building the perfect “hipster Ariel” costume.

When You Wish Upon a Star

On the Monday of DrupalCon, the day before the show, the cast assembled for the first time for their only rehearsal together. I arrived a few minutes late, direct from a costume shop on Hollywood Boulevard. Jam had built karaoke tracks on his laptop, and Robert had put together a prompter for the script, so the group huddled around the two laptops and tried to work through the whole show.

Via Mendel at Drupalcon LA. The prenote cast rehearses. From left to right, Larry Garfield, Aaron Porter, Adam Juran, Jeffrey McGuire, Campbell Vertesi.

The rehearsal showed us what a hit we had created. The performers had embraced the motto: “if you can’t sing it, perform it” and they started to feed off each other’s energy. We all laughed at Ronai’s dramatic rendition of Part of My Site, and the Agony Duet raised the energy even further. It turned out that Dries had never heard When You Wish Upon a Star from Pinocchio before, but he was willing to learn as long as he could have someone to sing along with him!

via Mendel at Drupalcon LA. Aaron Porter codes with his butt – on Dries Buytaert’s laptop!

The rehearsal really started to hit its stride when Aaron delivered You Ain’t Never had a Dev Like Me. Aaron had never sung in public before, and we could tell he was nervous. Then the backtrack started playing with its blaring horns, and he came alive. It’s a difficult piece, with lots of fast-moving text and a rhythm that can be hard to catch. Aaron launched into it with gusto. He had us in stitches when he shouted “can your friends do this!” and grabbed Dries’ laptop to start typing with his butt. When he nailed the high note at the end with a huge grin on his face, it was a deciding moment for the group.

From that moment on we were on a ride, and we knew it. Simpletest (to the tune of Be Our Guest from Beauty and the Beast) turned out to be a laugh riot, and Jeremy led us naturally into a kick line for the grand finale. We cheered Larry’s choreography skills during the dance break of RTBC, and Ben Finklea was a natural (as ever) at leading us all in Commit, to the tune of Heigh Ho from Snow White.

Forum One UX lead Kristina Bjoran had protested more than anyone about having to sing, but the moment she started with our version of Let it Go from Frozen, we were caught up in the feeling of it. I don’t think anyone expected the goosebumps that happened when we sang that chorus together, but we all appreciated what it meant.

Let it Go

The morning of the show saw the whole cast up bright and early. Though we joked about doing a round of shots before going on stage, no one seemed nervous. In fact we spent most of the setup time laughing at one another. Larry discovered that he has great legs for red tights. Aaron got blue face paint everywhere. We cheered at Jam and Robert’s Mickey and Minnie costumes, and laughed at Ronai’s perfect Hipster Ariel.

Some of us had last minute changes to make: Jeremy spent his time crafting oversized cuffs for his costume. I had forgotten the belt to my ninja outfit, so we made one out of duct tape. Kristina discovered that her Elsa costume limited her movement too much for the choreography she had planned. Dries was the only one who seemed nervous to me – this guy who has spoken in public countless times was afraid of a little Disney! We sang through the song together one last time, and it was time to go on.

via Mendel at Drupalcon LA. Jeremy Thorson leads the “Simpletest” song. Behind him, from left: Campbell Vertesi, Ronai Brumett, Adam Juran, Aaron Porter, Dries Buytaert

Everyone knows the rest – or at least, you can see it on YouTube. What you probably don’t know is how hard we all laughed as we watched the show backstage. Even knowing every word, the energy from the audience was infectious. In the end, there’s nothing quite like standing in front of three thousand people and shouting together: “we come for code, but we stay for community!”


May 19 2015

Here’s a tangent:

Let’s say you need to randomly generate a series of practice exam questions. You have a bunch of homework assignments, lab questions and midterms, all of which are numbered in a standard way so that you can sample from them.

Here’s a simple R script to run those samples and generate a practice exam that consists of references to the assignments and their original numbers.

## exam prep script

## Build the homework data: 12 homework sets, 17 problems each.
## expand.grid() generates every (problem set, problem number) pair.
hw <- expand.grid(problm_set = paste0("hw", 1:12), problm = 1:17)

## Build the exam data: 8 exams, 22 problems each.
exam <- expand.grid(problm_set = paste0("exam", 1:8), problm = 1:22)

## Create the practice exam: pool every problem, then sample 22 of them.
prctce <- rbind(exam, hw)

prctce_test <- prctce[sample(1:nrow(prctce), size = 22), ]

row.names(prctce_test) <- 1:nrow(prctce_test)

## Print the result so `Rscript examprep.R > practice.txt` captures it.
print(prctce_test)


As the last line indicates, the final step of the script is to output a prctce_test … that will be randomly generated each time the script is run, but may include duplicates over time.

output from r script

Sure. Fine. Whatever.

There’s probably a way to do this with Drupal … or with Excel … or with a pencil and paper … so why use R?

Two reasons: 1) using R to learn R and 2) scripting this simulation lets you automate things more easily.

In particular, you can use something like BASH to execute the script n number of times.

for n in {1..10}; do Rscript examprep.R > "YOUR_PATH_HERE/practice${n}.txt"; done

That will give you 10 practice test .txt files, each named with an incrementing number, with just one command. And of course that loop could be written into a shell script that’s automated or processed on a scheduler.

automatically generated practice tests with bash and r script

Sure. Fine. Whatever.

OK. While this is indeed a fairly underwhelming example, the potential here is kind of interesting. Our next step is to investigate using Drupal Rules to initiate a BASH script that in turn executes an algorithm written in R. The plan is to also use Drupal as the UI for entering the data to be processed in the R script.

Will document that here if/when that project comes together.

May 15 2015

A number of us from Forum One are sticking around for Friday’s sprints, but that’s a wrap on the third day of DrupalCon and the conference proper!

Wednesday and Thursday were chock-full of great sessions, BoFs, and all the small spontaneous meetings and conversations that make DrupalCons so fruitful, exhausting and energizing.


Forum One gave three sessions on Wednesday. John Brandenburg presented Maximizing Site Speed with Mercy Corps, a case study of our work on www.mercycorps.org focusing on performance optimization. Kalpana Goel of Forum One and Frédéric G. Marand presented Pain points of learning and contributing in the Drupal community, a session on how to encourage and better facilitate code contributions to Drupal from community members. And finally Forum One’s Andrew Morton presented Content After Launch: Preparing a Monkey for Space, a survey of content considerations for project success before, during, and after the website build process. The other highlight from my perspective on Wednesday was a great talk by Wim Leers and Fabian Franz on improvements to Drupal performance/speed, and how to make your Drupal sites fly.


On Thursday, Daniel Ferro and Dan Mouyard rounded out the seven Forum One sessions with their excellent presentation, To the Pattern Lab! Collaboration Using Modular Design Principles. The session describes our usage of Pattern Lab at Forum One to improve project workflow and collaboration — between visual designers, front- and back-end developers, and clients — an approach that has eased a lot of friction on our project teams. I’m particularly excited about how it’s allowed our front-end developers to get hacking much earlier in the project lifecycle. (We were glad to see the presentation get a shout out from Brad Frost, one of the Pattern Lab creators.)

Other highlights for me on Thursday were the beloved Q&A with Dries and friends and sitting down over lunch with other Pacific Northwest Drupalers to make some important decisions about the PNW Drupal Summit coming to Seattle this fall.

Next Stops for DrupalCon

In addition to looking ahead to DrupalCon Barcelona, the closing session revealed the exciting news that DrupalCon will be landing in Mumbai next year!

#DrupalCon is coming to Mumbai! Plus other photos from todays closing session https://t.co/Y3vWCQCSTu? pic.twitter.com/zEt4Y6VLxS

— DrupalCon LosAngeles (@DrupalConNA) May 15, 2015

And the always anticipated announcement of the next DrupalCon North America location… New Orleans!

And the next North American #DrupalCon will be…… pic.twitter.com/AXiFxv3gfW

— DrupalCon LosAngeles (@DrupalConNA) May 14, 2015

That news was ushered in soulfully by these gentlemen, Big Easy style, pouring out from the keynote hall into the convention center lobby.

Great way to announce #DrupalCon New Orleans! #DrupalConLA pic.twitter.com/3cRmV8jI1F

— Andy Hieb (@AndyHieb) May 14, 2015

And to finish off the day properly, many of us hooted and hollered at Drupal trivia night, MC’d by none other than Jeff Eaton.

Another fantastic #DrupalCon trivia night in progress… Woo! pic.twitter.com/AzavA2AFXi

— Jeff Eaton (@eaton) May 15, 2015

A great con was had by all of us here at Forum One… On to the sprints!


May 15 2015

Last Friday, we attended the Digital Innovation Hack-a-Thon hosted by the GSA… and we won. The federal tech website FCW even wrote an article about it.

Our team, made up of designers and developers from Forum One, along with Booz Allen Hamilton, Avar Consulting, and ICF International, worked on a solution for IAE’s Vendor Dashboard for Contracting Officers. We were tasked with creating a vendor dashboard for displaying GSA data that would enable procurement officers to quickly and easily search and identify potential vendors that have small-business or minority-owned status, search by other special categories, and view vendors’ history.

How did we tackle the problem?

Our team initially split into smaller working groups. The first group performed a quick discovery session, talking with the primary stakeholder and even reaching out to some of the Contracting Officers we work with regularly. They identified pain points and looked at other systems, which we ended up drawing on in our solution. As this group defined requirements, the second group created wireframes. We even took some time to perform quick usability testing with our stakeholders, and iterate on our initial concept until it was time to present.

The other group dove into development. We carefully evaluated the data available from the API to understand the overlap and develop a data architecture. Using that data map, we decided to create a listing of contracts and ways to display an individual contract. We then expanded it to include alternative ways of comparing and segmenting contracts using other supporting data. Drupal did very well pulling in the data and allowed us to leverage its data listings and displays tools. Most developers see Drupal as a powerful albeit time intensive building tool, but it worked very well in this time critical environment.

Our two groups rejoined frequently to keep everyone on the same page and make sure our solution was viable.

How much could we possibly accomplish in 6 hours?

More than you might think. Our solution presented the content in an organized, digestible way that allowed contracting officers to search and sort through information quickly and easily within one system. We created wireframes to illustrate our solution for the judges and stakeholders. We also stood up a Drupal site to house the data and explained the technical architecture behind our solution. Unfortunately, we didn’t have a front-end developer participating in the hack-a-thon, so we weren’t able to create a user interface, but our wireframes describe what the UI should eventually look like.

Some of us even took a quick break to catch a glimpse of the Arsenal of Democracy World War II Victory Capitol Flyover from the roof. It was also broadcast on the projectors in the conference room.

Arsenal of Democracy Flyover of the National Mall

What did we learn?

It’s interesting to see how others break down complex problems and iterate on solutions, especially when a solution includes additional requirements. Our solution was more complex than some of the other more polished data visualizations, but we won the challenge in part because of the strategy behind our solution.

We’re excited to see what GSA develops as an MVP, and we’ll be keeping our ears open for the next opportunity to attend a hack-a-thon with GSA.

Finally, a big shout out to our teammates!

  • Mary C. J. Schwarz, Vice President at ICF International
  • Gita Pabla, Senior Digital Designer at Booz Allen Hamilton
  • Eugene Raether, IT Consultant at Booz Allen Hamilton
  • Robert Barrett, Technical Architect at Avar Consulting


May 14 2015

My colleague Adam Juran and I just finished with our session, Zero to MVP in 40 minutes: Coder and Themer Get Rich Quick in Silicon Valley, at DrupalCon LA. This one was a real journey to prepare, and through it we learned a lot of dirty truths about Drupal 8, Javascript frameworks, and the use cases where the two make sense together.

The live coding challenge in our session proposal seemed simple: create a web application that ingests content from an external API, performs content management tasks (data modelling, relationships, etc.) through the Drupal 8 interface, and delivers it all to an AngularJS front-end. This is exactly the “headless Drupal” stuff that everyone has been so excited about for the last year, so doing it in a 40 minute head-to-head code battle seemed like an entertaining session.

Ingesting content from an external API

The first hard truth we discovered was the limitations of the still-nascent Drupal 8. Every monthly release of a new Drupal 8 beta includes a series of “change records,” defining all the system-wide changes that will have to be accounted for everywhere else. For example, one change record notes that a variable we often use in Drupal forms is now a different kind of object. This change breaks every single form, everywhere in Drupal.

The frequency of this kind of change record is a problem for anyone who tries to maintain a contributed module. No one can keep up with their code breaking every month, so most don’t. The module works when they publish it as “stable”, but two or three months later, it’s fundamentally broken. Changes like this currently happen 10-15 times every month. Every module we were hoping to use as a part of this requirement – Twitter, OAuth, Facebook – was broken when we started testing.

We finally settled on using Drupal’s robust Migrate module to bring in external content. After all, Drupal 7 Migrate can import content from almost any format! Turns out that this isn’t the case with Drupal 8 core’s Migrate module. It’s limited to the basic framework you need for all migrations. Importers for various file types and sources simply haven’t been written yet.

No matter which direction we turned, we were faced with the fact that Drupal 8 needed work to perform the first requirement in our challenge. We chose to create a CSV Source plugin ourselves (with much help from mikeryan and chx) just to be able to meet this requirement. This was not something we could show in the presentation; it was only a prerequisite. Phew!

Displaying it All in Angular

Building an AngularJS based front end for this presentation involved making decisions about architecture, which ended up as the critical focus of our talk. AngularJS is a complete framework, which normally handles the entire application: data ingestion, manipulation, and display. Why would you stick Drupal in there? And what would an Angular application look like architecturally, with Drupal 8 inside?

You always have a choice of what to do and where to do it. Either system can ingest data, and either system can do data manipulation. Your decision should be based on which tool does each job the best, in your particular use case: a catch-all phrase that includes factors like scalability and depth of functionality, but also subtler elements like the expertise of your team. If you have a shop full of AngularJS people and a simple use case, you should probably build the entire app in Angular!

Given that perspective, Drupal really stands out as a data ingestion and processing engine. Even when you have to write a new Migration source plugin, the Entity model, Drupal’s “plug-ability”, and Views make data crunching extremely easy. This is a strong contrast to data work in Angular, where you have to write everything from scratch.

We feel that the best combination of Drupal and Angular is with Drupal ingesting content, manipulating it, and spitting it out in a ready-to-go format for AngularJS to consume. This limits the Angular application to its strengths: layout, with data from a REST back-end, and only simple logic.
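To make that division of labor concrete, here is a minimal sketch of how thin the client side can stay when Drupal hands it ready-to-go JSON. The endpoint path and field names below are hypothetical examples, not taken from our session code:

```javascript
// Drupal ingests and manipulates the data; a Views display outputs it as
// JSON; the AngularJS side only reshapes rows for its templates.
//
// Shape of a hypothetical Views JSON row:
//   { "title": "...", "field_date": "...", "field_summary": "..." }

function toViewModel(rows) {
  // Keep client-side logic deliberately dumb: rename fields for the
  // template layer and nothing else.
  return rows.map(function (row) {
    return {
      heading: row.title,
      date: row.field_date,
      summary: row.field_summary
    };
  });
}

// In an AngularJS controller this would hang off $http, roughly:
//   $http.get('/api/articles').then(function (res) {
//     $scope.articles = toViewModel(res.data);
//   });
```

If the client-side mapping ever grows much beyond a rename, that’s usually a sign the logic belongs back in Drupal, in a View or a processing hook.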

The Session

[embedded content]

In the session, we talked a bit about the larger concepts involved, and moved fairly quickly into the technical demonstration. First, Adam demonstrated the flexibility of the decoupled front-end, using bower libraries to build an attractive layout without writing a single line of custom CSS.  Then I demonstrated importing data from CSV sources into Drupal 8, along with the simplicity of configuring Drupal Views to output JSON. Taken together, the videos are 37 minutes long – not bad for a totally custom RESTful endpoint and a nice looking front-end!

Here is Adam’s screencast, showing off the power of the bootstrap-material-design library to build a good looking site without any custom CSS at all:

Here is my screencast, demonstrating how easy it is to create Migrate module importers and REST exports in Drupal 8.

And here is the final screencast, quickly showing the changes we made in AngularJS to have it call the two Drupal Services.

Want to learn some of Forum One’s Drupal development secrets? Check out our other Drupalcon blog posts, or visit our booth (#107) and talk with our tech wizards in person!


May 13 2015

DrupalCon day one was a great start to this year’s North American Drupal conference! Forum One is well represented this year, giving seven presentations this week.

The Con started off with the traditional “pre-note” show in the early morning. The pre-note is a session designed to get people out of their seats and into the feeling of this big, welcoming community, led as ever by Jam McGuire and Robert Douglass.

via Mendel: Forum One’s Kristina Bjoran leads the Prenote finale. From left: Jeffrey McGuire, Larry Garfield, Campbell Vertesi, Adam Juran, Dries Buytaert, Ronai Brumett, Aaron Porter

Forum One’s Adam Juran and I have been putting these together for a few years now, and for DrupalCon LA we wrote a Disney musical about Drupal. From Ariel’s song “Part of My Site” to our own version of Into the Woods’ “Agony,” the show got a lot of laughs with its parody lyrics. One high point was Dries, the founder of Drupal, entering the stage with top hat and cane to perform “When You Install Drupal 8” to the tune of “When You Wish Upon a Star” – ending prematurely with a fatal error! This was followed by “Someday D8 Will Come”, and a lot of laughs. The prenote ended with Forum One’s Kristina Bjoran leading the audience in a DrupalCon version of “Let It Go” from Frozen. After all the laughter, it was a nice moment to hear the audience cheer in unison: “we come for code, but we stay for community.”

[embedded content]

Dries’ keynote came next. This year he didn’t talk so much about the great new features of Drupal 8 – we’ve been talking about that for four years now! Instead, he focused on the history of Drupal as a platform, starting in his dorm room in 2001. Once we got to the present day, he switched to the coming challenges in the web sector. The Internet is becoming less and less about browser-based interaction, according to Dries. Increasingly people access data using tailored apps or devices, which means there is a great need for a data back-end like Drupal that can provide for all of these end points. Consumers demand more and more customized and predictive content, and Drupal 8 is a strong platform for that capability.

The day was filled with interesting sessions, but a few stuck out to us. There was Amitai and Josh Koenig’s Decoupled Drupal talk, where they demonstrated an automatic headless Drupal site generator. There were a couple of interesting sessions about long-form content: the technical side by Murray Woodman and Jeff Eaton, and the strategic side by Forum One’s Kristina Bjoran and Courtney Clark. Courtney had a double-header day: she also presented about Forum One’s work on content strategy for Drupal.org. I got to present with Adam Juran and Jam McGuire about headless Drupal, building a simple Drupal 8 backed AngularJS demonstration in 40 minutes. We learned a lot about various prototyping tools, and were surprised to find no clear consensus on a standard toolkit for this important problem. Forum One team members were asked a lot of questions about how we use Pattern Lab in this space. Forum One’s Daniel Ferro and Dan Mouyard have a session about Pattern Lab on Thursday.

[embedded content]

[embedded content]

[embedded content]

Be sure to keep checking back for more of our takeaways and recaps of DrupalCon LA.

via Mendel: The cast of the prenote, from top left: Dries Buytaert, Aaron Porter, Ben Finklea, Robert Douglass, Adam Juran, Campbell Vertesi, Jeremy Thorson, Kristina Bjoran, Ronai Brumett, Larry Garfield, Jeffrey McGuire


May 08 2015


So you’re going to Drupalcon? Looking for a new job? Here are some quick tips to up your chances of finding a great new gig!

Do your homework. Take some time to check out which companies will have a booth and which will have employees presenting a session or running a BoF. Also, be sure to check out the Drupal.org job board. Looking into this ahead of time will help you get a game plan together. Maybe a company you admire doesn’t have a booth but their CTO is presenting a session – attend it and strike up a conversation afterward about any open positions.

Take this time to dig past the job description. You’re going to have a chance to interact with current employees of the companies you’re interested in at Drupalcon, so take this time to ask about the things that aren’t necessarily in a job description. Did their company pay their way to Drupalcon, and what else can they tell you about professional development opportunities? Do they have a good work/life balance? People tend to be more open and friendly at events like this, so find out more about what is important to you! You might be able to get some great insight into what their company culture is like. Perhaps your dream company doesn’t have an open position that is a good fit for you – ask what their future growth plans are; maybe there is a role opening up soon that could work for you!

Come prepared/follow up. Be sure to have more than enough copies of your resume or business cards. Be sure to include your Drupal.org ID on your resume or perhaps on the back of your card – this is especially helpful for development folks. Also, don’t forget to get the business card of the people you talk to about the companies/roles you’re targeting. You want to be sure you follow up with an email after Drupalcon to strengthen the connections you made and hopefully get referred through that employee for an open role – that is a much stronger application than if you’re a general applicant.

And just in case you’re interested in Forum One, check out our open tech positions!

Outside of tech we’re looking for…

We’re also presenting several sessions! Come meet some of our awesome team members.

If you’re looking for a new gig, come by and meet the team at Booth #107. While you’re at the booth, take a minute to vote on your favorite Drupal topic, and be sure to check out the results on Twitter (@ForumOne)!

May 08 2015

We’re excited to announce this talk, Content After Launch – Preparing a Monkey for Space, on Wednesday, May 13, 2015 from 5pm to 6pm at DrupalCon LA!

So what’s it all about? Well, coupled with a silly metaphor, I’m going to be talking about what happens to content during various stages of a website build, from the initial kickoff, through the production, and well after launch. The talk will touch on:

  • how all team members can get involved in the success of a launched website.
  • setting and managing expectations for what it takes to run a site post-launch.
  • everything you might have missed while focused on designing and building the website.

Come for the metaphor, stay for the juicy takeaways! Spoiler alert – there will be an abundance of monkey photos.

May 07 2015

Beakers and science equipment. The beakers are filled with patterns instead of plain liquids.

Come check out our presentation at Drupalcon 2015 in Los Angeles about modular design on Thursday, May 14, 2015 at 1:00 – 2:00pm PST.

You’ll learn how to use styleguide/prototyping tools like Pattern Lab to increase collaboration between designers, themers, developers, and clients on Drupal projects. A focus on modular design early on in a Drupal project improves workflow and efficiency for the whole team!

After applying modular design principles to your design workflow, you are guaranteed* to have:

  • Shinier, more polished sites: You’ll improve collaboration between themers and designers without relying so much on static Photoshop comps, dramatically improving the quality and level of detail of the end product.
  • Happier clients: Clients will be able to see functional visual designs earlier in the project and be able to play with the site in an easy to use interface.
  • Happier developers: Developers can concentrate on the hard work of building the site while themers and designers concentrate on the visual design.
  • Project managers overcome with joy: Sites will be more easily themed, front-end bugs will be caught earlier, clients can see progress sooner, designers will be less bogged down in Photoshop iterations, and projects will be more successful.

We hope to see you there. It should be a lot of fun and we are genuinely interested in hearing your thoughts. If you are impatient and want to learn more about Pattern Lab and design patterns in general, take a look at this blog post by Brad Frost on designing pattern flexibility.

* not an actual guarantee. Results may vary. Consult your doctor if your clients remain happy for over four hours

May 06 2015

In a world where your page load speed is critical to success…

I couldn’t resist. With Drupalcon in Tinsel Town, I’m going to start most of my conversations with “in a world…” [Ed. note: Only if you use the Don LaFontaine voice every time.]

My session for Drupalcon LA is a partner session with Forum One client Mercy Corps. We’ll team up to show you how we maximized the performance of mercycorps.org. Maximizing Site Speed with Mercy Corps will take a tour of specific measures we used to make their critical fundraising platform blazing fast. Come see me, John Brandenburg, and Drew Betts, lead User Experience Designer at Mercy Corps, as we tag team on subjects like measuring user engagement, debugging Drupal caches and measuring performance. We will even discuss some quick tips that every Drupal site manager should use to maximize the performance of their own site.

Why come to this session?

Perhaps because Google itself ranks your site on speed. Or perhaps because incremental improvements in site speed can demonstrably increase conversion rates. Or perhaps you are tired of hearing the groans of your own digital staff about the performance of your public site. After the session, we will have a Q&A where you can learn from the experts and ask questions about improving the performance of your own sites. In the meantime, you can also check out my more detailed blog post on Drupal site speed.

May 04 2015

The U.S. Department of Agriculture’s National Institute of Food and Agriculture (NIFA) works to ensure the long-term viability of agriculture. The institute realized the need for a new website to represent their professionalism and the scope of what they do. The NIFA communications team had been working for two years on web redesign planning before approaching Forum One to work together to create a new site. Prepared with wireframes, whiteboards, and a dedicated team, NIFA wanted to define a process for the development and build of the site. We met informally with NIFA’s product owner at DrupalGov Days in August 2014, and deployed a new site seven months later.


Together, we found the most pressing issues that NIFA wanted to improve included updating the decade-old site structure and content, pushing the envelope on USDA’s design standards, and educating internal stakeholders on the use of a Drupal content management system.

Scrapping Archaic Infrastructure

One of the keys to NIFA’s success was that they came armed with a vision for genuinely improving their users’ experience with the new website. First on their list: changing the site’s organization to make content quickly and easily accessible. The old site’s information architecture was organized in a folder-based structure, in some instances requiring users to go as many as 10 levels down to find their desired content.

Taking a cue from USDA’s digital guidelines for agency websites, NIFA took a topic-centered approach to organizing content and leveraged Drupal taxonomies to tag content throughout the site. The result was dynamically-generated “related content” callouts that make it easy to jump between sections in a simplified site structure.

In the month after the site’s launch, analytics indicated that the number of pageviews per session had dropped by about half, but that users were spending about twice as long on pages. This indicates the users were finding what they wanted faster, without having to visit as many pages.

Design in the Government Context

Designing the look and feel of the new website was one of the more enjoyable challenges that NIFA and Forum One took on. Staying compliant with USDA’s branding and style guidelines for agency websites meant that we had to work within a set framework, but our product owner encouraged us to push the envelope to achieve a modern, clean look. For example, most agencies aren’t allowed to have their own logos, but they can still form their own visual identity through consistency in imagery, color palettes, and fonts. Drawing from NIFA’s existing print collateral – which featured large images, flat design, bold dropcaps, plenty of white space and a soft color palette – we applied a new design that extends the NIFA brand, while maintaining accessibility standards and a mobile-friendly approach, including new icons.

Rather than submitting abstract wireframes or pixel-perfect Photoshop comps every time requirements changed, we iterated on the site’s design through the use of Pattern Lab. Seeing semi-functional prototypes of the site as it came together helped stakeholders grasp abstract concepts and then express their needs. The combination of Pattern Lab and Agile development allowed the work to evolve and change without draining design and theming budgets.

Getting a Grasp on Content

Ultimately, NIFA’s greatest success was in improving the delivery of timely, accurate content to users. Whereas the administration of content on the old site involved complicated workflows and a dedicated team of content administrators, the new site allows NIFA’s communications office to decentralize ownership and management of content on the site. The new website uses Drupal Workbench, along with customized workflows and user permissions, to allow different offices to manage their own content.

While the site was under construction, NIFA undertook an organization-wide training of content migrators, editors, managers, and publishers to formalize processes for content administration on the new content management system. Stakeholders within the agency were intimately involved with the development and execution of NIFA’s new content strategy, taking on a near-wholesale rewriting and migration of content for the redesigned site. With new workflows in place, communications staff are no longer bogged down with the full-time job of pushing content updates, allowing them to focus on strategy moving forward.

Overall, we hope that NIFA’s message can be further amplified with the new website. Users can now more easily explore the variety of NIFA’s topics and programs, and NIFA staff can keep all these topics up-to-date with the improved workflow. We will stay tuned to see the future work of the agency and how their digital efforts support that work.

May 01 2015

We are pumped to talk about styles of storytelling on Tuesday, May 12, 2015 at 2:15 – 3:15pm.

In our DrupalCon session, we will:

  • dissect what’s become a major buzzword – “long-form content.”
  • take a look at different types of long-form content
  • explore how “story” fits in
  • uncover what types of storytelling best suit your needs
  • help you figure out when long-form is right for you and your organization

Many organizations are embracing storytelling techniques to better connect with their audiences and drive them to action. They’re implementing long-form content as a platform for storytelling, making use of its rich imagery, interactive elements, and better sharing capabilities.

People generally learn and remember more when more of their senses are engaged by a story. Stories that include images get about twice the engagement of text-only stories, and stories told with visual elements are instantly captivating. The more senses a story engages, the more emotions it engages and the more memorable the experience becomes.

So come join us! And in the meantime, get yourself excited about storytelling by checking out this TED talk by Pixar writer Andrew Stanton.

Apr 30 2015

At Drupalcon L.A. I’ll be co-leading a core conversation, “Pain Points of Learning and Contributing in the Drupal Community.” A core conversation is not a teaching session; its format is a little different and lets the speaker engage with the audience.

So what is this conversation all about?

I’d like to start with a story. I started contributing to Drupal 8 core just before DrupalCon Portland in 2013. I was listening to a live hangout with different initiative leads in Drupal 8, and Larry Garfield (crell) was talking about how he needed help with the hook_menu conversion. I asked Larry how I could help, and he pointed me to some documentation he wrote on Drupal.org. That’s when I took my first steps into core with a normal issue, and I’ve been contributing ever since. This year I’ve been slowly climbing up the contributor list on drupalcores.com.

As someone who puts a lot of energy into contribution, I hope it means something when I say: it’s too hard to contribute to major/critical issues in the Drupal 8 issue queue.

I ran into a great example recently, when I picked up issue 2368769. I figured that after 5 years as a Drupalist, I must be able to make some meaningful contribution to this critical bug. Boy was I wrong! What did they mean by “lazy-loading route enhancers”? I searched the codebase and Drupal documentation, and couldn’t find any example to work from. I found generic Symfony documentation on the subject, but it still wasn’t enough.

What’s going on in the issue queue?

This story reveals a bottleneck in the Drupal 8 development process: the top contributors. There is a group of 50 – or perhaps fewer – who understand and are current on the ongoing major/critical issues with Drupal 8. We all appreciate their incredible hard work, especially since most of them are contributing in their personal time. But in my case, even as an experienced Drupalist and core contributor, I was stuck! Asking top contributors for help in IRC is always an option, but it distracts them from their own work, concentration, and thought process – we don’t want to see top contributors spending 90% of their time answering questions!

So how can we make it possible for non-top-50 contributors to help out on major/critical issues? How can knowledgeable Drupalists who want to contribute to major/critical issues make life easier for top contributors, instead of harder? What are some ways to get knowledge transfer outside of that group?

With just a little more guidance, people outside that “top 50” group could do so much more than the “novice” and “normal” issues we presently tackle. We talk about “continuous contribution” in Drupal 8, where a contributor doesn’t hesitate to work on the issues, and if you’re eager to learn every day, nothing should stop you from contributing.

How will the Drupal world look with our new ideas adopted? What could be possible?

In the Drupal community, we always say “if you don’t like something, make it better.” This session is that first step to make this better.

I’m excited to hear suggestions from the community. How do we break the “top 50” limit, and let the next 100 contributors contribute to major/critical issues? This conversation is where we can work on this problem together, to encourage more contributors to stop limiting themselves and get involved on a deeper level. Maybe we’ll even see the benefits as soon as big sprint day on Friday, May 15, 2015. I hope to see more contributors working on critical major bugs/issues. Let’s break the barrier together!

Apr 28 2015

Lately, I have found myself involved in several discussions on improving the performance of Drupal sites. This can be a deep subject, with many different ways to approach it. In this blog post, I want to highlight three very simple methods to boost the speed of your site and share some raw data to show how much of a boost these measures can provide.

In a nutshell: Use Views Caches, the Page Cache, and CSS/JS Aggregation for better site performance. If you work in Drupal development every day, you know this all too well, but for some actual, benchmark comparisons, read on.

Before I go on, I will recognize that this is a fairly well-covered subject. People talk a lot about site performance. Perhaps it’s because at least one study has demonstrated that a 2-second delay in site load times can increase abandonment rates from 67% to 80%. The reason to bring these three methods up, however, is that they are such simple measures, they are often overlooked. It could be that no one remembered to turn these settings on, or perhaps developers were debugging an issue at one point and disabled something during their investigation, which they never re-enabled. It happens. What I would like to drive home is the importance of these simple measures, backed up by hard data. For these reasons, I also recommend using modules like Prod Check or Site Audit, which will alert you when your Drupal site is not as well optimized as it should be.

For the analysis below, I used a tool called Apache Benchmark. It is well known in the development community and can simulate large volumes of traffic visiting a site. One option the tool provides is the ability to output results to a CSV file containing a range of percentages (0-99), each with a corresponding number of milliseconds (ms). This means that for each percentage, that percentage of requests were served under the corresponding time. These graphs are cumulative, e.g. 20% were served in 500 ms or less, 60% were served in 800 ms or less, and so on. I enjoy using this tool so much that I’ve incorporated it into a script I am developing to provide a consistent suite of tests for evaluating a site’s performance. In each of the following examples, I’ve simulated 1,000 requests, with 10 concurrent requests at a time.
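As a sketch of how that percentile output can be consumed (the sample data below is invented for illustration; an invocation along the lines of `ab -n 1000 -c 10 -e percentiles.csv http://example.com/` is what produces the real file):

```python
import csv
import io

def summarize_percentiles(csv_text, wanted=(50, 90, 99)):
    """Parse the percentile CSV written by `ab -e` (percentage, ms)
    and return the requested percentiles as a dict."""
    table = {}
    for row in csv.reader(io.StringIO(csv_text)):
        try:
            pct, ms = int(row[0]), float(row[1])
        except (ValueError, IndexError):
            continue  # skip the header line or any malformed rows
        table[pct] = ms
    return {p: table[p] for p in wanted if p in table}

# Invented sample data, shaped like real `ab -e` output.
sample = "Percentage served,Time in ms\n50,805\n90,1320\n99,2100\n"
print(summarize_percentiles(sample))  # {50: 805.0, 90: 1320.0, 99: 2100.0}
```

Comparing the 50th and 99th percentiles from two such files before and after a configuration change gives a much fairer picture than comparing single page loads.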

If you are running a Drupal website, you are almost certainly using the Views module. Views provides a lot of options for building and rendering lists of content. However, probably the most overlooked option in Views is its caching ability. By default, caching in Views is disabled. Turning it on gives you two options: (1) caching the query results and (2) caching the rendered output. Use whichever cache lifetime you are comfortable with, but be aware that longer cache times will delay the appearance of new posts in the view. One helpful module to get around this is Views Content Cache, which expires cached views whenever matching content is updated. If your site is very active, that might happen often, so consider simply being at peace with semi-stale content. Your site will perform a lot better and your users will be happier for it.

So what kind of boost does this provide? Let’s compare three views. In a clean install of standard Drupal, I’ve created 1,000 article nodes through devel_generate and created three different views — one view with the default 10 items per page, another with 100 items per page, and a third with 1,000 items per page. Here is the result comparing each page with no caching whatsoever and versions using the cached views (using Views Content Cache, not that this matters).


The 10 items per page saw a decent decrease in response time. The actual means were 1,626 ms for no cache and 1,407 ms for the views cache, a 219 ms decrease. The effect is significantly more pronounced as we raise the number of items rendered on the page.


The change here in mean request time was from 3,741 ms to 1,483 ms, a 2,258 ms difference. Note the spike at the end represents early requests made before the cache was “primed.”  Better still, look at the effect when we have 1,000 items per page.


When we talk about caching, we usually talk about caching in layers. If one layer of the cache is bypassed, the layer below is always available to catch the request. If you want to take a deep dive into caching in Drupal, here is a great video from DrupalCon 2014. A layer above the Views cache is the Drupal page cache. This is the mechanism that stores the rendered page in the database, which Drupal serves to the next user until the page expires or the cache is cleared. Let’s just look at the effect on the 100-items-per-page view.


We managed to cut the mean response time down to 178 ms, a 1,637 ms difference, or a 90% decrease in response time! You can probably see that I forgot to clear all the caches prior to the page cache test run, as its priming started at the level of the Views cache.
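The 90% figure follows directly from the quoted means; as a quick check:

```python
# Mean response times quoted above (ms); the uncached mean is
# recovered from the cached mean plus the stated 1,637 ms difference.
cached_mean = 178
uncached_mean = cached_mean + 1637   # 1,815 ms
decrease_pct = (uncached_mean - cached_mean) / uncached_mean * 100
print(round(decrease_pct, 1))  # 90.2
```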

If you have enabled the page cache and don’t feel like you are getting any improvement, you can look at your HTTP headers to confirm that it’s working. Remember, this will only work for anonymous users. If you see:

X-Drupal-Cache: Hit

Then you are hitting the page cache! But if it says Miss, then something may be amiss (see what I did there?). A quick note to developers: if you ever use the $_SESSION global variable in Drupal, then users, including anonymous ones, will be given a session cookie, which means that pages will not be cached for that user or any other user with a session. A full code review of a site can turn up any usages of this.
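If you’d rather script this check than eyeball it, here is a minimal sketch (the function name and structure are my own; it inspects whatever header mapping your HTTP client returns):

```python
def page_cache_status(headers):
    """Classify the X-Drupal-Cache response header.
    `headers` is any mapping of header names to values."""
    # Header names are case-insensitive, so normalize before the lookup.
    normalized = {name.lower(): value for name, value in headers.items()}
    value = normalized.get("x-drupal-cache", "").strip().lower()
    if value == "hit":
        return "hit"
    if value == "miss":
        return "miss"
    # Header absent: page cache disabled, or the request wasn't anonymous.
    return "absent"

print(page_cache_status({"X-Drupal-Cache": "HIT"}))      # hit
print(page_cache_status({"Content-Type": "text/html"}))  # absent
```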

Thus far, we’ve focused on improvements that can be described as “back-end enhancements.” They alter the way the software behaves to reduce the time it takes to render the page, which shortens what we call the “time to first byte.” This signifies the point at which the end user begins receiving data from the site. This value is critically important, because it is the initial request that will initiate all the other assets your end user will still need to download, including CSS, JavaScript, images, fonts, animated GIFs of cats, and whatever else you might need to deliver to the user. Each of these items also has its own time to first byte, but until the initial page begins to download, your browser won’t even know to go fetch those items.

Another point to consider is that most browsers will only download five to eight assets from a given host at a time. So if your page has 80+ items that need to be downloaded, some will wait in line until the assets before them have finished and a download slot is available. This means that pages with a lot of assets can take a while to load. Which brings us to the next absurdly simple way you can increase site speed.
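The queueing effect described above can be sketched with a toy model (all numbers here — the connection cap and the per-asset time — are illustrative, not measurements):

```python
import math

def total_download_ms(num_assets, max_parallel=6, ms_per_asset=50):
    """Toy model: assets download in batches bounded by the browser's
    parallel-connection cap. All numbers are illustrative."""
    rounds = math.ceil(num_assets / max_parallel)
    return rounds * ms_per_asset

print(total_download_ms(80))  # 14 batches -> 700 ms
print(total_download_ms(12))  # 2 batches -> 100 ms
```

Even in this crude model, cutting the asset count from 80 to 12 cuts the tail of the download queue dramatically, which is exactly what aggregation does.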

Aggregation is the act of taking several of the assets used on a page, combining them into one file, and directing the user to download that file instead of the source versions. In Drupal, if you look at the page source without aggregation, you will see that it includes something like this:

 <script type="text/javascript" src="http://localhost:8080/misc/jquery.js?v=1.4.4"></script>

With aggregation turned on, it will change to something like this:

<script type="text/javascript" src="http://localhost:8080/sites/default/files/js/js_xAPl0qIk9eowy_iS9tNkCWXLUVoat94SQT48UBCFkyQ.js"></script>

With far fewer of these individual references. Now here is where we run into issues with the tools we are using to measure performance. The benchmark tool we have been using doesn’t actually download secondary resources like the JavaScript and CSS assets we are aggregating. In general, getting consistent, reliable benchmark data is pretty difficult. So in my attempt to demonstrate what aggregation can actually do, I need to turn to more anecdotal evidence.

The Procedure

In my own browser, I’ve opened up the developer tools network tab, performed five hard refreshes, and recorded the load time for each. I performed this same procedure for both an un-aggregated and an aggregated page.

Without Aggregation

The Data: 232, 254, 310, 236, 216

Average Page Load: 249.6 ms

With Aggregation

The Data: 209, 247, 188, 194, 213

Average Page Load: 210.2 ms

It appears that on average, my test site saw a 39.4 ms reduction in total load time, or 15.79%. So there you have it: we started with a page requiring over 1.5 seconds to load and brought that down by nearly 90% with just the simple tools available by default in Drupal. You know when it gets even better? Once Drupal has successfully served a cached page, the visitor’s browser will start sending the “If-Modified-Since” header, so unchanged pages aren’t even downloaded again. Average load time for these types of requests was around 130 ms.
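Those averages and the quoted reduction can be reproduced from the raw numbers:

```python
# Load times (ms) from five hard refreshes each, as recorded above.
without_agg = [232, 254, 310, 236, 216]
with_agg = [209, 247, 188, 194, 213]

mean_without = sum(without_agg) / len(without_agg)   # 249.6 ms
mean_with = sum(with_agg) / len(with_agg)            # 210.2 ms
reduction_pct = (mean_without - mean_with) / mean_without * 100
print(round(reduction_pct, 2))  # 15.79
```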

It’s 2015, so we can’t talk about Drupal without talking about Drupal 8. What’s new in the world of performance in the next big release of Drupal? The promise is faster loading across the board, thanks to the Symfony framework. As far as configuration options like the ones described above are concerned, all the same tools are still available. The exciting new feature here is tag-based caching, available by default in Views. This has the potential to bring Views Content Cache-like functionality into core for Drupal 8 views, but in a much more sophisticated way. Cache tags allow tagging cache entries with arbitrary values, e.g. a node ID or a language. This allows us to clear the right caches later when appropriate, without clearing unrelated caches. This blog post by Dries summarizes the strategy perfectly.

These three suggestions, which anyone with administrative access can apply, can make the difference between your site handling a few simultaneous users and handling hundreds or possibly even thousands at once. The above are just the basic, quick wins that any site administrator can perform, with some data on just how much of a boost the three steps can provide. If you are a developer looking for more advanced techniques for improving your site’s performance, below are a few additional ideas. If you aren’t sure how to go about these, get in touch with us, and we can talk about what Forum One can do for you.

  • Optimize images/reduce number of images used
  • Reduce number of social media widgets
  • Enable compression
  • Disable database logging or use syslog instead
  • Keep software up to date
  • Change backend cache to Memcache or APC
  • Use the Entity Cache (which complements the above suggestion nicely)
  • Use a reverse proxy caching tool like Varnish
  • Lazy load images
  • Defer javascript loading
  • Use a CDN
  • Configure the webserver to serve static assets directly, i.e. avoid routing requests for images and JavaScript files through Drupal.

Apr 24 2015

Launching a website is just the beginning of a process. Websites need nurturing and care over the long run to stay relevant and effective. This is even more true for a service or tool such as LibraryEdge.org. Why would users come back if they can only use the provided service once or can’t see progress over time? And how can you put that love and care into the service if it is not self-funded?

This month, LibraryEdge.org released a number of changes to address just these issues.

Helping Libraries Stay Relevant

Before we dive into the release, here’s a bit on the Edge Initiative.

With the changes created by modern technology, library systems need a way to stay both relevant and funded in the 21st century. A big part of solving that problem is developing public technology offerings. Even in the internet-connected age, many lower-income households don’t have access to the technology needed to apply for jobs, sign up for health insurance, or file taxes, because they don’t have personal computers and internet connections. So where can people go to use the technology necessary for these and other critical tasks? Libraries help bridge the gap with computers and internet access freely available to the public.

It’s important that libraries stay open and are funded so their resources remain widely available. By helping library systems improve their “public access computers/computing,” the Edge Initiative and its partners have made major strides in making sure libraries continue to be a valuable resource to our society.

That’s where LibraryEdge.org comes in. The Edge Coalition and Forum One built LibraryEdge.org in 2013 as a tool for library systems to self-evaluate their public technology services through a comprehensive assessment – plus a series of tools and resources to help library systems improve their services.

New Functionality


The biggest feature update we recently launched was enabling libraries to retake the Assessment. They can see how they have improved and where they still need work compared to the previous year. To create a structure around retaking the Assessment, we built a new feature called Assessment Windows, which lets state accounts control when the libraries in their states can take the Assessment and track their libraries’ goals and progress on Action Items. This allows states to more accurately assess the progress of their libraries and adapt their priorities and programming to align with library needs.


Results Comparison

The Edge Toolkit was initially built to allow users to view their results online, along with providing downloadable PDF reports so libraries can easily share their results with their state legislatures and other interested parties. Now that libraries can have results for two assessments, we’ve updated the online results view and the PDFs. Libraries can now see a side-by-side comparison of their most recent results with their previous results.



It’s common knowledge that people retain more of what they see, so we’ve also visualized important pieces of the results data with new graphs. If a library has only taken the assessment once, then the charts will only display its highest and lowest scoring benchmarks. However, if they’ve taken the assessment a second time, they can also see bar graphs for the most improved and most regressed benchmarks.


Improved User Experience


We made a number of enhancements based on feedback from libraries that have been using the tool for the past couple of years, as well as from interviews that we conducted with State Library Administrators. Starting with a series of interviews gave us great insight into how the tool was being used and what improvements were needed.

New Navigation

The added functionality of being able to retake the Assessment increased the level of complexity for the Edge Toolkit. So we redesigned the interface to guide users through this complex workflow. We split out the Toolkit into four sections: introduction/preparation, taking the assessment, reviewing your results, and taking action. This new workflow and navigation ensures a user is guided from one step to the next and is able to complete the assessment.

Notification Messages

Several dates and statuses affect a library system as they work through the assessment, such as how long they have to take it and whether it is open to be retaken. We’ve implemented notifications that inform the user of this information as they are guided through the workflow.


Automated Testing

When we release new features, we need to ensure other components on the site don’t break. Testing a system this complex can take a long time and gets expensive over the lifetime of the site if it’s done manually. Furthermore, testing some sections or statuses involves taking a long assessment multiple times. In order to increase our efficiency and save time in our quality assurance efforts, we developed a suite of automated tests using Selenium.

What’s Next for Edge

The updated LibraryEdge.org now allows libraries to assess their offerings again and again so they can see how they are improving. Additionally, we’ve built a paywall so Edge can be self-supporting and continue to provide this valuable service to libraries after current funding ends. The launch of this updated site will help Edge remain relevant to its users and, therefore, ensure libraries remain relevant to our communities.


Apr 07 2015

We don’t have a lot of feedback about how our patrons are using the current equipment booking system. There may be information that users could share with one another (and the library) if given a mechanism to do so. So as part of the new booking system implementation in Drupal, we set a task of including a commenting feature. Each reservable piece of equipment stands alone as a node so all we have to do is turn on commenting, right?


But there are a couple of things that are worth noting about that.

If you’re enabling comments on a content type, it’s probably a good idea to consider who can view (and post comments to) that content. That’s all in the permissions table.

In our scenario, we didn’t want unauthenticated comments and we didn’t want to restrict the individual equipment pages (e.g. the page for iPad copy 2) to any kind of login. The request to reserve equipment from that page would trigger the login.

The snippet from the permissions table below shows how we adjusted the comment access. Note that these permissions will apply anywhere else we’re using comments on our site … we’re not currently, but if we do in the future we’re fine with this access level.

permissions table for comment settings toggled on for authenticated users

Once authenticated, the comment form defaults to giving users a text format selection option. There are advantages to users having a WYSIWYG format, but the format selector itself is noise. Hiding it can be handled in the text format configurations or even the permissions table; an easier way is with the Simplify module.

Simplify gives you an interface to hide a bunch of stuff that may be noisy to users adding content — publishing options, path settings, etc.

And for comments it lets you hide text formats.

The finished product:

comment box without text format
Mar 23 2015

For our equipment booking system we needed two kinds of content types: reservation and equipment. Patrons would create nodes of the reservation content type. Staff would create nodes of the equipment content type(s). Because our library circulates a lot of different equipment we need a bunch of content types — one for each: Apple TV, Laptop, iPad Air, Digital Video Camera, Projector, etc. The equipment content types all need the same field structure — title, description, image, accessories and equipment category. It’s tedious creating these individually, but once you get one fully fleshed out (and the skeletons of the rest in place) then the Field Sync module will finish the job with the push of a button.

field sync screenshot

The reservation content type would be canonical for each kind of equipment. In other words, we don’t have to create a Reservation_iPad and a Reservation_Laptop and a Reservation_Projector, etc. There’s just one: Reservation. The way we accomplish this is by using an entity reference field that looks for any nodes of equipment content types.

entity reference configurations for reservation content type

When a patron creates a node of the reservation content type, he/she will select the equipment to be reserved in the entity reference field. This entity reference structure allows us to offer a pretty helpful feature for patrons navigating from a specific equipment page to the reservation form. An Entity Reference Prepopulate and Display Suite combination gives us a “Reserve This Item” link on equipment pages (say Apple TV – Copy 1) that sends the patron to a node/add reservation page that has that piece of equipment (in this case Apple TV – Copy 1) selected in the equipment field.

entity reference prepopulate with reservation link

There’s good documentation out there for Entity Reference Prepopulate — check out this video. But it might be worth explaining how we built the URL that prepoluates the entity reference field. With Display Suite you can create a custom code field — we hardcode a URL to the node/add reservation form, with a token for the id of the currently viewed entity appended onto it. When the user clicks the link, the Entity Reference Prepopulate module kicks off, sees the id and builds the form with that entity referenced.

Mar 23 2015

Landing pages are a must-have for any web business. Every marketer will tell you that pointing ads to a home page is a waste of money. Really, any campaign should have a dedicated landing page to maximise conversion.

Here is the problem: setting up landing pages in Drupal is not easy. Modules like Panels and Display Suite can certainly help, but they fall far short of the flexibility needed. Also, landing pages have to be tweaked over and over again, which can be super time-consuming and expensive if you hire designers and developers.

We found a way that enables people with no Drupal skills to build completely custom landing pages within minutes.

If all you have is a hammer, everything looks like a nail.

First, we needed to be open to using a tool outside of Drupal. Instapage got our attention.

Instapage is a service that enables anyone to build landing pages with an intuitive drag&drop editor. It includes features like A/B testing, webform builders, analytics, and much more. And we started with that.

Unfortunately, Instapage could only publish landing pages on subdomains. We noticed its support for WordPress, where landing pages can be put on the domain’s path (e.g. wordpress-site.com/path). We reverse engineered the WP plugin and made an Instapage module for Drupal.

This means you can build your landing page on Instapage and then import it into your Drupal site. You can choose the URL at which the page will be located. When you edit the page on Instapage, the changes are immediately visible on the live site, without touching Drupal! I recorded a short screencast explaining how Instapage for Drupal works.

[embedded content]

We started using Instapage to set up all our public pages and ended up saving a whole lot of time. I do have to explicitly mention that in order to use this feature inside Instapage, you have to go with a paid account. If you think about how much an hour of your Drupal developer costs, it’s still super cheap!

Now go, install and try the Instapage module for Drupal.

Mar 11 2015

This is the first in what’s going to be a series of posts documenting our equipment booking system project. We’re developers working at a library that circulates equipment (laptops, tablets, cameras, etc.) — and we’re sick of maintaining the custom PHP application that manages the reservation process. So we built the whole thing into our existing Drupal site. I say “built” because it’s done … or at least sitting on the production server waiting for content to be entered. We’re doing the documentation after the fact, so I’ll try to pick and choose what’s worth putting out there. I’m guessing that will boil down to plugging a few modules and spending way too much time writing about the PHP script we used to check for reservation conflicts. We’ll see.

The beginning of the project was deciding whether or not we wanted to use a module to manage the reservation process. Actually the beginning was MERCI — we got a little turned around on this one … picked the module and pitched it before we had everything spec’d out. Once we dug in, MERCI turned out to be a reasonable module but just a little heavier than what we needed. In particular, the “bucket” and “resource” model was too much and it was kind of a pain to manage without being able to get into the field configurations. We also tested out Commerce Stock for its inventory tools. Way heavier than MERCI. To use Commerce Stock we would have to install Commerce and everything that comes with it. Rather than ripping out things we weren’t going to use (or adding more to our already overstuffed stack) we decided to build the whole thing with content types, rules and views.

No problem right?

Mar 08 2015

A while back I wrote Part I of how we built the Estonian Government web platform. I had every intention of quickly following that up with Parts 2 and 3, but as always, things got busy. The good news, however, is that all the sites have now been live for several months, so I can use live examples to illustrate my text here.

Part II is about some of the functionality we built and the technical solutions used for it. The development phase lasted throughout the second half of 2013 and first months of 2014 – 7 months of development means a lot of different stuff was built. Instead of trying to cover it all, I will try to focus on the parts that I feel characterize this project the best.

I already wrote about how we implemented the Google Site Search for the Government platform, which definitely was one of the big and complex parts of the project. But yet another big one was…

Integration with centralized state personnel database

The Estonian Government keeps the contact information of its employees – ministers, officials, department personnel, etc. – in a unified information system based on SAP technologies. That is a good thing, as in theory only a single integration is needed to show the public contact details of those people on the Government web platform sites. There was a lot of value there – across the 14 sites, that’s thousands of people whose contact details, positions and careers within their organizations are in constant change, and keeping that information up to date would be a huge task for a lot of web editors. Instead, we aimed to get all that data from the internal systems, where it was kept up to date anyway.

But there were problems:

  1. Different organizations had different hierarchies – e.g., the Government Office had two levels of departments, while the Ministry of Economics had four.
  2. The SAP database did not contain, or could not export, all the information that needed to be displayed on the web.
  3. Different Ministries had slightly different practices for how they stored contact information in the data fields, so there were many exceptions to what could be presented directly to the public and what needed manual sanitization to get a more or less unified picture across all the Ministries.

The field lock checkboxes on Contact node edit page

It became clear quite fast that there were too many exceptions, and some of them could not be predicted with 100% accuracy. So we needed a combination: have 95% of contacts automatically imported, but keep the possibility of manual overrides – and make sure a field could be switched back from manual to automatic handling, or vice versa, as needed.

The technical solution was to build a “field lock” functionality. A daily automatic Feeds import created or updated Contact nodes. Each node contained about 10 different fields of data for a person, such as Name, Position, Department, Education, etc. If any of those fields needed manual adjustments, the editor could make them, then use a special checkbox on that field to lock it from automatic overrides – so when the next automatic import ran the next day, that field was not overwritten. If the data was later corrected in the backend SAP system, the lock could be removed and Feeds would resume writing into that field.
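The merge rule boils down to: overwrite each field from the feed unless its lock checkbox is set. A minimal Python sketch of that idea (field names are hypothetical; the real implementation lives in the Feeds processing pipeline in PHP):

```python
def merge_contact(existing, imported, locked):
    """Update a contact record from the daily import, skipping locked fields.

    Illustrative sketch -- 'locked' mirrors the per-field lock checkboxes.
    """
    merged = dict(existing)
    for field, value in imported.items():
        if field not in locked:
            merged[field] = value
    return merged

contact = {'name': 'Jaan Tamm', 'position': 'Advisor (edited by hand)'}
feed = {'name': 'Jaan Tamm', 'position': 'Senior Advisor'}
print(merge_contact(contact, feed, locked={'position'}))
# {'name': 'Jaan Tamm', 'position': 'Advisor (edited by hand)'}
```

Unlocking a field simply removes it from the locked set, so the next import writes into it again.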

Examples of the views displaying those semi-automatic Contacts can be seen here and here.

WYSIWYG content templates

Government sites contain a lot of content, and it can get quite overwhelming for a visitor who needs to go through and get a sense of several topics fast. So the Government Office wanted each content page to be something more than just a wall of text, to have more structure, sections, visual data and differentiation. We needed a tool that would help content editors craft pages like that themselves, without having to learn HTML and CSS for many months.

So we used the Wysiwyg API template plugin module to add a template selector functionality for CKEditor. Our frontend wizard Hannes pre-baked some 10-15 nice content templates, and content editors only had to select the correct one from a popup, replace the dummy text, and enjoy a professional-looking, rich content experience. Content templates were WCAG 2.0 AA level compatible, fully responsive, and could even be used inside one another.


Template selector popup within CKEditor

Here are some examples of pages (in beautiful Estonian) made by content editors who probably didn’t have to write a single line of CSS: the Ministry of Economic Affairs and Communications writes about energetics, the Ministry of Culture describes how they use foreign resources, and the Government Office introduces the Estonian Presidency of the Council of the European Union 2018.

Aggregating content between the sites

One goal of the Government platform was to unify the content across different Ministries. This included several behind-the-scenes processes, but also a technical solution to aggregate content across different sites into one central source.

Each Ministry had their own page for news, weekly schedule, and contacts. These in turn had RSS feed outputs, like this and this, which could be used for news readers, open data initiatives and other machine-reading purposes, and also for aggregation.

The central information source and aggregator was to be valitsus.ee (translated, government.ee), which was also running on the Government platform and didn’t represent any physical institution per se, but the government as a whole. So weekly schedule, news, and contact search pages on the valitsus.ee site would display content from all the Ministries, pulled in over RSS. You can see “Source” filters there, in case someone would need to focus on content from just a few of the Ministries.

Unfortunately, the RSS export-import has not yet been configured for all the sites and therefore there is no active aggregation yet – I’m planning to write more about the woes of deploying 13 big sites and the importance of follow-up phases in Part 3 of this series.

Mar 04 2015

I have been thinking awhile about why the Backdrop fork bothers me so much. At first I thought it was just the fact that it will split the community somewhat or take resources away from the Drupal project. But lots of projects I have worked with have been forked in the past, and it is almost always a good thing, so why would this be different?

Thinking about it more, it dawned on me that Backdrop is the easy way out. People can put all sorts of spin on it, but at the end of the day Backdrop is going to cater to developers who don’t want to learn the new development methods in Drupal 8. I understand that mentality: why learn something new when what you know has served you so well for so long? If you jumped on Drupal at version 5, you have had 8 years where you could build awesome websites and just learn how to develop things the Drupal way.

The thing is, a lot has changed in the last 8 years outside of Drupal. I personally think Drupal 8 is a great thing. It is bringing more modern development practices, like object-oriented programming and test-driven development, into the project. It is also doing this not by reinventing the wheel, but by using an established framework, Symfony. These are all really good things, not only for the project, but for Drupal developers as well.

Right now, a Drupal developer who just knows Drupal 7 or earlier has limited job opportunities outside of the Drupal community. If they have not kept up on things like MVC frameworks or how to write tests for code, they are behind a lot of other developers. [1]

By letting developers live in this world longer, Backdrop does them a disservice: their skills fall further and further behind. I realize no one is making anyone use Backdrop, but by advertising it as “the comprehensive CMS for small to medium sized businesses and non-profits” and talking about how easy it is to convert from Drupal 7, I feel it makes it too easy to suck developers in without thinking about the potential consequences to their careers.

  1. This is just about back-end development, not front-end coding, but some of this applies there too.
Feb 16 2015

I know BDConf had to be a decent conference, because four months later I am still thinking about it. Right after the trip, I was focused more internally; the conference was a good kick in the pants, a reminder that there is a broader web world than just Drupal. The last three years, my job has kept me so focused on Drupal that I had lost some of that perspective, so it was good to start thinking of the broader world again.

In the time since then, I keep wondering about the tradeoffs Drupal has caused my team to make in our development. Don’t get me wrong, any programming language or platform has tradeoffs, whether in complexity, performance, stability or a number of other areas. When I mentioned Drupal at the conference, one of the reactions I got was “so many divs!” with a look of disdain. That sort of thing gives you pause, and it got me wondering whether the tradeoffs we made when picking Drupal originally still apply.

I still don’t know the answer, I spend more time thinking about it than I probably should. The “so many divs” comment is an example of this. I realize that by having the community develop Drupal and it’s add-ons, our team has been able to make a lot of really cool web sites. But that means that we don’t know what every part of our site is doing, so we have sites that average 1.5-2MB in size routinely with likely inefficent queries, markup, styling and Javascript in there. As someone who came to Drupal from straight HTML, PHP and .NET, this drives me a little bonkers and makes me wonder, is this the right tradeoff to make.

Tangential to this, I have some thoughts about the Drupal 8/Backdrop situation too, which I am working into another post. To the web folks out there reading this, I would love to hear your comments, because with this spinning around in just my head, I am not getting as far as I would like.

Feb 13 2015

The Drupal community talks a lot about best practices. When I say best practices I mean code-driven development, code reviews, Scrum, automated tests… I immediately realised that introducing new ways of working was not going to be easy. So I figured, why not ask one of the smart people how to start? Amitai (CTO of Gizra) was very kind to have a call with me, explaining how The Gizra Way™ started and evolved.

One thing at a time

Making too many changes at once can backfire and meet resistance. A new idea has to be sold to the team. It is unrealistic to expect the whole team to just switch to a new way of working. Mastering one trade is better than struggling with many.

Testing period

Having a testing period takes away the pressure. If it does not show any benefit after a week, then you can stop doing it, but you really have to try during that week.

Leading by example

Just giving instructions does not help. In most cases developers will not be able to see the benefit at first sight and it can even feel like a punishment. Even if you are not a developer, you can encourage the team to try their best during the trial.

No exceptions

Starting a new best practice can feel like procrastination. In the beginning I had a hard time selling why we need to have every project in a Git repository. Now we just say: no exceptions.

During our call with Amitai we discovered that reviewing code from other developers is the foundation of a great team. Code reviews can be as simple as asking colleagues what they are up to. So we started our trial week of code review. After one week I was able to see changes in how we communicate and help each other. In this spirit we ended up releasing two modules and switching developers to the same IDE.

What really resonated with me was the fact that it doesn’t need to be perfect. As Amitai says: “I am not a purist.” Having one code review or one automated test is better than none at all.

I will leave you with this great presentation from Amitai in DrupalCon Austin 2014:

[embedded content]

Feb 08 2015

This widget allows the Drupal Fivestar module to accept votes for various criteria and average them into one; it then displays the average of the multiple votes.

To create views sortable by the new widget (the average of five votes), hook_votingapi_results_alter() was used in a small custom module.

Below is a tutorial on how to implement the code.

Best practices for using multiple vote criteria in a node with the Fivestar module in Drupal 6.

1. Enable the Fivestar and Voting API modules.
2. Edit the content type and enable the Fivestar rating.
3. Add a CCK field of type “Fivestar Rating” with the “Fivestar rating” select list widget.
4. Configure this CCK field: add the Voting Axes, which are the multiple criteria separated by commas. For example: first, second, third, fourth, fifth. Save the field settings.
5. Create the template for the node type. In this example the node type is “teacher”, so the template will be node-teacher.tpl.php.

In the template paste this snippet:

$output = '';
$tags = array(
  'first' => t('Communication'),
  'second' => t('Availability'),
  'third' => t('Skills'),
  'fourth' => t('Efficiency'),
  'fifth' => t('Personality'),
);
$i = 0;
$a = 0;
foreach ($tags as $tag => $title) {
  $i++;
  $votes = fivestar_get_votes('node', $node->nid, $tag);
  $values = array(
    'user' => isset($votes['user']['value']) ? $votes['user']['value'] : NULL,
    'average' => isset($votes['average']['value']) ? $votes['average']['value'] : NULL,
    'count' => isset($votes['count']['value']) ? $votes['count']['value'] : NULL,
  );

  if (user_access('rate content')) { /* Check user access for voting in Fivestar. */
    $settings = array(
      'stars' => 5,
      'allow_clear' => TRUE,
      'style' => 'average',
      'text' => 'dual',
      'content_type' => 'node',
      'content_id' => $node->nid,
      'tag' => $tag,
      'autosubmit' => TRUE,
      'title' => $title,
      'feedback_enable' => TRUE,
      'labels_enable' => TRUE,
      'labels' => array(t('Poor'), t('Okay'), t('Good'), t('Great'), t('Awesome')),
    );
    $output .= drupal_get_form('fivestar_custom_widget', $values, $settings);
  }
  else {
    $output .= theme_fivestar_static($values['average'], 5, $tag);
  }
}

$reliability_rating = votingapi_select_results(array('content_id' => $node->nid, 'tag' => array('first', 'second', 'third', 'fourth', 'fifth'), 'function' => 'average'));
foreach ($reliability_rating as $v) {
  $a = $a + $v['value']; /* Sum the per-criterion averages. */
}
print theme('fivestar_static', $a / $i, '5'); /* Print the Fivestar widget with the overall average result. */
print $output; /* Print the five Fivestar fields, each with its own average result. */
Check out the widget in use on this social networking site for yoga teachers (ignore the debug output at the top).

Sort By Votes

To create views sortable by votes, here is a small custom module that uses hook_votingapi_results_alter(). This is a complete custom module, so save this code in a .module file with its own .info file.
function mymodule_votingapi_results_alter(&$results, $content_type, $content_id) {
  $vote_avg_sum = 0;
  $vote_avg_count = 0;
  $vote_tags = 0;

  foreach ($results as $tag => $data) {
    if ($tag != 'vote') {
      $vote_avg_sum += $data['percent']['average'];
      $vote_avg_count += $data['percent']['count'];
      $vote_tags++;
    }
  }
  if ($vote_tags > 0) {
    $results['vote']['percent']['average'] = $vote_avg_sum / $vote_tags;
    $results['vote']['percent']['count'] = $vote_avg_count / $vote_tags;
  }
}

1. In Views settings, add the relationship “Node: Vote results”.

2. Configure it: “Value type: No filtering”, “Vote tag: Normal vote”, “Aggregation function: No filtering”.

3. Go to Sort criteria, add “Vote results: Value”, and choose Descending or Ascending.

4. Add the filters that you need.

5. Then you can add “Vote results: Value” to your fields.

Jan 28 2015

As the largest bicycling club in the country with more than 16,000 active members and a substantially larger community across the Puget Sound, Cascade Bicycle Club requires serious performance from its website. For most of the year, Cascade.org serves a modest number of web users as it furthers the organization’s mission of “improving lives through bicycling.”

But a few days each year, Cascade opens registration for its major sponsored rides, which results in a series of massive spikes in traffic. Cascade.org has in the past struggled to keep up with demand during these spikes. During the 2014 registration period for example, site traffic peaked at 1,022 concurrent users and >1,000 transactions processed within an hour. The site stayed up, but the single web server seriously struggled to stay on its feet.

In preparation for this year’s event registrations, we implemented horizontal scaling at the web server level as the next logical step forward in keeping pace with Cascade’s members. What is horizontal scaling, you might ask? Let me explain.

[Ed Note: This post gets very technical, very quickly.]


We had already set up hosting for the site in the Amazon cloud, so our job was to build out the new architecture there, including new Amazon Machine Images (AMIs) along with an Autoscale Group and Scaling Policies.

Here is a diagram of the architecture we ended up with. I’ll touch on most of these pieces below.


Web Servers as Cattle, Not Pets

I’m not the biggest fan of this metaphor, but it’s catchy: The fundamental mental shift when moving to automatic scaling is to stop thinking of the servers as named and coddled pets, but rather as identical and ephemeral cogs–a herd of cattle, if you will.

In our case, multiple web server instances are running at a given time, and more may be added or taken away automatically at any given time. We don’t know their IP addresses or hostnames without looking them up (which we can do either via the AWS console, or via AWS CLI — a very handy tool for managing AWS services from the command line).

The load balancer is configured to enable connection draining. When the autoscaling group triggers an instance removal, the load balancer will stop sending new traffic, but will finish serving any requests in progress before the instance is destroyed. This, coupled with sticky sessions, helps alleviate concerns about disrupting transactions in progress.

The AMI for the “cattle” web servers (3) is similar to our old single-server configuration, running Nginx and PHP tuned for Drupal. It’s actually a bit smaller of an instance size than the old server, though — since additional servers are automatically thrown into the application as needed based on load on the existing servers — and has some additional configuration that I’ll discuss below.

As you can see in the diagram, we still have many “pets” too. In addition to the surrounding infrastructure like our code repository (8) and continuous integration (7) servers, at AWS we have a “utility” server (9) used for hosting our development environment and some of our supporting scripts, as well as a single RDS instance (4) and a single EC2 instance used as a Memcache and Solr server (6). We also have an S3 instance for managing our static files (5) — more on that later.

Handling Mail

One potential whammy we caught late in the process was handling mail sent from the application. Since the IP of the web server instance sending the mail will not match the domain’s SPF record (the IP addresses authorized to send mail), the mail could be flagged as spam, or mail from the domain could be blacklisted.

We were already running Mandrill for Drupal’s transactional mail, so to avoid this problem, we configured our web server AMI to have Postfix route all mail through the Mandrill service. Amazon Simple Email Service could also have been used for this purpose.

Static File Management

With our infrastructure in place, the main change at the application level is the way Drupal interacts with the file system. With multiple web servers, we can no longer read and write from the local file system for managing static files like images and other assets uploaded by site editors. A content delivery network or networked file system share lets us offload static files from the local file system to a centralized resource.

In our case, we used Drupal’s S3 File System module to manage our static files in an Amazon S3 bucket. S3FS adds a new “Amazon Simple Storage Service” file system option and stream wrapper. Core and contributed modules, as well as file fields, are configured to use this file system. The AWS CLI provided an easy way to initially transfer static files to the S3 bucket, and iteratively synch new files to the bucket as we tested and proceeded towards launch of the new system.

In addition to static files, special care has to be taken with aggregated CSS and Javascript files. Drupal’s core aggregation can’t be used, as it will write the aggregated files to the local file system. Options (which we’re still investigating) include a combination of contributed modules (Advanced CSS/JS Aggregation + CDN seems like it might do the trick), or Grunt tasks to do the aggregation outside of Drupal during application build (as described in Justin Slattery’s excellent write-up).

In the case of Cascade, we also had to deal with complications from CiviCRM, which stubbornly wants to write to the local file system. Thankfully, these are primarily cache files that Civi doesn’t mind duplicating across webservers.

Drush & Cron

We want a stable, centralized host from which to run cron jobs (which we obviously don’t want to execute on each server) and Drush commands, so one of our “pets” is a small EC2 instance that we maintain for this purpose, along with a few other administrative tasks.

Drush commands can be run against the application from anywhere via Drush aliases, which requires knowing the hostname of one of the running server instances. This can be achieved most easily by using AWS CLI. Something like the bash command below will return the running instances (where ‘webpool’ is an arbitrary tag assigned to our autoscaling group):

$ aws ec2 describe-instances --filters "Name=tag-key, Values=webpool" |grep ^INSTANCE |awk '{print $14}'|grep 'compute.amazonaws.com'

We wrote a simple bash script, update-alias.sh, to update the ‘remote-host’ value in our Drush alias file with the hostname of the last running server instance.
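The core of that script is just “pick the last running hostname and rewrite the alias”. A sketch of the logic in Python (the real update-alias.sh is a bash script; the hostnames and alias snippet here are invented examples):

```python
import re

def last_running_host(describe_output):
    """Pick the last EC2 public hostname from `aws ec2 describe-instances` text output."""
    hosts = re.findall(r'\S+\.compute\.amazonaws\.com', describe_output)
    return hosts[-1] if hosts else None

def update_alias(alias_text, host):
    """Rewrite the 'remote-host' value in a Drush alias file's contents."""
    return re.sub(r"('remote-host'\s*=>\s*')[^']*(')", r"\g<1>" + host + r"\2", alias_text)

alias = "$aliases['live'] = array(\n  'remote-host' => 'old.example.com',\n);"
output = "INSTANCE ... ec2-54-0-0-1.us-west-2.compute.amazonaws.com ..."
print(update_alias(alias, last_running_host(output)))
```

Any running instance will do for Drush purposes, since they all serve the same application.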

Our cron jobs execute update-alias.sh, and then the application (both Drupal and CiviCRM) cron jobs.

Deployment and Scaling Workflows

Our webserver AMI includes a script, bootstrap.sh, that either builds the application from scratch — cloning the code repository, creating placeholder directories, symlinking to environment-specific settings files — or updates the application if it already exists — updating the code repository and doing some cleanup.

A separate script, deploy-to-autoscale.sh, collects all of the running instances (much like update-alias.sh, described above) and executes bootstrap.sh on each instance.

With those two utilities, our continuous integration/deployment process is straightforward. When code changes are pushed to our Git repository, we trigger a job on our Jenkins server that essentially just executes deploy-to-autoscale.sh. We run update-alias.sh to update our Drush alias, clear the application cache via Drush, tag our repository with the Jenkins build ID, and we’re done.

For the autoscaling itself, our current policy is to spin up two new server instances when CPU utilization across the pool of instances reaches 75% for 90 seconds or more. New server instances simply run bootstrap.sh to provision the application before they’re added to the webserver pool.

There’s a 300-second grace time between additional autoscale operations to prevent a stampede of new cattle. Machines are destroyed when CPU usage falls beneath 20% across the pool. They’re removed one at a time for a more gradual decrease in capacity than the swift ramp-up that fits the profile of traffic.

More Butts on Bikes

With this new architecture, we’ve taken a huge step toward one of Cascade’s overarching goals: getting “more butts on bikes”! We’re still tuning and tweaking a bit, but the application has handled this year’s registration period flawlessly so far, and Cascade is confident in its ability to handle the expected — and unexpected — traffic spikes in the future.

Our performant web application for Cascade Bicycle Club means an easier registration process, leaving them to focus on what really matters: improving lives through bicycling.


