Nov 26 2014

Voices of the ElePHPant / Acquia Podcast Ultimate Cage Match Part 2 - I had the chance to try to pull Cal Evans out of his shell at DrupalCon Amsterdam. After a few hours, he managed to open up and we talked about a range of topics we have in common. In this part of our conversation we talk about 'Getting off the Island', inter-project cooperation in PHP and Drupal's role in that; the reinvention and professionalization of PHP core development; decoupled, headless Drupal 8; PHP and the LAMP stack as tools of empowerment and the technologists' responsibility to make devices and applications that are safe, secure, and private by default.

In Part 1 of this 2-part series, we talked Drupal, PHP convergence and the "PHP Renaissance" (and "Why PHP?"), open source communities, proprietary vs. open source business and the ethics of helping, the Nomad PHP user group, and more.

Cal on PHP inter-project cooperation

"The PHP community is famous for reinventing the wheel. I've got a closet full of wheels. I don't need any more wheels ... The more all of us can work together ... The fact that we're starting to tear down these walls and stopping reinventing these wheels and starting to work together ... I think this is awesome!"

Decoupled, headless Drupal 8

Cal points out, "All of a sudden, you now have this entire CMS that's headless. I don't have to do the 80% of every project that is exactly the same. It's already built in Drupal! Lemme just take that and then I can do the 20% that is the fun stuff that makes my project special."

Speaking of the fun stuff, as I say in the podcast, here's the biggest benefit I see in Drupal 8 and the biggest differentiator between Drupal and most other projects: "The architecture of Drupal 8 makes it a user interface for building digital businesses. It is a user interface for making APIs and web services – consuming them, outputting them. And the one thing where I believe the Drupal community is going to remain incredibly stubborn is that we are not a developer project to enable developers to do more developing. We're a bunch of developers who've put this system together to empower non-developers to communicate, to build community, to take action in the real world with our technology ... So we will remain the point-and-click community." :-)

We have the technology and the responsibility

Cal puts it like this: "One of the reasons I love being a developer is that it becomes my job to make other people's lives better. What we do makes an impact on everybody, so we need to do it well. Because that makes their lives better, they don't have to worry about those problems. If they want a publishing platform, they want to say something, they can install Drupal, they can get it up and running, and they can get to publishing, which is what they want to do. What we want to do is write code! So it's a win for everybody. But it's wonderful for me to be a developer and to think that what I do has a positive impact on other people."

Guest Dossier

See Cal's full guest dossier in part 1 of this conversation

Interview video

[embedded content]

Nov 26 2014

The Drupal 7 File Resumable Upload Module is a great way to let users upload large files to your Drupal site. This is especially helpful if your server limits the size of files that can be uploaded. The module simply replaces the standard Drupal file upload field with a better alternative that allows:

  • Multiple files to be uploaded
  • The resuming of uploads that may have been interrupted
  • Drag and dropping of files from your file system

The File Resumable Upload module uses HTML5 for the upload field but does fall back to the standard Drupal file upload field if the widget is not supported by your browser.

In this video we cover:

  • How to install and configure the Drupal 7 File Resumable Upload module
  • How to change existing image or file fields to use the new resumable upload widget
  • How the upload widget allows multiple large file uploads, resuming of uploads, and dragging and dropping from the file system

File Resumable Upload Module Note: The 1.0 version of the File Resumable Upload module has a bug in its progress indicator that delays the display of actual upload progress. This bug has been fixed in the dev version on the module's project page.

Nov 26 2014
/**
 * Implements hook_install().
 */
function artist_install() {
  artist_install_terms();
}

/**
 * Installs artist module's default terms that are read from
 * text files in the module's includes folder.
 */
function artist_install_terms() {
  foreach (array_keys(artist_vocabularies()) as $machine_name) {
    $v = taxonomy_vocabulary_machine_name_load($machine_name);
    $wrapper = entity_metadata_wrapper('taxonomy_vocabulary', $v);

    if ($wrapper->term_count->value() == 0) {
      $path = drupal_get_path('module', 'artist') . '/includes/terms_' . $v->machine_name . '.txt';
      $lines = file($path, FILE_SKIP_EMPTY_LINES);
      artist_install_term_tree($wrapper, $lines);
    }
  }
}

/**
 * Installs a term tree.
 *
 * @param $vwrapper
 *   EntityMetadataWrapper of a taxonomy_vocabulary entity.
 * @param $lines
 *   Array of lines from the term text file. The iterator must be set
 *   to the line to parse.
 * @param $last
 *   Either NULL or the parent term ID.
 * @param $depth
 *   Current depth of the tree.
 */
function artist_install_term_tree($vwrapper, &$lines, $last = NULL, $depth = 0) {
  $wrapper = NULL;
  while ($line = current($lines)) {
    $name = trim($line);
    // One leading tab per level; the -1 accounts for the trailing newline.
    $line_depth = max(strlen($line) - strlen($name) - 1, 0);
    if ($line_depth > $depth) {
      // A deeper line: recurse to install it as a child of the last term.
      $tid = $wrapper ? $wrapper->tid->value() : NULL;
      artist_install_term_tree($vwrapper, $lines, $tid, $depth + 1);
    }
    elseif ($line_depth < $depth) {
      // A shallower line: return to the parent level without consuming it.
      return;
    }
    else {
      $data = array(
        'name' => $name,
        'vid' => $vwrapper->vid->value(),
        'parent' => array($last ? $last : 0),
      );
      $term = entity_create('taxonomy_term', $data);
      $wrapper = entity_metadata_wrapper('taxonomy_term', $term);
      $wrapper->save();
      next($lines);
    }
  }
}

/**
 * Installs terms into default vocabularies.
 */
function artist_update_7201(&$sandbox) {
  artist_install_terms();
}

In the preceding code, term names are read from text files that use tab indentation to represent the term hierarchy.

Nov 26 2014

It is hard to believe, but we just finished our second-to-last board meeting of the year. The Association has grown and changed so much in 2014 and the November meeting was a great chance to talk about some of those changes and what we are planning for 2015. It was a somewhat short public meeting as we spent the bulk of our time in Executive Session to review the financial statements from the last quarter and the staff's proposed 2015 Leadership Plan and Budget. As always, you can review the minutes, the materials, or the meeting recording to catch up on all the details, and here's a summary for you as well.

Staff Update

DrupalCons: DrupalCon Amsterdam is over, and we are now focused on evaluation of the event - reviewing all the session and conference evaluations as well as closing up the financials for the event. We will have an in-depth review of the event at the December board meeting. Next up is DrupalCon Latin America, which is progressing nicely now with sessions accepted and registration open. One final DrupalCon note is that DrupalCon Los Angeles session submissions should open in January, so mark your calendars for that.

Drupal.org: Drupal.org has been our primary imperative at the Association this year. We've spent 2014 building a team and really working to pay off a mountain of accumulated technical debt while also balancing the need to release new features for the community. We are very pleased that, with the working groups and community feedback, we've been able to release a Drupal.org roadmap for 2015. We also released a new Terms of Service and Privacy Policy after extensive edits based on community feedback. We'll continue to respond to questions and ideas about these documents and make notes for future releases. We have also finally been able to deploy and use a suite of over 400 tests on Drupal.org. This is work that was initiated by Melissa Anderson (eliza411) and she was extremely helpful in getting those tests up and running again. We're thrilled to be using this contribution after all this time and are extremely grateful to Melissa.

Community Programs: We just held our final Global Training Days for 2014 with over 80 companies on every continent but Antarctica (c'mon penguins!). This program has continued to evolve with new partnerships and curricula used this time around, as well as a plethora of great photos and tweets sent our way.

Marketing and Communications: Joe Saylor and our team have been working with the Security team to develop a follow-up to the recent security announcement focused on the lessons learned and changes our community has made in response to the situation. It's another great example of an Association and community volunteer partnership.

Licensing Working Group

As we discussed in the August board meeting, some community members have expressed concern over the slow and sometimes inconsistent response to licensing issues on Drupal.org. In response, a volunteer group came together to draft a charter which was published for comment just after DrupalCon Amsterdam. Some of the main points from the charter include:

  • All members (chair + 4-5) appointed by the Board
  • Scope is limited to licensing of code and other assets on Drupal.org only, not other sites, and not determining which license is used
  • The group responds to issues; it does not police for issues
  • Will maintain the whitelist of allowable assets

The Association board voted to adopt the charter, so our next steps are to recruit members, create a queue for licensing issues, and then provide some legal training for our new Working Group members. If you are interested in participating, you can nominate yourself for the Working Group. We are planning to present a slate of candidates to the board for approval in the January 2015 meeting.

3rd Quarter Financials

Once per quarter, the board discusses the previous quarter's financial statements and then votes to approve and publish them. In the November meeting the board approved the Q3 financials.

I recently wrote a post highlighting how to read our financial statements, but will summarize here for you as well. Generally speaking, we are performing well ahead of our budgeted deficit spend. Though we had planned for a -$750,000 budget for 2014, a combination of slow tech team hiring, savings on Drupal.org contractor expenses, and some better-than-budgeted revenue means that we will not operate at nearly that level of loss for 2014. Instead, the burden of the staffing investment we've made will really be felt in 2015. We'll see more of this and have a larger discussion when we release our budget and leadership plan next month.

As always, please let me know if you have any questions or share your thoughts in the comments.

Flickr photo: steffen.r

Nov 26 2014

The next beta for Drupal 8 will be beta 4! Here is the schedule for the beta release.

  • Tuesday, December 16, 2014: Only critical and major patches committed.
  • Wednesday, December 17, 2014: Drupal 8.0.0-beta4 released. Emergency commits only.
Nov 25 2014

The Panels and Panelizer modules have opened up a whole world of options for layouts in Drupal, but too often the usage and features of these powerful tools get confused. My goal with this post is to explore some of the capabilities the Panels module has to offer before ever getting into the realm of using Panelizer.

At a high level, the goals of each of these modules break down into the following:

  • Panels: Create custom pages with configurable layouts and components.
  • Panelizer: Configure Panels layouts and content on a per instance basis for various entities.

Often, I see both modules added and used when Panelizer isn’t needed at all. What’s the problem with that? Introducing Panelizer when it isn’t needed complicates the final solution and can lead to unnecessary headaches later in configuration management, content maintenance, and system complexity.

Panels and Panel Variants

Before the introduction of Panelizer, site-builders got by just fine with Panels alone. This original solution is still valid and just as flexible as it ever was. The secret in doing this lies in understanding how variants work and knowing how to configure them.

Default Layout for All Nodes

Once Drupal finds a Panels page in charge of rendering the page request, Panels proceeds through the page’s variants checking the defined selection rules on each. Starting from the first variant, Panels evaluates each set of selection rules until one passes. As soon as a variant’s selection rules pass successfully, that variant is used and the rest below it are ignored. This is why it’s important to pay attention to the order in which you define your variants to ensure you place less strict selection rules later in the sequence.

Using this to your advantage, a good common practice is to define a default variant for your Panels page to ensure there is a baseline that all requests can use. To do this, you’ll need to define a new variant with no selection rules, so the tests always pass, and place it last in the series of variants. Since the selection rules on this variant will always pass, be aware that any variants placed below it will never be evaluated or used.

Screenshot of variant rule

Custom Layouts per Content Type

Once you have a generic default in place to handle the majority of content items for your Panels page, you can start to tackle the pages that might have more specific or unique requirements. You can do this by creating a new variant above your default and defining selection rules to limit its use to only the scenarios you're targeting.

Screenshot of Panel Content

A common use case for this is changing the layout of content based on its content type. To build this out, you need to edit the default node_view Panels page and add a new variant. If the variant is intended to handle all nodes of a given type, I'll typically name it after the content type so its purpose is clear. The next step is to configure the selection rules by adding the "Node: Bundle" rule and selecting the content type you're building for. Once you save the new variant, any detail pages for that content type should render using the new configuration.

Screenshot of Reorder Variants

Building on this, a single page can be expanded to handle any number of variants using any combination of selection rules necessary. It’s common to see a variant added for each content type in this way. Examples of further customizations that are possible include:

  • A specific layout for a section of the site matching a specific URL pattern
  • Separate layouts based on the value of a field on the entity
  • Alternate views of a page if the user doesn’t have access to it
  • Separate views of a page based on the user’s role

Thanks to CTools, these selection rules are also pluggable. This means if you can’t find the right combination of selection rules to enforce the limitation you need, it’s easy to write a new plugin to add your specific rule.
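
As a rough sketch of what such a plugin involves (the module name `mymodule` and the "promoted" condition are hypothetical examples, not from the original post), a CTools access plugin in Drupal 7 needs a plugin directory declaration in the module plus an `.inc` file that defines a `$plugin` array:

```php
// In mymodule.module: tell CTools where this module keeps its plugins.
function mymodule_ctools_plugin_directory($owner, $plugin_type) {
  if ($owner == 'ctools' && $plugin_type == 'access') {
    return 'plugins/access';
  }
}

// In plugins/access/node_promoted.inc: the plugin definition. Once this
// file exists, "Node: promoted" appears alongside the built-in selection
// rules when editing a variant.
$plugin = array(
  'title' => t('Node: promoted'),
  'description' => t('Passes only when the node is promoted to the front page.'),
  'callback' => 'mymodule_node_promoted_access_check',
  'summary' => 'mymodule_node_promoted_access_summary',
  'required context' => new ctools_context_required(t('Node'), 'node'),
);

/**
 * Access callback: check whether the node in context is promoted.
 */
function mymodule_node_promoted_access_check($conf, $context) {
  return !empty($context->data) && !empty($context->data->promote);
}

/**
 * Summary callback: describes the rule in the selection rules list.
 */
function mymodule_node_promoted_access_summary($conf, $context) {
  return t('Node is promoted to the front page');
}
```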

Avoiding Too Many Variants

Using the existing selection rules allows for a great deal of flexibility. Adding in custom plugins further improves your options to define any number of increasingly specific variants.

It is possible to take this too far, however. The cost of creating a new variant is that your layouts and content configurations are now forked further. Any common changes across them now have to be maintained in independent variants to maintain continuity.

Visibility Rules

It’s important to also remember the other features Panels offers, including visibility rules. Visibility rules are configured on a per-pane basis inside a specific variant’s layouts. These rules offer the same conditions available in variant-level selection rules. Since they’re configured on individual panes, however, you can focus on the component-level differences between pages. A common use case for this is to use the same node variant for multiple content types with similar layouts and configure the unique panes with visibility rules to limit which pages they show on.

Screenshot of Visibility Rule

To elaborate on this, here’s an example. Assuming a default node variant with a two-column layout, we can define the common elements that all nodes will have, such as the page title, rendered content, and maybe a sidebar menu. If we then add the requirement that all article nodes include a list of similar articles in the sidebar, we can accommodate this by placing the correct pane in the sidebar and adding the visibility rule “Node: Bundle”. We’ll then configure it to use the current node being viewed and limit it to show only when that node is in the “Article” bundle. Now, whenever a node is displayed, it will show the common panes, but the block for similar articles will only show in the sidebar if we’re viewing an article node.

Screenshot of Node:Bundle

Screenshot of Article Bundle

Choosing the Right Approach

Once you get to the level of creating panel variants or visibility rules for single pages, it’s usually time to ask if you’re using the right tool. When you’ve reached the point where individual page instances need to be different, you have to decide which approach fits best.

If it’s only a handful of pages that are each unique, then the most straightforward solution may be to create independent Panels pages for each of these.

If instead, individual instances of the same type need different layouts or configurations, then it may be time to install Panelizer to allow instance-specific overrides.

Nov 25 2014

If you’ve looked at front-end development at any time during the past four years, you know that there has been an explosion of new technologies. We are inundated with new projects like Bower, Cucumber, Behat, and KSS. It is a lot to take in. At the past two DrupalCons there have been sessions about this overload, My Brain is Full: The state of front-end development. Essentially those sessions are asking “What the hell is going on?”

“Everyone is describing the one little piece they’ve created, but don’t explain (or even reference!) the larger concepts of how all of these elements link together.”

— Frank Chimero, July 2014 Designer News AMA

This is my attempt to explain.

At technology conferences we all typically fall into the trap of focusing on “the little pieces”, the technologies. Process is left to the periphery. And yet, process changes are much more profound than the technology that enables them. The most interesting thing about responsive design, for example, is that it’s a new process, a new way of making websites. The set of technologies that responsive design required already existed, but it required us to step back and make sense of the entire web development landscape in order to know what to do with those tools.

So what big picture will help us understand today’s new front-end technologies? I’ve been doing front-end web development since 1993 (before there even was a back-end to web development) and I’ve only just realized that the current front-end flux is easily explainable as the beginning of a monumental shift in web development: web development is embracing agile development. And the entire way we build websites is being turned inside out.

How does web development do agile? It creates style-guide-driven development.

The building blocks

If we look at any of the new front-end projects, we can categorize everything (yes, everything) into just three categories:

  • Front-end performance: the front-end is where you see most of the lag while browsing websites, so it is critical to focus in this area.
  • Continuous Integration: automation to ensure what you build today doesn’t break what you built yesterday.
  • Components: a way of bundling reusable chunks of HTML, CSS, JavaScript and other assets.

If you understand those three concepts, you can make sense of any of today’s new front-end technologies. There may be hundreds of new projects, but they are just different programming languages, different platforms, different projects and different APIs implementing one or more of those three ideas.

But why those three concepts? Front-end performance has always been a goal (or should have been), so its position on the list should be obvious. But to understand the other two categories, you have to understand agile development.

A core concept of agile development is controlling and minimizing risk. One of the tools to prevent regressions and reduce the risk of refactoring is continuous integration. While back-end developers and devops have been working on this for a while now, we are only now starting to see it in the front-end as those developers slowly get training in agile. (I just became a Certified Scrum Master!)

And to minimize complexity and risk of failure, front-end developers have started to develop components of HTML, CSS, and JS that are reusable and maintainable. Bootstrap? Foundation? Those are just pre-made reusable component libraries. But custom-designed websites and apps are also using the same technique while building custom component libraries.

Even as agile creeps into all the layers of web development, we still need a grand-unifying process that makes the new agile web development possible, unifying back-end, front-end, design, everything. Surprisingly, the once-derided style guide is the key.

Back in the day, website designs were always accompanied by style guides. Even if they weren’t out-of-date before they were delivered (“Ignore that part… I didn’t have time to update it after client feedback”), they always became out-of-date quickly. Since they were separate documents, they didn’t get maintained to reflect the current state of the website and became orphaned documents. But thanks to agile’s continuous integration, style guides can now be auto-generated from the website’s own source code, ensuring that the style guide and the website never get out of sync.

KSS automated style guide: The same source is used in both the application and the style guide.

The new web development process

With an automated style guide documenting your custom component library, building a website becomes straightforward.

  1. Pick a feature to begin development on.
  2. Look through the existing style guide to see if it already contains a design component that you can use as-is or tweak.
  3. If the new feature requires a new component, designers and front-end developers should work together to design and implement it.
  4. Repeat.

My favorite implementation of auto-generated style guides is KSS, a syntax for writing docblock-like CSS comments that describe the design component being built. Simply write a Sass or CSS comment next to your code to document it in the style guide.

// Buttons
// Buttons built with the button element are
// the most flexible for styling purposes. But
// link and input elements are also supported.
// Markup: button.html
// Styleguide 4.3

In fact, I liked it so much I became a maintainer of the kss-node project, the Node.js implementation of KSS.

My presentation at Drupalcon Amsterdam last month focused on many of the points above. In the video below, I also explain the mechanics of building a reusable component.

[embedded content]

As I’ve started using this style-guide-driven development process, I’ve started to figure out new advantages and ways to leverage the style guide. There’s much more yet to tell.

In my next post, I’ll describe how to set up KSS-node to build your own automated style guide.

Nov 25 2014

If you're interested in code quality and providing a means by which to bring Drupal beginners up-to-speed on the coding standards, I recommend reviewing code from all developers. I say "all" developers because everyone needs an editor.

The best way to enforce code reviews is to bake them into your development process. Use a tool like GitLab (there's a free hosted version) to prevent developers from committing code directly to authoritative branches. Instead, have them fork the project repository and submit merge requests. Someone else can then review them. The reviewer can add in-line comments, wait for the developer to make changes, and then accept the request.

Here are some things to look for when reviewing Drupal code submissions. For some of these, we're assuming Git is being used for version control.

  1. Read, understand and follow the Coding standards.
  2. Install, enable, and use the Coder module on your development sites.
  3. For the purists out there, use the Coder Tough Love module as well.
  4. If you're running a continuous integration (CI) system like Jenkins, check the logs for new errors or warnings on new commits. Either way, make sure your development sandboxes have errors being reported to the screen so that developers can see any new errors that they generate. You'll find a lot of errors in your Drupal log if you're not doing this. (Make them refresh their DBs from your Dev site which already has this enabled.)
  5. Speaking of CI, add one or more code quality inspection tools to the mix such as SonarQube. There's actually a pre-configured Vagrant profile to build a VM with everything already set up. See CI: Deployments and Static Code Analysis with Drupal/PHP for details.
  6. Look for unrelated code reversions in merge requests. That is, if you see code changes that aren't related to what the developer is trying to do, there's something wrong. In most cases, this means the developer's branch is out-of-date with the main development branch. He or she should fetch and merge that branch from the origin repository, fix any conflicts, and then add it to the merge request.
  7. Look for debugging code that wasn't removed such as dd(), drupal_debug() and other output functions.
  8. Look for Git conflict markers such as "<<<<<<<", "=======" and ">>>>>>>". These usually indicate a botched conflict resolution.
  9. Notice any lack of comments. Stanzas (small blocks of code that do little things) should be separated by blank lines, each with a comment explaining what it does. It may be clear to the original developer, but that doesn't help anybody else.
  10. Make sure that modules are installed in the right place. This is usually sites/all/modules/contrib (for upstream modules coming from drupal.org) or sites/all/modules/custom (for modules written specifically for the project).
  11. In theme files, usually somewhere under sites/all/themes, look for any functionality that is not theme-specific. Functionality should always be in modules, not themes, so that if the theme is changed, the site still works as expected. This is an extremely common error for beginners. For example, JavaScript files related to modules shouldn't be in the theme directory, but in the module itself.
  12. Ensure consistency in module package names. For custom modules, it's advisable to give the package name the name of the project so that it's clear that these are site-specific. For contributed modules, use what others are using; don't arbitrarily make one up. This helps keep your list of modules organized.
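
To illustrate the stanza style from point 9, here is a hypothetical fragment (the node and field names are invented for this example) where each small block of code gets its own blank line and explanatory comment:

```php
// Load the complete node so we can inspect its attached fields.
$node = node_load($nid);

// Collect the term IDs from the genre field; field_get_items()
// handles language negotiation for us.
$tids = array();
foreach (field_get_items('node', $node, 'field_genre') as $item) {
  $tids[] = $item['tid'];
}

// Load the full term objects in one call rather than one at a time.
$terms = taxonomy_term_load_multiple($tids);
```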

These are the most common issues I've discovered while reviewing code. If you have any others, feel free to add them as comments. I can add them to the list here.

Happy reviewing!

Nov 25 2014

If you need to create dynamic output on an entity when it is displayed on your Drupal site you have multiple options. One method that is easy to implement is using hook_entity_view().

You can insert this hook function into your custom module (see creating a custom module for more info).


function YOUR_MODULE_NAME_entity_view($entity, $type, $view_mode, $langcode) {
  if ($type == 'node' && $view_mode == 'full' && $entity->type == 'article') {
    $output = 'Custom output value';
    $entity->content['custom_output'] = array(
      '#markup' => $output,
      '#weight' => 99,
    );
  }
}
Within this function you have access to the full entity object being viewed on your site, as well as the entity type and its view mode. You can then use these values to target a specific condition. In the above example we're only targeting nodes of the type 'article' when they are viewed in the full view mode. Article teasers will be ignored.

You can then populate your $output variable however you'd like, including using HTML. You may need to adjust the #weight to move this output up and down within the display.

You can also use Drupal forms for your output:

$output = drupal_get_form('YOUR_FORM_CALLBACK_NAME');
$entity->content['custom_output'] = array(
  '#markup' => drupal_render($output),
  '#weight' => 99,
);

NOTE: Computed Field is a module that provides another way to achieve this. We prefer using a small amount of code within our own module, but Computed Field offers a lot of flexibility with a UI.

Nov 25 2014

Recently, I needed to refer back to a custom module I wrote at a previous job years ago and, as old code tends to do, it scared me. As far as I know, this module is still chugging along doing its job to this day and hasn’t had any issues. But for it to work for us at Mediacurrent, it needed some serious refactoring.

It did the job - why change it?

One of the biggest issues in a team environment is making sure your code is maintainable and clean. That means it needs to be readable and make sense to anyone you work with - whether that’s in your own organization or the larger Drupal community itself - a contrib module will be seen and worked on by many different developers around the world.

Community involvement is essential to Drupal, and a key factor in distributed teams working together is consistency. Following the same set of coding standards helps everyone focus on the same objectives - not the unimportant details of indentation and whitespace. You’ll want to download the Coder module and run it against your own module to see what it recommends:

drush coder-review module-name

Use hook_schema and hook_update_N

If your module stores data in the database that isn’t Drupal entities or nodes that need to be served up as content on their own, chances are your module will need its own database tables. The thing to do here is to declare the table schema you will need using hook_schema in the module’s .install file; the tables will then be created and removed automatically when the module is installed or uninstalled.
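
A minimal sketch of such a schema declaration (the `mymodule_log` table and its columns are hypothetical examples, not from the original post):

```php
/**
 * Implements hook_schema().
 *
 * Lives in mymodule.install; Drupal creates and drops the table
 * automatically on install/uninstall.
 */
function mymodule_schema() {
  $schema['mymodule_log'] = array(
    'description' => 'Stores log entries collected by mymodule.',
    'fields' => array(
      'id' => array(
        'type' => 'serial',
        'not null' => TRUE,
        'description' => 'Primary key.',
      ),
      'message' => array(
        'type' => 'varchar',
        'length' => 255,
        'not null' => TRUE,
        'default' => '',
        'description' => 'The logged message.',
      ),
      'created' => array(
        'type' => 'int',
        'not null' => TRUE,
        'default' => 0,
        'description' => 'Unix timestamp when the entry was created.',
      ),
    ),
    'primary key' => array('id'),
  );
  return $schema;
}
```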

If the schema needs to change at any point - adding, removing or modifying columns - it’s easy to implement these changes in an update hook. Using hook_update_N and bumping the version number, the changes can be performed by any other developer who pulls in your code simply by running:

drush updb

This eliminates a mess of different configurations in a distributed environment.
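
For example, a later schema change could be shipped as an update hook like the following sketch (table and column names are hypothetical, continuing the idea of a module-owned table):

```php
/**
 * Add a "severity" column to the mymodule_log table.
 *
 * Runs once on each environment the next time "drush updb" (or
 * update.php) is executed.
 */
function mymodule_update_7100() {
  db_add_field('mymodule_log', 'severity', array(
    'type' => 'int',
    'not null' => TRUE,
    'default' => 0,
    'description' => 'Severity level of the entry.',
  ));
}
```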

While we are on the topic of databases, it is also helpful to form queries using Drupal’s db_select, db_update, and db_insert calls. While it’s possible to safely query the database using db_query with placeholders, the query builders promote the best practice of never accidentally including unsanitized strings, as Drupal handles the query-building logic for you.
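
Here is what that looks like in practice, again using a hypothetical log table for illustration:

```php
// Fetch the ten most recent log messages with the query builder
// instead of a hand-written db_query() string.
$messages = db_select('mymodule_log', 'l')
  ->fields('l', array('id', 'message', 'created'))
  ->orderBy('created', 'DESC')
  ->range(0, 10)
  ->execute()
  ->fetchAllAssoc('id');

// Insert a new row; db_insert() builds the placeholders for us,
// so values are never concatenated into the SQL.
db_insert('mymodule_log')
  ->fields(array(
    'message' => 'Cache rebuilt',
    'created' => REQUEST_TIME,
  ))
  ->execute();
```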

Use what Drupal gives you

Drupal comes with a ton of built-in admin functionality for storing and displaying your module’s data and settings. Module settings pages can be defined with hook_menu, and by setting the page callback to drupal_get_form, all you need to do in the form callback is define and return the settings form elements that need to be stored.

These will be automatically stored in the variable table, which saves you from storing them yourself and keeps things standardized. A benefit here is that you can also define submit and validate handlers on the form (in case the values need to be integers, or need to be existing data fetched from an API, etc.). These settings can be retrieved and updated using variable_get and variable_set within your module. Additionally, they can be exported to a feature with the assistance of the Strongarm module.
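
A minimal sketch of this pattern, using system_settings_form() to handle the saving (the path, variable name, and permission are hypothetical examples):

```php
/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  $items['admin/config/system/mymodule'] = array(
    'title' => 'My module settings',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('mymodule_settings_form'),
    // Hypothetical permission; define it in hook_permission().
    'access arguments' => array('administer mymodule'),
  );
  return $items;
}

/**
 * Settings form callback. system_settings_form() saves each element
 * whose key matches a variable name, so no submit handler is needed.
 */
function mymodule_settings_form($form, &$form_state) {
  $form['mymodule_items_per_page'] = array(
    '#type' => 'textfield',
    '#title' => t('Items per page'),
    '#default_value' => variable_get('mymodule_items_per_page', 10),
    // Built-in validator keeps the value a positive integer.
    '#element_validate' => array('element_validate_integer_positive'),
  );
  return system_settings_form($form);
}
```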

Now that your module has settings screens, you’re going to want to implement some permissions to keep unauthorized users from modifying your settings. Typically, sites may just check whether the logged-in user is the admin user (user 1) and grant them access. However, to make your module scalable, you’ll want to account for scenarios where a team of admins or content editors will be utilizing it, and implement permissions specific to your module’s needs. Perhaps a simple “administer custom module” will do, or you may want separate permissions for users who can view data/settings and those who can modify them.

Using hook_permission, you can define one or more permissions that can be assigned to users of different roles in a given site. The permissions that the site admins choose to give can also be exported to features and thus stored in code.
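A sketch of hook_permission() defining a pair of module-specific permissions (the names and descriptions are placeholders):

```php
/**
 * Implements hook_permission().
 */
function mymodule_permission() {
  return array(
    'administer mymodule' => array(
      'title' => t('Administer My Module'),
      'description' => t('Change My Module settings.'),
      // Flag as security-sensitive on the permissions page.
      'restrict access' => TRUE,
    ),
    'view mymodule reports' => array(
      'title' => t('View My Module reports'),
    ),
  );
}
```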

Working with cron

Drupal has built-in cron functionality and a task queue for a reason - it’s something lots of modules need to make use of, so why not use what Drupal already has in place?

There may be a few reasons you might try to roll your own solution here - maybe your hosting provider doesn’t permit cron tasks to be configured, or maybe there are other issues preventing cron from running regularly - so check your site’s status report to be sure it is.

Your module can implement hook_cron() and integrate its repetitive tasks, such as updating content from a third-party source, generating reports, or some other task that needs to occur regularly. This keeps things consistent and gives you the benefit of seeing how your module interacts with any other cron tasks that run regularly.
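For example, a hook_cron() implementation might throttle a remote fetch (mymodule_fetch_remote_data() is a hypothetical helper in your module, not a Drupal API function):

```php
/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  // Only refresh remote data once an hour, even if cron runs more often.
  if (variable_get('mymodule_last_fetch', 0) < REQUEST_TIME - 3600) {
    // Hypothetical helper that pulls content from a third-party API.
    mymodule_fetch_remote_data();
    variable_set('mymodule_last_fetch', REQUEST_TIME);
  }
}
```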

Additional Resources

How To Do A Combined Name Search
Top Drupal 7 Modules: Winter 2014 Edition 

Nov 25 2014
Nov 25

What's new with Drupal 8?

Drupal 8.0.0 beta3 was released since the last core update, with over 200 fixes after the prior release. There are still over 100 known critical issues, which means more beta releases to come.

We take special care to resolve security and performance issues; both may require API changes. There are now defined criteria for which kinds of performance improvements qualify as critical issues (and therefore block the Drupal 8.0.0 release).

After 2.5 years of work, sponsored in part by MongoDB Inc. and led by Károly Négyesi (chx), Drupal 8 can now be installed entirely on MongoDB. This is the first time Drupal in its entirety has been successfully installed on a NoSQL database.

Check out the demo video

In almost three weeks, the Drupal Association and Wunderkraut are sponsoring a focused sprint in Ghent to help move core critical issues forward. However, you can help make things move faster anytime from anywhere, so read on!

Where's Drupal 8 at in terms of release?

Since November 8, we've fixed 29 critical issues and 26 major issues, and opened 19 criticals and 40 majors. That puts us overall at 117 release-blocking critical issues and 717 major issues.

How long does it take to fix a critical?

So how long will it take us to fix those critical issues and get to a Drupal 8 release candidate? The average time it takes to fix a critical issue varies enormously depending on the scope of the problem and the resources our contributors can devote to fixing each. Over the course of the Drupal 8 development cycle:

  • 30% of critical issues were fixed within one week of being filed,
  • 50% were fixed within one month,
  • 80% were fixed within six months,
  • and the remaining 20% took more than six months.

This means that one great way to help get Drupal 8 to a release faster is to accelerate some of those long-running issues. Look for the oldest critical issues, or the critical issues that have gone awhile with no updates, and help assess them. Is each still relevant? Is something blocking it? What's the next step that's needed? Is the issue summary up to date? As we both focus on our next milestone and bring these longer-running issues to a successful resolution, we'll be able to narrow our focus to incoming issues and get Drupal 8 done. :)

Current focus

The current top priority in Drupal 8 is to resolve issues that block a beta-to-beta upgrade path (critical issues tagged 'D8 upgrade path'). Supporting an upgrade path between betas is an important step for early adopters to begin building with Drupal 8 (and lending their resources to getting other critical issues done).

We also need core contributors to continue evaluating issues for the beta phase based on the beta changes policy. Note that Dreditor now includes a handy button for inserting a beta evaluation template in issue summaries! Thanks Cottser and Mark Carver for adding this feature so quickly.

The new Dreditor button to 'Insert beta evaluation'

Finally, keep an eye out for critical issues that are blocking other work. Add the 'blocker' issue tag to criticals that other issues are postponed on.

(Note that we're changing this section of the Drupal Core Updates to highlight ongoing goals rather than specific issues, because calls to action in these posts haven't resulted in additional momentum in highlighted issues. Instead, we'll be making brief, separate posts every week or two highlighting top-priority issues in critical areas. If you're a core subsystem maintainer or initiative lead and want to highlight a specific issue, we encourage you to submit your own brief announcement to g.d.o/core (requires access). Anyone in MAINTAINERS.txt is authorized to post to this group!)

How to get involved

If you're new to contributing to core, check out Core contribution mentoring hours. Twice per week, you can log into IRC and helpful Drupal core mentors will get you set up with answers to any of your questions, plus provide some useful issues to work on.

If you'd like to contribute to a particular Drupal 8 initiative or working group, see the regularly scheduled meetings on the Drupal 8 core calendar:

Google Calendar ID:

Note that ultimike is now running a virtual Migrate sprint, Wednesdays 19:00-21:00 EST. See the Migrate in core group page for information and updates.

If you are interested in really digging into a tough problem and helping resolve a stagnating release blocker, or if you are stuck on a critical currently, join the #drupal-contribute IRC channel during weekly critical issue office hours on Fridays at 12:00p PST. See chx's office hours reports for an idea of what we've done so far!

You can also help by sponsoring independent Drupal core development.

Notable Commits

  • Issue #2236855 by rachel_norfolk, stefank, ngocketit, lauriii, LewisNyman, alexpott, yuki77, rteijeiro | mortendk: Use CSS for file icons in file fields.
  • Issue #2364647 by chx, alexpott: Fixed [sechole] Remove blacklist mode from Filter:XSS.
  • Issue #2322509 by prics, cilefen, gaurav.goyal, harijari, Temoor: Replace all instances of node_load(), node_load_multiple(), entity_load('node') and entity_load_multiple('node') with static method calls.
  • Issue #287292 by almaudoh, mr.baileys, drewish, Berdir, znerol, boombatower, dawehner, jpetso, floretan: Add functionality to impersonate a user
  • Issue #2267453 by alexpott, dawehner, damiankloip: Views plugins do not store additional dependencies
  • Issue #2352155 by Wim Leers: Remove HtmlFragment/HtmlPage
  • Issue #2375225 by LewisNyman, davidhernandez: Add emma.maria as Bartik maintainer
  • Issue #2362987 by Wim Leers, Codenator, Pinolo: Remove hook_page_build() and hook_page_alter()
  • Issue #2339151 by EclipseGc, tim.plunkett, Gábor Hojtsy, effulgentsia: Conditions / context system does not allow for multiple configurable contexts, eg. language types
  • Issue #2376791 by dawehner, Wim Leers: Move all _content routing definitions to _controller
  • Issue #2378789 by Wim Leers: Views output cache is broken
  • Issue #2324055 by dawehner, cilefen, znerol: Split up the module manager into runtime information and extension information

You can also always check the Change records for Drupal core for the full list of Drupal 8 API changes from Drupal 7.

Drupal 8 Around the Interwebs

Drupal 8 in "Real Life"

Whew! That's a wrap!

Do you follow Drupal Planet with devotion, or keep a close eye on the Drupal event calendar, or git pull origin 8.0.x every morning without fail before your coffee? We're looking for more contributors to help compile these posts. You could either take a few hours once every six weeks or so to put together a whole post, or help with one section more regularly. Read more about how you can volunteer to help with these posts!

Nov 25 2014
Nov 25

DCO2014 Graduate Steve Fisher

Twelve weeks after it began, the first online class of Drupal Career Online (DCO) graduated yesterday, launching six new Drupalists on their way to a new career. With this class, DrupalEasy has now graduated 71 participants from Drupal Career online and in-person programs. Our graduates were taught the fundamentals of Drupal site-building, Git, introductions to module and theme development, site maintenance, distributions, and much more. Along the way, students were required to use the same communication tools as the rest of the community (including IRC), were provided with a community mentor, and were encouraged (pestered?!) to get involved in their local communities.

DCO 2014 Graduates

Joe Arsenault

  • Drupal.org username: jarsenx
  • IRC nick: jarsenx
  • Hometown: Baltimore, MD
  • Community mentor: Ben Hosmer (bhosmer)
  • Notes: Volunteered at Baltimore DrupalCamp, has several commits to Drupal 8

Rick Esser

  • Drupal.org username: rick_e
  • IRC nick: ricke
  • Hometown: Jacksonville, FL
  • Community mentor: Jay Epstein (jeppy64)
  • Notes: Has travelled all over the southeast United States attending Drupal events

Steve Fisher

  • Drupal.org username: fisherstudios
  • IRC nick: fisherstudios
  • Hometown: Raleigh, NC
  • Community mentor: Ryan Price (liberatr)
  • Notes: Emmy Award winning television broadcast engineer

Linda Green

  • Drupal.org username: lindagreen
  • IRC nick: LindaGreen
  • Hometown: Truckee, CA
  • Community mentor: Linda Cook (lscook)
  • Notes: Drupal site administrator, volunteered at BADCamp 2014.

Alan Lilly

  • Drupal.org username: pspot
  • IRC nick: pspot
  • Hometown: Orlando, FL
  • Community mentor: Michael Tripp (Bowevil)
  • Notes: Interested in real estate Drupal sites, professional actor

Bill Pollard

  • Drupal.org username: pollardw
  • IRC nick: pollard2
  • Hometown: Merritt Island, FL
  • Community mentor: Dennis Solis (densolis)
  • Notes: Background in unix administration and accounting

What's Next For the Graduates?

Now that the coursework is complete, our attention has turned to introducing as many of these graduates as possible to forward-thinking organizations interested in hosting a graduate or two as an intern. We've had some success in making introductions to students in several geographic areas, but are still looking for additional opportunities - especially from organizations who would be willing to work with remote interns.

As part of Drupal Career Online, we provide ongoing graduate mentoring and introductions between organizations and graduates.

DrupalEasy Career Training Grows

Our next Drupal Career Online class, accessible internationally, is scheduled to begin February 10, 2015, with classes taking place in the early evening (EST) for those East Coast students with full-time jobs. Applications are open until mid-January.

Drupal Career Online is comprehensive expert-led training unlike any other Drupal training that we are aware of. We believe in measured, holistic training that allows students the time to digest and practice. Our resource-rich curriculum results in stackable technical and community skills that build capabilities and confidence. DrupalEasy career training is not a quick-turn-around firehose-style bootcamp, it is not a self-taught program, and it is not a proctored team-learning experience. Sessions are held three times per week - twice for classroom training, once for instructor-led, self-paced lab hours. All of our sessions are led by a live, online expert Drupal instructor via GoToMeeting. In addition to the instructor, students have access to PDF handouts and reference documents, and a comprehensive library of screencasts.

We are also honored to announce that, starting December 1, DrupalEasy and Mike Anello will present a reformatted version of our Drupal Career Starter Program to deliver the technical training for Acquia U. This is an amazing endorsement of our program, and we are thrilled to have been selected by Acquia.


Nov 25 2014
Nov 25

Load testing is an important aspect of any project: it provides insight into how your site and infrastructure will react under load. While critical to the launch process of any project, load testing is also useful as part of your standard testing procedure. It can help you locate bottlenecks and performance regressions introduced as a site evolves after launch (due to code and infrastructure changes).

There are many different methodologies and applications for performing load tests, but generally it involves bombarding various pages on the site with enough traffic to start causing degradation. From there, work can be done to isolate bottlenecks in order to improve performance. By performing periodic load tests, you are much more likely to catch something while it’s still a minor performance issue, not after it becomes a major problem.

Different Types of Load Tests

There are a number of different load testing configurations (sometimes referred to as test plans) which can be used individually or in conjunction to provide insight into site performance. Generally, we end up running three different types of tests:

  • Baseline tests – These tests are run with a relatively low amount of traffic in order to obtain some baseline information about site performance. These are useful for tracking general user-facing performance (time to first byte, time for a full page load), and to compare against a higher traffic load test result, as well as to track regressions in the standard case.
  • High traffic tests – Tests with relatively higher traffic are run in order to see when site performance begins to degrade as traffic increases. These tests can give you an idea of how many requests a site can handle before performance deteriorates to an unacceptable degree. At the same time, these types of tests are very good for uncovering bottlenecks in a site; many times issues with underlying services or service integration require a higher load in order to trigger. Generally speaking, this type of load test is the one most frequently run.
  • Targeted tests – Most tests are designed to cover all different request types and page types for a site. Targeted tests take a different approach; they are designed to test one or several specific features or user paths. For example, if you are working to improve performance of a certain page type on your site, you could run a load test which only focuses on that particular area.

Depending on the load testing tool in use, these tests could all be based on the same test plan by tweaking the amount of traffic generated and/or by enabling or disabling certain parts of the test plan in order to focus testing on only a subset of the site.

Creating a Valid Test

One of the most difficult aspects of load testing is creating a test that represents real site traffic. If the test diverges greatly from real traffic patterns, the performance results and bottlenecks found during the test may not help you improve real world performance in a meaningful way. By starting with a test that as closely as possible matches real world traffic (or expected traffic, if you are looking at a new/growing site), you’ll more reliably uncover performance issues on your site. Fine tuning a load test can happen over time in order to keep it in line with shifting traffic patterns and new features added to a site. Things to take under consideration when creating and reviewing a load test plan include:

  • User browsing patterns. What pages are visited more often? How long do users spend on different page types? What are the most common entry points into the site?
  • Logged in traffic. What percentage of traffic is logged in versus anonymous users? Do logged in users visit different pages than anonymous users?
  • Amount of content. When creating a new site, do you have enough content on the site to perform a valid test? If not, then consider creating content programmatically before running a test. The Devel module is great for this purpose (among other things).

When to Test

There are many ways to approach load testing for a given website and infrastructure. How frequently tests are run is entirely up to you. Some sites may run a load test manually once per month, while others may run tests multiple times per day. Whatever you decide, it’s important to run a baseline test occasionally to understand what “normal” performance looks like. Only once you have baseline numbers for user-facing and server-side performance during a load test can you define what is “good” or “bad” in a particular test result.

Continuous Integration

Testing can be tied into your development process using a tool such as Jenkins. Tests could be set up to run each time a new release is pushed to the staging environment. Or, if you have sufficient resources, tests could even be run each time new code is pushed to the site’s code repository.

Periodic Testing

For those who don’t want to deal with the overhead of testing for each new code push, an alternative approach is to test on some pre-determined schedule. This could be daily, weekly, or even monthly. The more frequently tests are run, the easier it will be to directly link a change in performance to a specific change on the site. If you go too long between tests, it can become much harder to pinpoint the cause of a performance problem.

Manual Targeted Testing

In addition to the above approaches, it can be useful to run manual tests occasionally, especially if you are trying to test a specific aspect of the site with a targeted test plan. For example, if you are planning a media event which will drive a lot of new users to your site, it might be beneficial to run targeted tests against the main site entry points and features – such as user registration – which may receive higher than normal traffic.

Interpreting Test Results

One problem that many people encounter with load testing is that they are bombarded with too much data in the results, and it’s not always clear what information is important. In most situations, you will at least want to examine:

  • Time to first byte – also referred to as latency in some load testing applications. This is quite important as it usually represents the actual Drupal execution time (or caching time), whereas other data points are less focused.
  • Full page load time – also referred to as response time in some applications. The difference between first byte and this is mainly the size of the page, network path, and other issues.
  • Requests per second – The number of page requests per second is an important statistic to look at when planning for traffic loads and infrastructure scaling.
  • Error responses – These are another very important data point, and also the most likely to be ignored. A failing site will often perform very well in a load test; however, that’s obviously not very useful information and misrepresents actual site performance.

Depending on what the goals are for your load test, you may also be looking at additional information such as bytes transferred or throughput for a particular page.

No matter what data you choose to track, it becomes even more valuable if you are able to track it over time. By comparing multiple test results, you’ll get a much better idea of your site’s performance as well as gain the ability to see trends in the performance data. It can be very useful to observe things like page load time to see how it varies over time, or how it might increase or decrease in response to a specific code or infrastructure change.

Another important consideration is to understand how requests from load testing software differ from requests done by a user using a standard web browser. For example, JMeter does not execute javascript, and by default it will not download linked assets on a page. In general, those differences are acceptable as long as they are understood. However, it can be worthwhile to perform some sort of additional testing with a tool that more accurately represents a browser. These tools are not always capable of high-traffic load tests, but can at least be used to establish a baseline.

Server Monitoring During Load Tests

When you run a load test, you’ll be presented with a list of results which focus entirely on client-side performance, since that is all that the load testing application can see. It’s important to monitor servers and services during load test runs in order to get the most from your tests and to be able to track down infrastructure bottlenecks. Of course, this sort of monitoring could be left to the automated systems, but it can also be useful to manually watch the servers during test runs to see “live” how things are affected and be able to adjust what you are monitoring. Different sites will suffer from completely different infrastructure bottlenecks, so it’s best to keep an eye on as much data as possible, but for starters we recommend:

  • Web servers – Watch overall system load, memory usage, swap usage, network traffic, and disk I/O. Also keep track of things like Apache process count to see if you are approaching (or hitting!) the MaxClients setting. As always, don’t forget to watch logs for Apache to see if any errors are being reported.
  • Reverse proxies and other caches like memcached – Watch load, network traffic, and caching statistics. Is your cache hit-rate higher or lower than normal? Try to understand why that might be (e.g. a test plan which only hits a very small subset of the site’s pages would likely cause higher cache hit-rates). Watch memory usage and evictions to be sure that the cache isn’t becoming overfilled and forced to delete items before they’ve expired.
  • Database servers – Watch the server load and connection count. Watch the MySQL error log for any unusual errors. Ensure that the MySQL slow query log is enabled, and watch it for potential query improvements that can be made (see e.g. pt-query-digest in the Percona Toolkit). You can also watch MySQL statistics directly or with tools such as mysqlreport to watch things like InnoDB buffer usage, lock wait times, and query cache usage. Watch the MySQL process list to see if there are certain queries running frequently or causing waits for other queries.

Where to Test

There are a number of options to determine which environment to run load tests against: development, staging, production, or potentially some environment dedicated to load testing. In an ideal world, tests should always be run against the production environment to obtain the most valid data possible. However, site users (not to mention investors) tend to dislike the website becoming unusably slow due to a load test. While some people may be able to run tests against production, even if it means scheduling the test for three a.m. or some other low-traffic time, others won’t have that option. Our advice is to take into account what the goals and requirements are for your load testing and, based on that, run the test against an appropriate environment.

One other consideration when planning which environment to run a load test against is whether or not the test will be creating and/or deleting data on the site. In general, testing things like user comments against a production site can be very difficult to do in a way which doesn’t interfere with your users.

As a general rule, the closer your staging environment mimics production, the more useful it will be. In the case of load testing, staging can be a very fitting place to run load tests. While your staging servers may not be as numerous and high powered as in the production environment, you can still easily track down many performance issues and infrastructure bottlenecks by running tests against staging, if it is similar to your production environment.

Another option is to run tests against a development environment. This is especially valid for tests integrated with CI. While the performance numbers here will, expectedly, differ from production, it’s still a great way to test for performance changes when code changes occur.

When running tests against an environment that is not your production environment, be aware that any user-facing performance numbers should be taken with a grain of salt. That is, performance will likely be slower than your production environment, but the numbers can still be useful when comparing test results over time.

In cases where test or staging environments are under heavy use, it may not be possible to run load tests against those environments. For those situations, usually the only alternative is to have a dedicated “load testing” environment used specifically for load tests – or potentially used for other automated tests such as acceptance testing. As always, the closer this environment can mimic production, the more valid your test results will be. For those infrastructures that run mostly in the cloud, this environment might be spun up on demand when tests need to be run, but otherwise left offline.

Some sites might insist on running load tests against production in order to have “real” numbers. While this can make sense in certain situations, it’s rare that a dedicated staging environment wouldn’t be sufficient to get the data required. Generally, our recommendation would be to only run low-traffic tests against production in order to obtain user-facing performance information. If you are trying to put the site under a heavy load in order to catch performance bottlenecks, then doing so in a staging environment should yield useful results.

Image: ©IStockphoto.com/craftvision

This article text is excerpted from High Performance Drupal published by O’Reilly Media, Inc., 2013, ISBN: 978-1-4493-9261-1. You can purchase the book from your local retailer, or directly from O’Reilly Media’s online store: http://wdog.it/3/2/hpd

Nov 25 2014
Nov 25

DrupalCon is an important community event that brings a diverse group of Drupalers together under one roof to share knowledge, grow skills, and strengthen community bonds. As an organization, we find it very rewarding to facilitate these experiences around the world.

In the past, our attendance data has shown that DrupalCon is primarily a regional event, attracting attendees from nearby countries and drawing in some great international speakers, trainers, and community leaders. Knowing this, the Drupal Association is committed to hosting DrupalCon in regions other than just North America and Europe.

In 2015, the third DrupalCon will be in Bogota, Colombia. In 2016, we want to host DrupalCon in India, and this blog explains why. Can you tell us your thoughts on this host country and which city you think should be the next DrupalCon location?

How do you pick locations?

The Drupal Association Board sets the staff’s direction with vision and strategy, and they asked staff to come up with a list of possible countries outside of North America and Europe that are viable DrupalCon locations. They provided us with selection criteria supporting that strategy, which included, in no particular order:

  • Popularity: people want to visit the host location to sightsee.
  • Strong local community: size of the camp in the proposed city and number of camps in the country.
  • Size of the Drupal business community: number of Drupal businesses within that country and region.
  • Ease of doing business with the country: it is easy to set up a financial entity so we can collect ticket sales revenue and pay vendors in local currency.
  • Local support: the community reached out to the Association, requesting a DrupalCon and offering to help produce it.
  • Inexpensive travel: this gives insight into the average cost of lodging in the city.
  • Visa: it is easy to get a visa, or one is not required to enter the country.
  • Strong business environment: the number of Fortune 500 companies in the region.
  • Community survey: where does the community want to go?

After researching locations and ranking countries and cities based on the above criteria, several places bubbled to the top. Latin America, and specifically Bogota, Colombia, was a clear leader, which is why we are hosting DrupalCon 2015 there. India ranked second among the other possible locations.

What makes India a good host country?

India has an impressive number of Drupal leaders, contributors, businesses, and end users. The country drives the second-highest amount of traffic to Drupal.org (the United States had 1M sessions last month, while India had 450,000). A DrupalCon in India can highlight the country’s Drupal strength to the global community while showing the local community how they can contribute more back to the project.

More specifically, India ranked high because it has several mature camps and many local communities throughout the country. Producing DrupalCon outside of North America and Europe requires a strong partnership with community leaders who can help us with logistics. Plus, a DrupalCon needs power in numbers to make it financially worthwhile and effective, so a large community base is key.

In India, technology conference tickets are usually very inexpensive, so a DrupalCon India ticket will need to cost much less than the standard DrupalCon ticket price. Because of the cost of putting on conferences, having low ticket prices means the event will be funded through sponsorships. India has a strong Drupal business community, and many businesses are already pledging sponsorship to make the event financially viable.

Which host city is best for you?

We would like to thank Rahul Dewan, Jacob Singh, Mayank Chadha, Rachit Gupta, Ani Gupta, Hussain Abbas, Chakrapani R, Sunit Gala, Venky Goteti, Shyamala Rajaram, and other community leaders for determining the financial feasibility of hosting a 1,000-person DrupalCon in their city. After several discussions, we decided Bangalore, Mumbai, and Delhi are our three most viable host cities. Now we need your input in picking the city where we will host our first DrupalCon in India. Below is a summary of their key findings.

Which city should host DrupalCon, and why did you choose that city? (e.g. travel distance, travel cost, ideal hotel cost, preferred venue(s), community size). Please use the comments to tell us.

Venue recommendations below are based on 1,000 attendees, 1 keynote room, 6 session rooms, and 4 BOF rooms.

Bangalore

  • Venue recommendation:
    1. Manpho Convention Centre
    2. The Lalit Bangalore
    3. NIMHANS Convention Centre (OSI Days happens here)
  • Catering (lunch and coffee/tea break per day): Rs 500 per person per day ($8)
  • Hotel room cost: 3-star hotels 2 km from the Manpho venue, Rs 3,500 per night ($57); The Lalit (5-star), Rs 7,500 + tax per night ($121)
  • Travel to city:
    From Delhi: plane Rs 9,275 RT ($150)
    From Bombay: plane Rs 7,420 ($120); bus (20 hrs) Rs 3,090 ($50); train (20 hrs) Rs 3,710 ($60)
    From Hyderabad: mostly bus and train (overnight journey, 8 hrs); plane Rs 7,730 ($125); bus Rs 3,710 ($60); train Rs 3,710 ($60)
  • Camps: none - meetups, mini-camps, and trainings only

Mumbai

  • Venue recommendation:
    1. Grand Hyatt Mumbai
    2. Renaissance Powai
    3. Film Studios (under consideration)
  • Catering (lunch and coffee/tea break per day): Rs 2,460 - 3,075 per person per day ($40 - $50)
  • Hotel room cost: Grand Hyatt, Rs 9,275 per night ($150); Renaissance Powai, Rs 11,130 per night ($180); 50+ options around the Grand Hyatt for all price points, Rs 1,545 - 7,420 per night ($25 - $120)
  • Travel to city:
    From Delhi: plane (2 hrs) Rs 8,040 ($130); train (16 hrs) Rs 4,950 ($80)
    From Bangalore: plane (1 hr 40) Rs 7,420 ($120); train (24 hrs) Rs 3,710 ($60)
    From Hyderabad: plane (1 hr 10) Rs 5,570 ($90); train (10 hrs) Rs 4,330 ($70)
  • Camps: 500+ attendees

Delhi

  • Venue recommendation:
    1. The Ashok, New Delhi
    2. Jaypee Greens Golf & Spa Resort, Greater Noida
    3. Hyatt Regency, Gurgaon
  • Catering (lunch and coffee/tea break per day, per venue above): 1. Rs 2,460 per person per day, tax inclusive ($40); 2. Rs 2,100 per person per day, tax inclusive ($35); 3. Rs 1,400 per person per day, tax inclusive ($25)
  • Hotel room cost (per venue above): 1. Rs 8,000 per night ($120) for a 2-seater room, Rs 11,000 per night ($180) for a 3-seater room; 2. Rs 9,000 per night ($140) 2-seater; 3. Rs 8,000 per night ($120) 2-seater; other 3-star options ~5 km from the venue, Rs 2,500 - 4,000 ($40 - $80)
  • Travel to city:
    From Bombay: plane (2 hrs) Rs 7,730 ($125); train Rs 2,475 ($40)
    From Bangalore: plane (2 hrs 30) Rs 9,275 ($150)
    From Hyderabad: plane (2 hrs) Rs 10,820 ($175)
  • Camps: 350+ attendees (consistently for the past 3 years)

    Indian flag image from Wikipedia.

    DrupalCamp India image credit to Dries' blog.

    Nov 25 2014
    Nov 25

    Everyone is happy to receive a sign of appreciation, right? So were we when we received a message from the American ranking service Clutch.co, which regularly analyzes the market worldwide to compile lists of the best companies in various fields. Not long ago they published a new list based on freshly conducted research, and InternetDevels was included in it!

    "I am really happy that our company and the services we provide received such a nice mark of appreciation and respect from our customers and partners. We will continue working to improve our skills and experience even more!" remarked Viktor Levandovsky, InternetDevels' CEO.

    Clutch.co evaluated our company on several criteria: web development, Drupal development, and market presence. Our references and experience also received a good assessment. Taking all of this into consideration, our company was included in the new list of the best Drupal developers in the world!

    We do what we love and we do it well. We started developing web applications on Drupal over 7 years ago and let the technology shape us. We have successfully delivered more than 330 projects and devoted our passion to each of them. Our entire team works hard to deliver projects of the best quality that satisfy our customers.

    About Clutch: Clutch is a Washington, DC-based research firm that identifies top services firms that deliver results for their clients. The Clutch methodology is an innovative research process melding the best of traditional B2B research and newer consumer review services. Clutch utilizes a proprietary framework, the Leaders Matrix, which maps firms' focus areas and their ability to deliver on client expectations. To date, Clutch has researched and reviewed 500+ companies spanning 50+ markets.

    Nov 25 2014
    Nov 25

    There are a plethora of modules out there that can help provide this functionality. For example:

    Amongst others..

    These are great modules and will likely do what you need, if not more. An alternative lightweight option is to write a small ctools plugin that provides just the functionality you need.

    Our site-building process has moved more and more towards Panels, panes, and Display Suite. If you are using Panels and ctools, a quick, exportable, and configurable solution for showing tweets on your site can be found below.

    First we need a boilerplate module to tell Drupal where to find our plugins. This code goes in your mymodule.module file — and of course you'll also need to write a .info file, but you know that already!

    Next up, the nuts and bolts of our plugin. This code goes into your mymodule.inc file under mymodule/plugins/content_types, the directory declared by the hook_ctools_plugin_directory implementation above.
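
    The original code snippets did not survive publication, so here is a rough sketch of what they would look like. The module name (mymodule) and the pane details (title, render callback body) are illustrative assumptions, not the author's exact code.

    ```php
    <?php
    // mymodule.module
    // Tell ctools to look in mymodule/plugins/<plugin-type> for our plugins.
    function mymodule_ctools_plugin_directory($owner, $plugin_type) {
      if ($owner == 'ctools' && $plugin_type == 'content_types') {
        return 'plugins/' . $plugin_type;
      }
    }
    ```

    And a minimal content_type plugin, placed in mymodule/plugins/content_types/mymodule.inc:

    ```php
    <?php
    // Plugin definition: a simple content pane (details are illustrative).
    $plugin = array(
      'title' => t('Tweets'),
      'description' => t('Shows recent tweets.'),
      'category' => t('Custom'),
      'single' => TRUE,
      'render callback' => 'mymodule_tweets_content_type_render',
    );

    function mymodule_tweets_content_type_render($subtype, $conf, $panel_args, $context) {
      $block = new stdClass();
      $block->title = t('Recent tweets');
      // Fetch and theme the tweets here.
      $block->content = '';
      return $block;
    }
    ```

    After clearing caches, the pane appears in the Panels "Add content" dialog under the Custom category.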

    Nov 25 2014
    Nov 25

    The concept of official initiatives came out of lessons learned from the Drupal 7 development. We learned a lot from that and in a recent blog post about Drupal initiative leads, I recognized that we need to evolve our tools, our processes, and our organizational design. Others like Nathaniel Catchpole, Larry Garfield and Gábor Hojtsy have shared some of their thoughts already. One of the things I'm most proud of is that the Drupal community is always looking to improve and reinvent itself. Evolving is an important part of our culture. Each time it will get better, but still won't be perfect.

    For me, one of the biggest take-aways (but not the only one) is that for an initiative to succeed, it needs to be supported by a team. An initiative needs to carry out a technical vision, plan the work, communicate with all stakeholders, mobilize volunteers, raise funding, organize sprints, and more. It can easily be more than one person can handle -- especially if it isn't your full-time job or if your initiative is complex.

    More specifically, we have learned that the most successful initiatives appear to be run by teams that are self-managed; the team members collaborate in the development of the initiative, but also share both managerial and operational responsibilities like planning, coordinating, communicating, sprint organizing and more.

    Because self-managed teams are both responsible for their outcomes and in control of their decision-making process, members of a self-managing team are usually more motivated than members of traditional hierarchical teams. This independence and greater responsibility are important in volunteer communities. Self-managed teams also build and maintain institutional knowledge. The outcome of their work is also more easily accepted by other stakeholders (like core committers) because they have already built a lot of consensus.

    If I were to be an initiative lead, I'd feel strongly about building my own team rather than being handed a team. My initial assumption was that each initiative lead would build his/her own team. In hindsight, that was a mistake. Team building is not easy. It requires a time investment that can seem to compete with technical priorities. This is an important lesson and something we can do better going forward. Before making an initiative official, we have to make sure that each initiative has a good team and the support to be successful -- either we can help create a team, provide more coaching or formal training around team building, or we shouldn't designate the initiative official until such a team has coalesced.

    Nov 25 2014
    Nov 25

    Following on from Earl Miles’ request that people help fund the Views In Core initiative for Drupal 8, Code Enigma is proud to announce a Views In Core sprint, in Paris, straight after DrupalCon Munich. We have decided to sponsor this piece of work, something we believe is going to be a huge positive for Drupal 8, and pay for a venue, accommodation and some travel for the Views In Core team to meet in Paris, 26th-28th August, and push the project along. This is over and above any funding Earl is raising himself for the effort.

    The event will take place in the historic Le Marais district of Paris, central, close to the river, Notre Dame and various other sites, so if you fancy a few days in Paris you’ll be well placed for some sight-seeing after. And if you’re in Paris anyway, well, it’s easy to get there and we’d love to see you.

    The sprint will be led and managed by Earl, and a solid team of Views and core developers is able to attend. There will also be three or four developers from Code Enigma helping out. Obviously, space and capacity to manage are limited, but if you would like to attend and do some sprinting with Earl and the team, there are still places. Please do contact us and we'll put you in touch with Earl.

    Nov 25 2014
    Nov 25

    When preparing an email newsletter, one time-consuming part is gathering together all the content that is needed. In my experience, virtually all of the content already exists elsewhere, such as in the local CMS, in CiviCRM, on a blog, or in some other online source. So I was thinking about how I could make this process easier. What I did: I created mail merge tokens for CiviCRM that autofill a list of recent blog posts, stories, or any other type of CMS content. The end user sees a list of tokens, one for each content type, each term/category, and each date range, such as "Content of type 'blog' changed in the last 7 days". What is particularly powerful about this approach is that if you are also using a CMS aggregator (such as the aggregator module in Drupal core), then virtually any external RSS feed is turned into CMS content, which is now available as a CiviCRM token.

    Some examples of how this new extension may help your organization:

    - Your staff posts new content of type "story" each week. Your monthly newsletter editor can use the new token for "Content of type 'story' changed in the last 1 month" to save time preparing the newsletter.

      - A national organization you are affiliated with has a number of blogs that they host. Your local organization would like to include recent blog posts from the national organization in the local member newsletter.  Your local webmaster previously configured the aggregator module to pull in those external blogs into your CMS. Your monthly newsletter editor can use the new token for "Content of type 'feed item' changed in the last 1 month" to save time preparing the newsletter.

    - Any other situation where there is existing content that you want to include in your email or PDF.

    This new extension "Content Tokens" is published in the CiviCRM extensions area https://civicrm.org/extensions/content-tokens 

    This new extension is designed in the same style as the "Fancy Token" extension that provides tokens for upcoming events, contribution pages, profiles, and WebForms. 

    -Sarah     sarah@pogstone.com 

    Nov 25 2014
    Nov 25

    It's all about perspective when it comes to launching. We always think we need something more, that missing set of instructions or special ingredient that will propel us to finish. That special person, few moments of sanity, or foolproof plan.

    But at the end of the day, while those things may encourage us to make the choice to take action or bring balance and clarity, they never change the fact that we won't make it anywhere until we start the journey. We could be waiting at the bus stop thinking it's the only way we'll get there, when the whole time we could have been on the train.

    I experienced this myself this past summer, when I showed my first documentary short since leaving film school over a decade ago. I'd had cameras and editing equipment before; I had the training and even the subject matter.

    But I found myself only moving furiously when I felt the passion and impetus which often burned out by the end of the weekend. Looking back I planned too much or too little, always leaving an excuse available for why my project failed.

    Instead of making it actionable I made it mythical, and when it didn't happen I said it just wasn't possible without _________. When I look at our e-book launches they have been much the same, mythical things that will happen "someday" a day that never comes because I never actually do.

    At the end of the day launching isn't about myths, and dreams, and should haves. It's about what's next, will this work, and knowing when to say "this is enough let's ship." It's about the actionable tangible day to day moments and choices that get the work done.

    I suspect much of life is like this, and all we need is to just accept this reality and do the work that will propel us in the directions we want to go. No special maps and foolproof plans needed.

    What do you do day to day to go from mythical hopes to actionable realities? Tweet us @shomeya. We'd love to hear about it! Join our ebook mailing lists if you want to be the first to know about our coming soon (for reals) e-books on how to Model Your Data with Drupal, and Refresh Your Drupal Site without a designer.

    Nov 24 2014
    Nov 24

    Today I got my feet wet with Drupal 8 Configuration Management. For those who are new to this excellent feature in Drupal 8, you should read the documentation at drupal.org (Managing configuration in Drupal 8) or watch this DrupalCon Amsterdam video. This article assumes you are familiar with Drush and Drush aliases.

    I worked with a very basic workflow using a development site and an acceptance site. Both sites are under revision control, where the complete webroot (core + modules, but not settings.php) is placed in git. For gitignore I used a copy of the example.gitignore which you'll find in Drupal's root directory.

    I used Drush 7 to import and export the configuration: 'export' transfers a site's configuration from Drupal to file, and 'import' transfers it in the reverse direction. Using drush @dev config-export you export the configuration from the development site to the sites/default/config_*/staging directory. The staging directory now holds many *.yml files that each contain the configuration of an individual section. I've chosen to use git to transfer these files to the acceptance site. At the acceptance site, drush @acc config-import is used to transfer the configuration from the file system into the site.

    Make sure that the acceptance site is a copy of the development site. For example using drush @dev archive-dump you can make a copy of both the files and database. With this you can create a copy of the site using drush archive-restore.

    I made these changes to the .gitignore file to allow the staging directory to be added to git:

    # Ignore the system files, except for the config staging sub directory.
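
    The exact lines were lost in publication; below is a sketch of what such a change might look like, assuming the staging directory lives at sites/default/files/config_*/staging (the pattern re-includes the config directory after its parent's contents are ignored, which gitignore's negation rules require):

    ```
    # Ignore user-uploaded files, but keep the config staging sub-directory.
    sites/*/files/*
    !sites/*/files/config_*
    sites/*/files/config_*/*
    !sites/*/files/config_*/staging
    ```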

    I also noticed that drush @dev config-export removes the .htaccess file from the staging directory while exporting the YAML files, so I created this Drush issue.

    Nov 24 2014
    Nov 24

    Several weeks ago, we issued an RFP for the Drupal.org content strategy project. We got a number of great submissions, and over the next couple of weeks the Drupal Association staff and the Drupal.org Content Working Group members reviewed proposals and interviewed potential vendors.

    Today we are happy to announce that we’ve selected a vendor for the content strategy project: Forum One, an open-source digital agency.

    Their proposal met all our project requirements and outlined a solid plan for how we can make this project happen. During the interviews Forum One impressed us with their professionalism, passion for content strategy, their extensive experience working on large content strategy projects, and their deep knowledge of the Drupal community and Drupal.org, the website.

    We believe that together our staff and the Forum One team will make this project a success. We can't wait to start working with them to improve Drupal.org content for all our varied audiences.

    Nov 24 2014
    Nov 24

    What is Drush?

    If you’re asking that question right now then congratulations! You are one of the lucky people who will have your life changed today! Cancel everything and read up on Drush, the command line bridge to Drupal.

    Everybody knows about Drush, ya Dingus!

    That’s more like it. Who doesn’t love Drush, right? Right!

    But more and more, I find myself seeing people reinventing things that Drush already handles because they just don’t know all that Drush can do. It’s getting frustrating, and I want to fix that.

    First, The Basics

    Stuff everybody knows

    Here are a few Drush commands that most people know and love, just to get them out of the way:

    • drush updb: run pending database updates
    • drush cc all: clear all caches
    • drush dl [module]: download a module
    • drush en [module]: enable a module

    Stuff everybody knows (Features Edition™)

    And if you’re using Features, you’re probably familiar with:

    • drush fra: revert all Features
    • drush fe: export a new or updated Feature with a new component
    • drush fu [feature]: update the Feature with updated site config
    • drush fr [feature]: revert the site’s config to the current state of the Feature


    For a lot of the fun stuff, you’ll have to understand Drush aliases. If you don’t, here’s the gist: Drush aliases give you an easy way to run Drush commands on remote Drupal sites, as opposed to only being able to use it on your local sites. If you’re constantly SSH’ing into different environments just to run a couple quick commands, you need to stop doing that.

    There’s lots of documentation about Drush aliases and how to create your own, but most of the docs lack notes on some of the lesser known awesome things you can do with aliases. Keep reading, sailor.
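
    For illustration only (the host names, paths, and users here are hypothetical), an alias file such as ~/.drush/aliases.drushrc.php might define the @foo-local and @foo-dev aliases used below:

    ```php
    <?php
    // ~/.drush/aliases.drushrc.php
    // Hypothetical alias definitions; adjust roots, URIs, and hosts for your own sites.
    $aliases['foo-local'] = array(
      'root' => '/var/www/foo/drupal', // Local Drupal docroot.
      'uri'  => 'foo.local',
    );
    $aliases['foo-dev'] = array(
      'root' => '/var/www/html/foo',   // Docroot on the remote dev server.
      'uri'  => 'dev.foo.example.com',
      'remote-host' => 'dev.foo.example.com',
      'remote-user' => 'deploy',
    );
    ```

    With the remote-host and remote-user keys set, any Drush command prefixed with @foo-dev runs over SSH on the dev server instead of locally.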

    Well, one more thing. This is probably a good time to mention a couple quick commands.

    Firstly, let’s run an arbitrary shell command on our dev environment.

    drush @foo-dev exec echo $SOMETHING

    Or maybe we should just go ahead and SSH in to do something a little more complex.

    drush @foo-dev ssh

    Or maybe we need to do a bunch of aliased commands, but we want to do it without SSH’ing in (because the commands require local files or something). We can make a Drush alias persist until we tell it to stop by running:

    drush site-set @foo-dev

    And then when we’re done doing what we do, we can just run it again without the “@foo-dev” argument to unset it.

    Now, keep reading, sailor.

    Syncing Ship

    (Warning: these headlines are going to get worse and worse)

    One of the most common things to do with Drush aliases is to sync stuff from one alias to another.

    For example, want to sync the dev site database down into your local?

    drush sql-sync @foo-dev @foo-local

    How about files? Sync ‘em!

    drush rsync @foo-dev:%files @foo-local:%files

    Or maybe some unwashed sent you a DB dump and you have to import it the old fashioned way?

    cat ~/path/to/file.sql | drush sql-cli

    Sometimes you want to drop your entire database before importing, to make sure you don’t get any tables left behind from your old install that aren’t supposed to be there. That’s as easy as:

    drush sql-drop

    Sometimes, it’s useful to be able to automate running arbitrary SQL commands on multiple environments, and that’s pretty easy too. Say for example that you quickly want to get the username for uid 1 on the prod environment (the “drush user-information” command would be much better for this, but shut up).

    drush @foo-prod sqlq 'select name from users where uid = 1'

    That one is also good for automation, like if you want to write a quick script that changes the username for uid 1 on all environments.

    Drupal Without Drupal

    It’s often useful to run one-off arbitrary code within the context of Drupal, without having to actually put it in the codebase somewhere. This is typically done one of two ways:

    If it’s just a short one-liner, then there’s the ever-useful “php-eval” (aka “ev”) command. For example, let’s inspect a node object.

    drush @foo-dev php-eval 'print_r(node_load(123));'

    Or if it’s a longer one, then we can just throw our code into a PHP file, and run it using:

    drush php-script filename.php

    Reports Cohorts

    Drush is really good at getting us information from Drupal without waiting for a full page load.

    How many times have you navigated to the Watchdog page and sat through page load after page load while you went through the pagination and added filtering and blah blah blah to find an error message? Stop doing that! Do this instead:

    drush watchdog-show

    There are a lot of useful options for watchdog-show, such as:

    • --tail (continuously show new messages)
    • --full (show the full output, with all of the fields, instead of the summarized version)
    • --severity (such as "--severity=error")
    • --count (show more than the default 10, such as "--count=100")

    And I’d bet you’re familiar with “drush vget” to get variables, but did you know you can pass “--format=whatever” to get the results formatted as JSON or CSV or YAML or a bunch of other things, for easy scripting?

    Another one of my favorites is this charm, which basically prints out the stuff you see on the Status Report page in Drupal. It’s nice for sanity checking before pushing releases live.

    drush status-report

    And then there’s this guy, which prints out a bunch of useful info about the current installation, such as DB info, path to PHP executable and .ini file, Drupal version, Drupal root, etc. It’s a nice first step when debugging a broken install.

    drush status

    And for those times when you need to edit a config file (php.ini, or settings.php, or an alias file, or .htaccess, etc.), you can run this to let you choose which of those files to edit and it’ll open it up in an editor for you:

    drush config

    Using Users

    Drush is nothing short of a miracle when it comes to user management.

    First of all, there’s the ever-annoying task of logging in as this user or that user. You usually don’t know the password, or maybe you’re just too lazy to type it. Run this to open up your browser with a one-time login link so you can skip all of that malarky:

    drush user-login name-or-uid

    Or, if you’re slightly less lazy, and just want to change the password to something so that you can log in the old fashioned way:

    drush user-password name-or-uid --password=test1234

    Then there’s the “fun” process of adding a user and filling out the form. Skip that:

    drush user-create person123 --mail="what@isthis.com" --password="letmein"

    Once that’s done, you probably want to give that new user some roles. For role stuff, you have this:

    drush user-add-role "user editor" person123
    drush user-remove-role "user editor" person123

    But watch out! The role you need to add doesn’t exist yet! Let’s add it, and give it some permissions.

    drush role-create 'user editor'
    drush role-add-perm 'user editor' 'administer users'

    If you just need to show information about a user, such as email address, roles, UID, etc., try this. I’m embarrassed to say that I’ve been using raw SQL for this for years.

    drush user-information name-or-uid

    Fields of Dreams

    One of the most under-used things that Drush gives you is field management tools. I’m going to be lame here and just copy and paste the docs, since they’re pretty self explanatory.

    Field commands: (field)
     field-clone    Clone a field and all its instances.
     field-create   Create fields and instances. Returns urls for field editing.
     field-delete   Delete a field and its instances.
     field-info     View information about fields, field_types, and widgets.
     field-update   Return URL for field editing web page.

    Other Schtuff

    Here are some great commands that don’t really fit into any clear-cut categories.

    It has some neat archiving tools:

    drush archive-dump    # backup code, files, and DB to a single package
    drush archive-restore # expand one of those archives into a full Drupal site

    Somewhat similar to those is this one, which will download and install Drupal, serve it using a little built-in server, and log you in, all in one command. Note that this one includes about a bazillion options and is super duper powerful.

    drush core-quick-drupal

    Drush also lets you play with the cache, which can save a lot of time when debugging a caching issue:

    drush cache-get your-cid your-bin
    drush cache-set your-cid your-data your-bin your-expire

    There are a couple of lesser-known commands for working with contrib:

    drush pm-info modulename         # display included files, permissions, configure link, version, dependencies, etc.
    drush pm-releasenotes modulename # show the release notes for the version of the module that you're using

    Run cron! Not exciting, but super useful.

    drush cron

    Have you ever been in the middle of debugging and you know that something is happening in a form_alter (or some other hook) but you’re not sure in which module? Try this command, which will tell you all of the implementations of a given hook, and let you choose one to view the source code of it.

    drush fn-hook form_alter

    And finally, this bad boy is basically “Drush docs in a box” and has a TON of useful info. Seriously, try it now.

    drush topic

    Drushful Thinking

    There’s a giant heap of useful Drush commands, some of which you hopefully hadn’t seen before. So what, right?

    The “so what” is that it’s useful to start thinking in terms of “how can Drush do this for me?” and you’ll often find that the answer is “pretty easily.”

    Play a game with yourself. Next time you’re working on site building or anything that involves a lot of clicky clicky in the Drupal UI, give yourself a jellybean or a chip or something every time you do something in Drush instead of in the UI.

    But why? Well for one, before you know it, you’ll be spending much less time waiting on page loads. But secondly, Drush lends itself to automation, and thinking in terms of Drush naturally leads you to think in terms of automating and scripting things, which is a great place to be.

    Practice some Drushful Thinking! And let me know any of your favorite Drush tips and tricks in the comments. Check out drushcommands.com for some more inspiration.

    Nov 24 2014
    Nov 24


    Imagine having done your job a certain way for many years, and then all of a sudden a new way of doing things is introduced. There will certainly be those who embrace change, but there will also be those who fear it. Just look at the public outcry every time Facebook makes a little change to their UI. That isn’t just a technology world thing, that’s a life thing.

    Having worked with Drupal for many years, it would be easy for me to take for granted just how sophisticated a platform it is. With sophistication comes a level of complexity. Of course, once you learn it, everything is smooth sailing. The question is, how long should it take to learn, and can you afford that amount of time to learn it?

    If the answer to that question is ASAP, then some work will need to be done. That is the beauty of Open Source - you can react quickly to requirements. This post talks about several ways that we set out to accomplish this during a recent project for Time Inc.


    Time Inc is an American New York-based publishing company. It publishes over 90 magazines, most notably its namesake Time. Other magazines include Sports Illustrated, Travel + Leisure, Food & Wine, Fortune, People, InStyle, Entertainment Weekly, and many more. Many, if not all, of these Titles have web properties. Depending on the Title, their website may be built on a variety of technologies ranging from Wordpress, to Vignette, to Drupal and other legacy technologies in-between. The fact is, running all these different technologies means running specialized teams that are not particularly cross-functional. Editorial staff who know how to publish content on time.com for example, may not have a clue how to publish content on Sports Illustrated or People. This is just one of several catalysts for standardizing on a single technology platform.

    Understanding existing systems: likes and dislikes

    In the first week or so that I started learning about Time Inc.’s editorial workflows, I often found myself saying in my head “Drupal can do that better” or “That won’t be a problem with Drupal”. Things of that nature. I was so heavily biased towards Drupal that it didn’t occur to me right off the bat to consider the things that an editor actually likes about their existing workflow. There were many instances where the overall feeling that an editor had towards their platform was generally sour, except when we got to talking about the individual features within them. There were things that they just didn’t want to part with. For example, how they search for an image, what types of filters do they rely on, what is the search interface like? Learning all the things that editors liked was going to be absolutely crucial to the success of Drupal adoption. Without having learned about what an editor likes and dislikes about their workflow, there really isn’t a starting point for ‘change'. Having had such candid and in-depth sessions with editorial staff enabled me to become connected, zoned in, and to make professional decisions that found a balance between what I knew needed to change versus what I couldn’t change (without getting thrown out of the building).

    Holding workshops and capturing requirements and future state needs

    This is where the real fun began. With the convenience of having my target audience actually present in the building, getting them involved in the process was easy (thanks to the project managers). However, if all editorial staff are sitting in a room with me for a few hours, that breaking celebrity gossip is going to have to wait. Obviously that was not acceptable, so we had to break the workshops up in to three groups which, at the time, was the way that Titles were structured anyways. The sessions would need to be efficient, effective, and as thorough as possible in the limited amount of time I could spend with each group.

    Prior to the first workshop, I had developed a conceptual wireframe document that showed what their new workflow ‘might’ look like. The wireframes mirrored the forms in a default Drupal installation, except I made some changes based on what I knew from my one-on-one meetings with editorial staff. We created print outs of these wireframes and placed each document at a boardroom seat, just like a place-mat at a fancy restaurant, for when our guests arrived. We also set writing utensils, sticky notes and markers so that the Editors knew we wanted them to collaborate. I walked through the wireframes on the screen and had them write their feedback directly on their wireframe documents for me to collect later. There was good dialogue throughout these sessions. Lots of questions, lots of ideas, lots of feedback. Repeating the same workshop with 3 separate groups of Editors allowed us to see what was common and absolutely critical to Editors across all the different titles. We found common pain points and common future-state needs. All I had to do was address them in the next iteration.

    Refine and repeat. We held workshops just like that for several iterations until we were confident that we had an up to date wireframe document that laid all the groundwork necessary to extract requirements and user stories ripe for prioritization and development. Concurrently, I also had enough information to start thinking about the actual design.

    This is part 1 of a 2 part blog series.

    Part 2:
- Demos & training forums
    - General usability and user-centric approach.
    - Maintaining authoring context.
    - Getting stakeholder buy in for the editorial user interface

    Nov 24 2014
    Nov 24

    Essential Drupal 7 Modules


    Drupal is a great CMS but really isn't very powerful out of the box. To harness its power you'll need to use contrib (contributed) modules. Contributed modules are created by the community and are not part of the core Drupal download. But they can be downloaded and installed from the drupal.org website.

At the time of this writing we've been working with Drupal for almost 9 years, and we've learned a lot in our journey from Drupal 5 to Drupal 7, including what we consider to be essential Drupal modules. Here is a listing of some of the modules that we install on almost every website we build. Some of these modules are common knowledge, but hopefully we'll introduce you to some modules that you haven't used yet.
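If you use Drush, most of the modules below can be downloaded and enabled from the command line. A quick sketch (assuming Drush is installed and you're in a Drupal 7 site root; the short names are the projects' machine names, so adjust the list to your needs):

```shell
# Download a starter set of contrib modules from drupal.org.
drush dl admin_menu module_filter ctools views -y
# Enable them (views_ui is the admin UI submodule that ships with Views).
drush en admin_menu module_filter ctools views views_ui -y
```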


Administration menu is a must-have module and is the first thing that we install. This gives you quick drop-down menu access to your site's administrative functions. Be sure to disable the Overlay module to make your site admin-friendly (see the tips section below).

    Module Filter will group your site's modules into tabbed categories for easier administration and also provides a must-have search filter.

Almost every Drupal site is going to have Views installed. It is a powerhouse that drives the output and display of a great amount of Drupal data. It's so important that it is part of Drupal 8's core modules. We assume that you know what Views is and are already using it on your site.

    Views Bulk Operations is a very powerful module that provides Views with a 'field' that gives your View operation options such as bulk deleting, un-publishing, changing node field values & taxonomy terms and much more.

    Administration Views replaces your content/admin pages with actual views with expanded capabilities and the ability to quickly customize your admin pages since they are now Views.

Views UI: Basic Settings is a handy module. If you've ever had a client or administrator who needs to edit the header or footer content of a View, but you don't want to give them administrative access to Views, then this module solves that problem. With Views UI: Basic Settings you can set up your Views and permissions to allow privileged users to edit a View's header and footer content without access to edit the rest of the View.

    Field Group is a very powerful module for organizing, grouping and displaying node information. It can be used to group your node creation form screen or node displays into groups of tabs (horizontal and vertical), fieldsets, divs, accordions and more.

    Backup and Migrate is another must-have module. Backup and Migrate allows you to create manual and scheduled automated backups of your database and website files. You can download the backups or store them on your server or on external services like NodeSquirrel.com and Amazon S3. This module is flexible and can be a site-saver if your site crashes. This should be one of the first modules you install. Be sure to set an automated backup right away.

Webform allows you to quickly and easily create forms on your site to collect information from your users. Do not use Drupal's core Contact module, as it is not flexible and you cannot add additional fields to it. Webform allows you to create multiple forms with multiple types of fields (email, text, date, options, etc). You can also set up multiple email workflows to send your form submission data to admins and messages to the sender.

A newer form system is Entityform, and it is much more flexible than Webform. Entityform allows you to create forms with any Drupal fields, where Webform only allows a select few of its own built-in field types. With Entityform you can use address, telephone, link (URL), reference, and many more types of fields. It also has more configuration options and customizable submission Views. It does, however, usually take more time and tweaking to set up. If you want a quick and basic form then use Webform, but for anything more advanced, Entityform is the way to go.

    Add Another gives you several options to quickly create additional nodes of the same type. With it you can add a button next to the node Save button to "Save and Add another". This button will save your node and then open another node of the same type ready for creating. You can also add a tab to the node view form for adding a new node of the same type. This can be a valuable time saver when entering multiple nodes of the same type.

Login Toboggan expands your Drupal login options. We install this on every project and enable its option to allow logging in with an email address. This is a great help for users who forget their user name. We set the email login option during a scripted install using Drush with this command:

    drush vset -y logintoboggan_login_with_email 1;

    Add To Head is a slightly niche module but it has come in handy for us multiple times. It allows you to easily add scripts, styles and meta data to your website through an administrative interface. It also allows you to add it to specific pages only. This is handy when you want to allow someone else to easily add meta tags and scripts to your site without having to grant them access to your actual theme files.

    Role Assign is another module that allows further granular control of permissions you can set. You'll often want to allow someone to create new users in a site but you may not want them to be able to assign any role to the users. Role Assign allows you to restrict the roles that a user can assign to accounts. This way you can prevent someone from having the ability to assign themselves or someone else an Administrator type role.

    Override Node Options is another module that improves Drupal's permission system and allows you to give granular control to users to have access to edit a Node's published, promoted, authoring, sticky and revision settings without having to grant a user the "Administer content" permission which gives them full access to edit Nodes.

Custom Contextual Links is a very handy module that allows you to create your own contextual links within a site. In case you're not familiar with them, contextual links are the action links that appear when you click on the little gear/cog icon that appears when you hover over certain elements. This module allows you to create your own contextual links within Nodes and Views. We use it to make sure content administrators have quick and simple access to create new Nodes from pages that contain that Node type. We also ensure that every node, whether on its own page or listed within a View, will always have links to edit or delete the Node. It also allows you to create your own action text. For example, you can create a contextual link that says "Delete Photo" to, of course, delete a photo node.

    Password Reset Landing Page makes resetting passwords less problematic on your Drupal site. We used to receive frequent complaints from users having issues resetting their passwords. The problem occurs when the user clicks the 'one time password reset' link sent to their email. They are then logged into the site but don't change their password because it's not obvious that they need to do this. This module creates a better password reset landing page that allows the user to create their new password at the same time they log in after clicking their reset link.

    Text Formatter is a module that provides a 'field display formatter' with an option to display your field output in a comma separated list.

User Protect is a handy module that allows fine-grained control over various user account edits. For example, you can give someone privileges to create and edit user accounts but prevent them from being able to edit or delete accounts with the administrator role. This can help to protect important user accounts.

Draggable Views is a great way to build a sorting View with drag and drop functionality. Often your content administrators will want to be able to custom sort a View listing. There is no easy way to do this natively with Nodes and Views, but with the Draggable Views module you can create a second View of the content you want to sort that contains drag and drop handles. This makes custom sorting of View content a breeze!

Email Field adds an email type field that you can add to your entities. The email field will automatically validate that the value entered is formatted as a proper email address. At a minimum, we use this on contact forms that we build with Entityform.

Disable Password Strength will remove the password strength indicator that is displayed when users are creating their passwords. We used to have users complain that they were having issues creating their password because they thought the low-strength indicator meant their password was invalid or didn't match. We found this caused too many issues, and simply installing this module resolved them.

Colorbox is a light-box module that uses the Colorbox library to pop up images for a larger view. This is a solid module that integrates Colorbox nicely into node displays and Views.

    Superfish is our favorite module for making Drupal's nested menus into a drop-down menu system. It works great and has a ton of built in configuration options. It even has support for dynamic mobile menus.

Honeypot is a great spam deterrent module that filters form submissions that may be spam without having to use a CAPTCHA. Honeypot adds a hidden field to your forms that only a bot would fill out. If this field gets filled out, then the system knows it is spam. There is also an adjustable time limit: if the form is submitted faster than this limit, it is also considered spam. There's a support module called Honeypot Entityform that adds an additional option to Honeypot to protect Entityforms as well.

    SEO (Search Engine Optimization) MODULES

    Meta Tag allows easily setting up meta data for your website such as title and descriptions tags. This meta data can be automated and/or customized per page.

    Global Redirect fixes many common SEO penalties that a Drupal site may receive out of the box due to duplicate content issues that are inherent to Drupal.

PathAuto allows the creation of 'patterns' that will automatically generate a node's URL alias based on the tokens you set up. This is great for automatically creating user-friendly URLs and may give your pages a boost in the search engines.

Google Analytics does not help your search rankings per se, but it is a quick and easy way to integrate your Drupal site with Google Analytics for monitoring just about any metric you can think of. There are also expansion modules to gather data from your Drupal Commerce or Ubercart ecommerce sites.


    Support modules do little to nothing on their own but are required for use by other modules. Here are some common support modules that we use on every site:

jQuery Update is a module that updates the version of jQuery that Drupal uses. Drupal core ships with jQuery version 1.4.4, but many modules or custom jQuery code you use may require a higher version. Simply install this module and select the jQuery version you want your Drupal installation to use.

Ctools (Chaos Tools) is a suite of APIs and tools used by many other modules, including Views, so you'll almost always end up installing it as a dependency.

    This list isn't exhaustive and is only the modules that we tend to use. What are your essential modules?


    If you used the default Drupal install then there are likely modules installed that you do not need. It is recommended to disable any modules that you're not using to free up resources and simplify administration on your site. Here is a list of modules that we disable on most sites. Of course, the modules you would disable will depend on your particular site.

    • Color - disable this module if you're not using a theme that has color changing ability.
• Toolbar - disable this module if you're using Administration Menu (listed above); you can enable the toolbar-style submodule that comes with Administration Menu instead.
    • Shortcut - if you're not using shortcuts then you can disable this module.
    • Comment - if you're not using comments then you can disable this module.
    • Dashboard - if you're not using the dashboard then you can disable this module.
• Overlay - the Overlay module is problematic for multiple reasons, and it is highly recommended to disable it to prevent potential data loss and to speed up administration of your site.
    • Help - if you do not need the help system then you can disable this module.
    • Locale - is only needed if your site is non-English. Disable it if not.
    • Search - of course, if your site does not utilize site search then disable this.
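As a sketch, the whole list above can be disabled in one pass with Drush (assuming Drush on a Drupal 7 site; trim the list to the modules you actually don't use):

```shell
# Disable unused core modules in one command.
drush dis color toolbar shortcut comment dashboard overlay help locale search -y
```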
    Nov 24 2014
    Nov 24

    It's been a well-known fact that using native VirtualBox or VMWare shared folders is a terrible idea if you're developing a Drupal site (or some other site that uses thousands of files in hundreds of folders). The most common recommendation is to switch to NFS for shared folders.

NFS shared folders are a decent solution, and using NFS does indeed speed up performance quite a bit (usually on the order of 20-50x for a file-heavy framework like Drupal!). However, it has its downsides: it requires extra effort to get running on Windows, requires NFS support inside the VM (not all Vagrant base boxes provide support by default), and is still not all that fast compared to native filesystem performance.

I was recently developing a relatively large Drupal site, with over 200 modules enabled, meaning there were literally thousands of files and hundreds of directories that Drupal would end up scanning/including on every page request. For some reason, even simple pages like admin forms would take 2+ seconds to load, and digging into the situation with XHProf, I found a likely culprit:

(Screenshot: XHProf output for the Drupal site, with is_dir at the top.)

    There are a few ways to make this less painful when using NFS (since NFS incurs a slight overhead for every directory/file scan):

    • Use APC and set stat=0 to prevent file lookups (this is a non-starter, since that would mean every time I save a file in development, I would need to restart Apache or manually flush the PHP APC cache).
    • Increase PHP's realpath_cache_size ini variable, which defaults to '16K' (this has a small, but noticeable impact on performance).
    • Micro-optimize the NFS mounts by basically setting them up on your own outside of Vagrant's shared folder configuration (another non-starter... and the performance gains would be almost negligible).

    I wanted to benchmark NFS against rsync shared folders (which I've discussed elsewhere), to see how much of a difference using VirtualBox's native filesystem can make.

    For testing, I used a Drupal site with about 200 modules, and used XHProf to measure the combined Excl. Wall Time for calls to is_dir, readdir, opendir, and file_scan_directory. Here are my results after 8 test runs on each:

    NFS shared folder:

    • 1.5s* (realpath_cache_size = 16K - PHP default)
    • 1.0s (realpath_cache_size = 1024K)
    • Average page load time: 1710ms (realpath_cache_size = 1024K, used admin/config/development/devel)

*Note: I had two outliers on this test, where the time would go to as much as 6s, so I discarded those two results. But realize that, even though this NFS share is on a local/internal network, every file access goes through the full TCP stack of the guest VM, so networking issues can make NFS performance unstable.

    Native filesystem (using rsync shared folder):

    • 0.15s (realpath_cache_size = 16K - PHP default)
    • 0.1s (realpath_cache_size = 1024K)
    • Average page load time: 900ms (realpath_cache_size = 1024K, used admin/config/development/devel)

Tuning PHP's realpath_cache_size makes a meaningful difference (though not a huge one), since the default 16K cache doesn't handle a large Drupal site very well.
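For reference, the tuned value above corresponds to a php.ini override like this (the TTL line is an illustrative extra, not something benchmarked here):

```ini
; php.ini (or a conf.d override): enlarge the realpath cache
; so a large Drupal codebase fits without constant evictions.
realpath_cache_size = 1024K
realpath_cache_ttl = 3600
```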

    As you can see, there's really no contest—just as NFS is an order of magnitude faster than standard VirtualBox shared folders, native filesystem performance is an order of magnitude faster than NFS. Overall site page load times for the Drupal site I was testing went from 5-10s to 1-3s by switching from NFS to rsync!

I've updated my Drupal Development VM and Acquia Cloud VM to use rsync shares by default (though you can still configure NFS or any other supported share type), and to use a realpath_cache_size of 1024K. Hopefully Drupal developers everywhere will save a few minutes a day from these changes :)

    Note that other causes for abysmal filesystem performance and many calls to is_dir, opendir, etc. may include things like a missing module or major networking issues. Generally, when fixing performance issues, it's best to eliminate the obvious, and only start digging deeper (like this post) when you don't find an obvious problem.

    Notes on using rsync shared folders

    Besides the comprehensive rsync shared folder documentation in Vagrant's official docs, here are a few tips to help you get up and running with rsync shared folders:

• Use rsync__args to pass CLI options to rsync. The defaults are ["--verbose", "--archive", "--delete", "-z"]; if you want to preserve files created within the shared folder on the guest, set this option without --delete.
• Use rsync__exclude to exclude directories like .git and other non-essential directories that aren't needed to run your application within the VM. While not incredibly impactful, it could shave a couple seconds off the rsync process.
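Putting those two options together, a Vagrantfile entry for an rsync shared folder might look like this (the paths and excludes are illustrative, not from the benchmarked setup):

```ruby
# Vagrantfile: rsync shared folder tuned for a Drupal docroot.
config.vm.synced_folder "./docroot", "/var/www/docroot",
  type: "rsync",
  rsync__exclude: [".git/", "sites/default/files/"],
  # The default args minus --delete, so files created on the guest survive.
  rsync__args: ["--verbose", "--archive", "-z"]
```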

    Not all is perfect; there are a few weaknesses in the rsync model as it is currently implemented out-of-the-box:

    1. You have to either manually run vagrant rsync when you make a change (or have your IDE/editor run the command every time you save a file), or have vagrant rsync-auto running in the background while you work.
    2. rsync is currently one-way only (though there's an issue to add two-way sync support).
    3. Permissions can still be an issue, since permissions inside the VM sometimes require some trickery; read up on the rsync__chown option in the docs, and consider passing additional options to the rsync__args to manually configure permissions as you'd like.
    Nov 24 2014
    Nov 24

The Services module allows you to provide web services from your Drupal site.

    Services is really popular and works with formats such as REST, XMLRPC, JSON and SOAP.

However, when asked about Services during a training class last week, I realized that the students were asking me because there was little-to-no clear documentation available.

    So, I sat down and decided to write a Beginners guide to the Services module.

Here's a 5-step guide to using Services to create a REST API for your Drupal site.

    #1. Installation

    Setting up the basic REST API requires these three modules:

    If you want to also set up authentication for your API, you'll need to add a module such as OAuth.

    It's worth noting that you may need to test different versions of these modules. The best documentation I could find on this on drupal.org recommended sometimes using older versions of these modules, in order to avoid bugs.

When you enable these modules, you'll be able to choose the type of server you want. In addition to REST and XMLRPC, other server types, including SOAP, are available via modules on drupal.org.


    #2. Server Set-up

    • Go to Structure > Services
    • Click Add
    • Enter a "Machine-readable name"
    • Choose your server
    • Enter a "Path to endpoint". This will become part of the URL for your server.
    • Click Save.
    • Click Edit Resources
    • You'll now be able to edit different settings for your server

    #3. Server Settings

    • The Server tab will allow you to choose the formats and parser types for the server:

    The Authentication tab will allow you to control access to the server. Here's an example with OAuth:

    • Install OAuth
    • Go to Configuration > OAuth > Add Context
    • Set up the details for your OAuth connection
    • Go to Structure > Services > Edit Resources
    • Check the OAuth authentication box:
    • Click the Authentication tab
    • Choose your OAuth context:

    Finally, the Resources tab allows you to control what your server can do. In this example, we need to make sure we check the Views box:


    #4. Creating the View

    • Go to Structure > Views > Add New View
    • Create a view, using only the block format:
    • Click Continue and Edit
    • Click Add at the top of the View, then click Services.

    You'll see a message saying "Display "Services" uses a path but the path is undefined."

    • Solve this by clicking the forward slash next to Path:
    • Enter a Path such as "myrestapi":
• Look down at the preview area and you'll see a preview of your REST API.
    • Save this view.

    You can now customize this view, using all of the normal features of Views.

    #5. View the server

The final step is to see your server output. There is no automatic link to the server, so you'll need to use this pattern:

/[path-to-endpoint]/views/[view-machine-name]
    In my example, these were my settings:

    • path-to-endpoint: I set the "Path to endpoint" in Part 2 as "myrestapi".
    • view-machine-name: I set the name for view as "myrestapi". You can confirm this by editing your view and looking at the URL for the unique name of your view.

    So, here is the URL to my server: /myrestapi/views/myrestapi/
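You can test the endpoint from the command line; with the REST server, appending a format extension such as .json selects the response format (the domain below is a placeholder for your own site):

```shell
# Fetch the view resource from the Services REST endpoint as JSON.
curl http://example.com/myrestapi/views/myrestapi.json
```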

    And here is the output:


    Find out more

    Nov 24 2014
    Nov 24

    If you do a lot of Drupal development and need to deploy configuration I am sure that you are using update hooks to some extent at least. If you don't use Features and want to create a taxonomy vocabulary or something in code, the hook_update_N() hook is the way to go.

    But have you ever needed to perform an update the size of which would exceed PHP's maximum execution time? If you need to create 1000 entities (let's just say as an example), it's not a good idea to trust that the production server will not max out and leave you hanging in the middle of a deploy. So what's the solution?

    You can use the batch capability of the update hook. If you were wondering what the &$sandbox argument is for, it's just for that. You use it for two things mainly:

    • store data required for your operations across multiple passes (since it is passed by reference the values remain)
    • tell Drupal when it should stop the process by setting the $sandbox['#finished'] value to 1.

    Let me show you how this works. Let's say we want to create a vocabulary and a bunch of taxonomy terms with names from a big array. We want to break this array into chunks and create the terms one chunk at the time so as to avoid the load on the server.

    So here is how you do it:

/**
 * Create all the terms.
 */
function my_module_update_7001(&$sandbox) {
  // The original list is a much bigger array of names; placeholders here.
  $names = array('Name one', 'Name two', 'Name three', 'Name four', 'Name five', 'Name six');

  if (!isset($sandbox['progress'])) {
    $sandbox['progress'] = 0;
    $sandbox['limit'] = 5;
    $sandbox['max'] = count($names);

    // Create the vocabulary.
    $vocab = (object) array(
      'name' => 'Names',
      'description' => 'My name vocabulary.',
      'machine_name' => 'names_vocabulary',
    );
    taxonomy_vocabulary_save($vocab);
    $sandbox['vocab'] = taxonomy_vocabulary_machine_name_load('names_vocabulary');
  }

  // Create the terms, one chunk per pass.
  $chunk = array_slice($names, $sandbox['progress'], $sandbox['limit']);
  if (!empty($chunk)) {
    foreach ($chunk as $key => $name) {
      $term = (object) array(
        'name' => $name,
        'description' => 'The name is: ' . $name,
        'vid' => $sandbox['vocab']->vid,
      );
      taxonomy_term_save($term);
      $sandbox['progress']++;
    }
  }

  $sandbox['#finished'] = ($sandbox['progress'] / $sandbox['max']);
}

So what happens here? First, we are dealing with an array of names (can anybody recognise them, by the way?). Then we check whether we are on the first pass by seeing if we have already set the progress key in $sandbox. If we are on the first pass, we set some defaults: a limit of 5 terms per pass out of a total of count($names). Additionally, we create the vocabulary and store it as a loaded object in the sandbox as well (because we need its id for creating the terms).

    Then, regardless of the pass we are on, we take a chunk out of the names always offset by the progress of the operation. And with each term created, we increment this progress by one (so with each chunk, the progress increases by 5) and of course create the terms. At the very end, we keep setting the value of $sandbox['#finished'] to the ratio of progress per total. Meaning that with each pass, this value increases from an original of 0 to a maximum of 1 (at which point Drupal knows it needs to stop calling the hook).

    And like this, we save a bunch of terms without worrying that PHP will time out or the server will be overloaded. Drupal will keep calling the hook as many times as needed. And depending on the operation, you can set your own sensible chunk sizes.
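These hooks run when you visit update.php or trigger the database updates from Drush; either way, Drupal keeps re-invoking the hook until $sandbox['#finished'] reaches 1:

```shell
# Run all pending update hooks on deploy.
drush updatedb -y
```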

    Hope this helps.

    Nov 24 2014
    Nov 24

There was a session at BADCamp this year asking how Men can be better allies for Women in tech. The panelists had experiences with males that ranged from helpful, to innocently bungled, to outright demeaning. There were a few suggestions about what men can do to be better allies.

    I'd like to take a brief look at how R.O.O.S.T.S. can play a part in helping Women in tech.

    The choice in the R.O.O.S.T.S. acronym is meant to evoke images of a nest. In other words, a safe nurturing place where young birds are nourished until they are ready to fly off into the world on their own. R.O.O.S.T. sites are warm, welcoming, safe and nurturing. They are also "ridiculously open." The creators of R.O.O.S.T.S. are willing to be vulnerable, make and admit mistakes, and share everything they have learned in the process of creating the site. Additionally, the sites are designed to be collaborative in nature.

    Don't all of the aforementioned qualities seem to be helpful and supportive to not only female site members, but to all site members?

How many R.O.O.S.T. sites currently exist? Are you willing to help with the creation of one? If so, then check out my Kickstarter campaign to build a R.O.O.S.T. site to teach people how to teach themselves Drupal. Let's see what we can do to address this longstanding problem in the technology sector!

    Nov 24 2014
    Nov 24

    By Cathy Theys, with help from xjm, Michael Schmid, and Donna Benjamin.


    The target audience of this post is decision makers in businesses that are deciding if and how their employees might work on Drupal 8 in a way that helps Drupal 8 be released faster. There are benefits to the individuals and the company from every kind of contribution, even if it does not match the recommendations in this post.

    Do not let anything hold you back. Just doing it is better than not doing it at all. Contribution does not have to be perfect. Drupal is great at helping people get involved at whatever level they want to be involved.

    See these great resources to help you get started:

    People will be helpful and supportive.


    There are lots of ways that businesses invest in Drupal. Some sponsor events like Drupal camps or DrupalCons. Some help fund travel for key contributors to attend sprints (hearing about the need via word of mouth or an employee who knows of someone in need and brings it to their employer's attention). Some host sprints in their offices. Some have Drupal Association memberships, are Drupal Association Supporting Partners, or join contribution alliances like Large Scale Drupal. Some produce training or documentation. Some contribute funding directly to community members working on a specific project. Some give money to teams or individuals via Drupal Gratipay.

    Companies paying their employees to contribute

    Some businesses are giving their employees contribution time or are hiring people specifically to contribute.

    This post covers some ideas to make employee paid contribution time an even more effective investment, especially when companies want to help with getting Drupal 8 released.

    Types of Contribution

    Strategies in this post can apply in general to contributing:

    • to any open source project,
    • to a Drupal project, module, theme, distribution,
    • to Drupal core,
    • to Drupal.org infrastructure or testbot (Continuous Integration aka Drupal.org CI 2.0),
    • on the security team
    • by planning an event like DrupalCon, a Drupal camp, or a sprint,
    • by preparing talks or trainings for Drupal events,
    • at a meta level by working in a governance group like the DA board or a Drupal Working Group,
    • to Drupal.org improvements like issue queue workflow, profiles, landing page content, or
    • by building, maintaining, or sponsoring outside tools (like simplytest.me or Drupical).

    Drupal is constantly improving its recognition and definition of contribution to include: organizing, communicating, fundraising, testing, documenting, mentoring, designing, architecting, reviewing, and coding.

Of particular interest to me is contributing to Drupal 8 and helping it get released sooner. (It is of interest to some businesses too :) which is what inspired this post.)

    Contributing to Drupal 8 release

    The Drupal 8 branch of the codebase was opened for development in early 2011. The first Drupal 8 beta was released October 1 2014. There are 125 Drupal 8 critical issues (some complex, some straightforward). A Drupal 8 release candidate will be tagged when there are zero critical issues, and once subsequent critical issues are resolved, one such release candidate will become the 8.0.0 release. There will be much rejoicing. (Check the Release cycle page on Drupal.org for up-to-date release cycle information.)

    What is really needed to help Drupal 8 get released?

    • Reviewing
      Lack of quality reviews is the biggest problem we have. People get good at giving quality reviews first by just reviewing. Their review skills will get better over time.
    • Keeping critical issue summaries clear and up-to-date
      This is not easy busy work; this is much appreciated and important. Some issues will not be committed without an accurate summary. Summaries help people get involved with, stay involved in, and review issues.
    • Adopting issues
      An issue can have a working patch, but that is not sufficient to get it committed. Sometimes an issue needs someone to adopt it and not give up until it is marked fixed and committed. This person becomes familiar with the issue, and checks in on it to see what it needs: maybe a re-roll, maybe an issue summary update, maybe track down a particular person whose feedback is needed, … they pay attention to the issue and help it get whatever it needs so that issue gets committed.
    • Focusing on development milestones and release blockers
      Unblocking the beta-to-beta upgrade path will enable more early adopters to begin investing resources in Drupal 8. Work on upgrade path issues, other critical Drupal 8 issues which block release, and release-blocking changes to Drupal.org is the most direct way to accelerate the release itself.
    • Paying attention to Drupal 8 news and priorities
      Reading Drupal 8 updates is a good way to stay up-to-date.

    Benefits of Contribution

    What benefits would a company be looking for?

    • A quicker release of Drupal 8 means your organization can use all of Drupal 8's improvements for real projects, as well as drive growth in Drupal-related businesses (like Drupal hosting and training).
    • Employees with expertise in Drupal 8.
    • Employees with better skills. (Employees will interact with a huge community of experts, and learn from them.)
    • Employees with more skills. All Drupal 8 issues have to pass core gates in documentation, accessibility, usability, performance, and testing. People who work on core issues learn about those areas.
    • Employees with even more skills. Working with the community builds other valuable skills that are not strictly about technology, applicable to internal processes as well as to client work.
    • Saving money on training. Businesses only have to pay for one side of the "training": their employees' time. They do not have to pay for trainer time like they would for on-site training, or pay for training classes for their employees to attend.
    • Making connections with possible future additional employees.
    • Raising the company's profile, brand recognition, and appeal. (See Dries's DrupalCon Amsterdam Keynote on contribution recognition.)
    • Steering the future of Drupal in ways that align with the company and the community.

    Strategies for businesses investing in getting Drupal 8 released sooner through giving their employees Drupal contribution time

    Reduce ramp-up time.

    Sometimes people who are experts at their job, in a certain area, can feel ineffective or inefficient while contributing. Before telling everyone "Go contribute", businesses can:

    • consult with an experienced contributor or mentor for advice on structuring your contribution policies or program,
    • share resources with employees about how to contribute,
    • get employees tools that might help with contributing (these may not be the same tools necessary for their day-to-day job), and/or
    • have employees attend a sprint that has mentoring for new contributors, work with experienced contributors and mentors online in #drupal-contribute in IRC or in #drupal during core mentoring hours, or arrange for an experienced contributor or mentor to hold an event onsite (or virtually) for employees.

    Reduce pressure to work on client or internal deadlines.

    Not every employee will be interested in spending work time on contribution. Instructing all employees to contribute may not have the best results.

    For example, if a company schedules certain time for contribution, say the last Friday of the month, for all employees to optionally spend the day contributing, some people will want to spend that time on client work, or internal projects, maybe because of a deadline, or maybe just because they do not want to contribute that day. People who want to contribute during the scheduled time will see their co-workers working on work projects and feel pressure to also not contribute.

    One way to overcome this is to let people who want to contribute do so during off time, so they are still working while the rest of their team is. They can keep track of their contribution time and later exchange it for scheduled vacation or professional development time, such as attending conferences or training.

    Consolidate time.

    The following strategies center around an idea: Concentrate your resources.

    Let's take an example: a business with 10 employees that is thinking about giving each person 4 hours of contribution time every week (10% time), or having one day a month where everyone contributes (5% time).

    4 hours a week, or one day a month, is not enough to work on a complicated issue. The time would start with reading any new comments on the issue, maybe updating the local environment (requirements are different for Drupal 8 compared to Drupal 7), seeing if any changes in the code base affect the issue, maybe verifying the problem still exists … then thinking and trying ideas. Maybe there would be enough time to implement them, post back to the issue, update the issue summary as needed, and explain what was changed and why in a comment, but more likely there would not be. It depends on the complexity of the problem.

    There are things a person can do with 4 hours a week that are helpful contributions. There are even things on issues blocking the Drupal 8 release that people could do … but there are also things where, after 4 hours, a person is just understanding enough to get started ... and then they are out of time and have to wait until next week, when they might need to spend the 4 hours getting back up to speed again.

    Consolidate by saving up contribution time.

    If employees have 4 hours a week of contribution time, let them save it up for a couple of months and then use it in a chunk. For example, if someone does not contribute for 2 months, they would then have about 36 hours of contribution time saved up and could take an (almost) full week to tackle a complicated problem, or tackle a bunch of not-quite-so-complicated issues, without the overhead of ramping up and context switching.

    Consolidate by giving fewer people more contribution time.

    Instead of 10 people contributing 4 hours a week, pick 2 people from those that want to contribute and give them each 20 hours a week.

    With 20 hours a week, there is time to work on a complex problem, and also respond to feedback quickly. This reduces the overhead of needing to come back up to speed, context switching, or rebasing on a code base that has changed a lot.

    With more consecutive time, people can concentrate on more complex problems, and stay up to date better, with less overhead. We can take that even further...

    Focus long-term.

    If instead of 10 people contributing 4 hours a week, you have 2 people contributing 20 hours a week, let them plan to do that for a few months.

    Some issues need someone to look after them, week after week, to see the issue through to completion.

    When someone shows they can make a reliable and ongoing contribution to the project, other experienced contributors, or project leaders will invest more into bringing that person up to speed and helping them get things done.

    Give people 3-4 months where they can plan on contributing.


    After a few months, bring an employee back to full-time client billable hours, and give another employee a turn to concentrate on contributing for 3-4 months.

    Employees learn so much while contributing. Returning to focus on client projects or in-house work with their team is an opportunity to share that learning with everyone in the company. The company benefits from the improved skills and new community connections that employee gained while contributing directly to the project.

    This can also help to protect people from burning out on contributing.


    Here are some examples of businesses having their employees contribute. Some are recent, some have been doing this for years. [These are examples of direct Drupal core contribution by employers. There are many ways to contribute to Drupal, and many businesses contribute in different, valuable ways.]

    • Blink Reaction held a sprint for their employees and brought in local experienced contributors and mentors to help make their employees' contribution time effective and to help target currently relevant issues. Blink Reaction held their event outside working hours, on a Saturday, so people did not have to stop working during regular hours when they felt they should be working on projects with deadlines. People who work a full day at the Saturday contribution event get a compensation day they can schedule to take later.
    • Pantheon is hiring a contributor, and is going to bring in an experienced contributor to mentor that person for a week or two.
    • Acquia has multiple full-time employees working on Drupal 8 issues.
    • Chapter Three employs one of Drupal 8's four branch maintainers to work on Drupal 8.
    • NodeOne (now part of Wunderkraut), Zivtech, erdfisch, comm-press, Cheppers, Breakthrough Technologies, and New Digital Partnership (among others) dedicated 25-50% of one employee's time for several months to a particular Drupal core initiative.
    • Freelancers and independents like Jennifer Hodgdon (and many others) incorporate contribution work with their billable time.
    • PreviousNext hired Donna (kattekrab) Benjamin to help focus the company's community engagement activities. She spends half her time on client work to ensure her role is sustainable, and half her time on community activity, such as the community working group, Drupal Association board and organising events. She also works with the PreviousNext team to help them find their own niche for making a useful contribution. Lee (larowlan) Rowlands and John Albin also spend some of their paid time mentoring other PreviousNext staff to contribute, who all have 20% time to work on the Drupal project code or community.
    • Amazee Labs built their own company website on Drupal 8 Alpha and continues to implement customer websites on Drupal 8. Employees are paid to: find issues in Drupal 8, open issues in the issue queues, fix them, post the fixes on the issues, and further work on the issues.
    • BlackMesh hired me. :) To work on Drupal 8 issues and to help others contribute to Drupal.

    Help for businesses

    Sometimes it helps to have someone you can just talk to. You can talk to me. Reach out and ask any questions you have. I can answer them, or connect you with people who can.


    Contribute. Doing it in any way is better than waiting to do it perfectly. These are some strategies for paying employees to contribute that will help Drupal 8 release sooner. Concentrate your resources. Talk to others about what works at their companies. Get help from experienced contributors and mentors.


    If there are corrections or missing examples, please let me know.

    @YesCT or Drupal.org contact form


    How to find and contact mentors and experienced contributors

    See the list of core mentoring leads in MAINTAINERS.txt, and contact them in #drupal-contribute in IRC or via their Drupal.org contact pages. There are also more mentors beyond the mentoring maintainers, and there is not exactly a list of experienced contributors. So, please feel free to just contact me and I can put you into contact with others.

    Nov 23 2014
    Nov 23

    I just wanted to take a moment to talk about how I approached the buzzword "headless Drupal" on my blog. The blog uses a form of "headless" communication with the Drupal site, but it also leverages Drupal in a standard way, for different reasons. (By the way, if you are interested in "headless Drupal", there is a groups.drupal.org page about the subject.)

    First of all, let's examine in what way this simple blog is headless. It is not headless in the sense that it offers all the functionality of Drupal without using Drupal's front-end. For example, these words I am typing are not typed into a decoupled web app or command-line tool. Its only headless feature is that it loads content pages with AJAX through Drupal 8's new REST module. Let's look at a typical set-up for this, and how I approached it differently.

    A typical setup

    A common way to build a front-end JavaScript application leveraging a REST API is to pick a framework of your choice (Backbone, Angular, or some other *.js) and build a single-page application (SPA for short). Basically, this could mean that you have an index.html file with some JavaScript and stylesheets, and all content is loaded with AJAX. This also means that if you request the site without JavaScript enabled, you would just see an empty page (unless, of course, you have some way of scraping the dynamic content and outputting plain HTML as a fallback).

    Head fallback

    I guess the "headless" metaphor sounds strange when I turn it around to talk about "head fallback". But what I mean by this is that I want a user to be able to read all pages with no JavaScript enabled, and I want Drupal (the head) to handle this. All URLs should also contain (more or less) the same content whether you are browsing with JavaScript or without it. Luckily, producing HTML is something Drupal has always done, so let's start there.

    Now, this first part should be obvious. If a user comes to the site, we simply show the output of each URL as intended, with the active theme. This is an out-of-the-box feature of Drupal (and any other CMS). OK, so the fallback is covered. The next step is to leverage the REST module and load content asynchronously with AJAX.

    Head first, headless later

    A typical scenario would be that, for the front page, I would want to request the "/node" resource with the header "Accept: application/hal+json" to get a list of nodes, and then display them the same way the theme displays them on a regular page load. The usual way of doing this is to wait until the document is ready, then request the resource and build and render the page client side. This is impractical in one way: you are waiting for the entire document to load before rendering anything at all. Or maybe even worse: you could be waiting for the entire /node list to load, only to destroy the DOM elements and replace them with the newly fetched and rendered JSON. This is bad for several reasons; one concrete example is a smartphone on a slow network. That client could start rendering your page on the first chunk of HTML transferred, which might be enough to show the "above the fold" content. This is also a criterion in the widely used Google PageSpeed. In theory, then, our page would get slower (on first page load) by building a SPA on top of the fallback head.
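
    The deferred request described here could be sketched roughly like this. The "/node" path and the Accept header come from the post; the helper function, base URL, and renderNodeList callback are my own hypothetical names:

```javascript
// Build the request for Drupal 8's REST module node listing.
// Only the '/node' resource and hal+json Accept header are from the post;
// the rest is an illustrative sketch.
function nodeListRequest(baseUrl) {
  return {
    url: baseUrl + '/node',
    options: { headers: { 'Accept': 'application/hal+json' } },
  };
}

const req = nodeListRequest('https://example.com');
console.log(req.url);                        // https://example.com/node
console.log(req.options.headers['Accept']);  // application/hal+json

// In the browser, the actual fetch would only run on client-side navigation,
// never on the first page load:
// fetch(req.url, req.options).then(res => res.json()).then(renderNodeList);
```

    The point of separating request construction from execution is that the first page load can skip the fetch entirely and trust Drupal's server-side render.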

    "Headless Drupal" goodness may be very hip, but not at the cost of performance and speed. So what I do for the first page load is trust Drupal to do the rendering, and then initialize the JavaScript framework (Mithril.js in my case) only when I need it. Take, for example, you, dear visitor, reading this right now. You probably came to this site via a direct link. Why would I need to set up all the client-side routes and re-render this node when all you probably wanted to do was read this article?

    Results and side-effects

    OK, so now I have a fallback for JavaScript that gives me this result (first picture is without JavaScript, second is with JavaScript):

    As you can see, the only difference is that the Disqus comment count cannot be shown in the non-JS version. So the result is that I have a consistent style for both JS and non-JS visitors, and I only initialize the headless part of the site when it is needed.

    A fun (and useful) side-effect is the page speed. Measured with Google PageSpeed, this now gives me a score of 99 (with the only suggestion being to increase the cache lifetime of the Google Analytics JS).

    Is it really headless, then?

    Yes and no. Given that you request my site with JavaScript enabled, the first page request is a regular Drupal page render. But after that, if you choose to go to the front page or any other articles, all content is fetched with AJAX and rendered client side.

    Takeaways and lessons learned

    I guess some of these are more obvious than others.

    • Do not punish your visitors for having JavaScript disabled. Make all pages available to all users. Mobile first is one thing, but you could also consider no-JS first. Or both?
    • Do not punish your visitors for having JavaScript enabled. If you render the page based on an AJAX request, the time between initial page load and actual render will be longer, and this is especially bad on mobile.
    • Subsequent pages are much faster to load with AJAX, both on mobile and desktop. You really don't need to download more than the content (that is, the text) of the page you are requesting when the client already has the assets and wrapper content loaded in the browser.


    First: these techniques might not always be appropriate for everyone. You should obviously consider the use case before using a similar approach.

    If you, after reading this article, find yourself turning off JavaScript to see what the page looks like, then you might notice that there are no stylesheets any more. Let me just point out that this would not be the case if your _first_ page request were without JavaScript. By requesting and rendering the first page with JavaScript, your subsequent requests tell my server that you have JavaScript enabled, and thus I also assume you have stored the CSS in localStorage (as the JS does). Please see this article for more information.
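
    The CSS-in-localStorage trick mentioned above could be sketched like this. This is not the site's actual implementation; the storage key and stylesheet text are made up for illustration:

```javascript
// Cache a stylesheet's text so subsequent AJAX-rendered pages can reuse it
// without re-downloading. Returns whatever is cached under the key.
function cacheStylesheet(store, key, cssText) {
  if (store.getItem(key) === null) {
    store.setItem(key, cssText);
  }
  return store.getItem(key);
}

// Minimal in-memory stand-in for window.localStorage so the sketch is
// self-contained and runs outside a browser too.
function memoryStore() {
  const m = {};
  return {
    getItem: (k) => (k in m ? m[k] : null),
    setItem: (k, v) => { m[k] = String(v); },
  };
}

const store = memoryStore();
console.log(cacheStylesheet(store, 'site-css', 'body{margin:0}')); // body{margin:0}
```

    In the browser you would pass window.localStorage instead of the stand-in and write the returned text into a `<style>` element on AJAX-rendered pages.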

    Let's just sum this up with this bad taste gif in the category "speed":

    Nov 23 2014
    Nov 23

    I wasn't sure whether to publish this review or not.

    One of our members wanted a really simple shopping cart for Drupal. They found the Basic Cart module and asked me about it.

    So, I set up and tested Basic Cart. Everything worked great, until the end when I realized there was no payment gateway. None. Basic Cart is a great little e-commerce store ... except it has no way to pay.

    That left me with a dilemma. Was it worth publishing the tutorial still? In the end, I decided it was. Maybe it will inspire someone to add a gateway, or maybe it will help other users save time by avoiding Basic Cart for now.

    So, here is a guide to using the features that are available in Basic Cart ...

    Nov 22 2014
    Nov 22

    The Drupal community can now proudly claim its own implementation of a Todo app with a RESTful backend!

    TodoMVC is a site that helps you select the right JS MVC library. But more than that, it allows you to learn by comparing those libraries, as they all implement the same thing - a simple Todo app.

    I've decided to fork the Angular example, and build it on top of RESTful. Looking at the Angular code, I was pleasantly surprised.

    As it turns out, TodoMVC's good folks have written the Angular app with both an API backend and a local storage one. If no backend is found, it silently falls back to local storage.

    This means all I had to do was to write the RESTful resource. You can take a look at the code needed, to appreciate how very little effort is required to make Drupal a proper RESTful server that Simply Works.

    On the client side I did two slight modifications:

    1. Added the ability to inject the ENV with a configurable backend URL (to enable local/production/etc development environments)
    2. Changed the response the app was expecting after create/update, from todo.id = resp.data.id; to todo.id = resp.data.data[0].id;

    (Note that I could have kept the demo app completely unchanged and done this on the server side by writing a custom RESTful formatter that would wrap the result as expected by the app.)
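
    The second modification above amounts to unwrapping one extra level of nesting in the response. As a sketch (the response shape follows the snippet quoted in the post; the sample field values are invented):

```javascript
// Stock TodoMVC Angular expects:  todo.id = resp.data.id;
// The RESTful module wraps results in a data array, so instead we read:
function extractCreatedId(resp) {
  return resp.data.data[0].id;
}

// Hypothetical RESTful-style response for a freshly created todo:
const resp = { data: { data: [{ id: 7, label: 'write docs' }] } };
console.log(extractCreatedId(resp)); // 7
```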

    Keeping with "The Gizra Way" obsession with best practices, the entire package is published as an installation profile. It even has a Behat test to verify it's installed properly! Go ahead and try the app.

    Nov 22 2014
    Nov 22
    $ dig A paulbooker.co.uk @
    ; <<>> DiG 9.8.3-P1 <<>> A paulbooker.co.uk @
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<-
    $ dig MX paulbooker.co.uk @
    ; <<>> DiG 9.8.3-P1 <<>> MX paulbooker.co.uk @
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<-

    If no DNS server argument is provided, dig consults /etc/resolv.conf and queries the name servers listed there.

    $ cat /etc/resolv.conf
    # Mac OS X Notice
    # This file is not used by the host name and address resolution
    # or the DNS query routing mechanisms used by most processes on
    # this Mac OS X system.
    # This file is automatically generated.

    These nameservers are provided by my broadband provider ..

    $ dig -x
    ; <<>> DiG 9.8.3-P1 <<>> -x
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<-

    .. Virgin Media, via DHCP.

    If you want to find out more about dig, then you need to "man dig". Not sure what man is? Then you need to "man man". Dig? :D

    Nov 22 2014
    Nov 22
    /**
     * Implements hook_user_login().
     */
    function mymodule_infusionsoft_user_login(&$edit, $account) {
      global $tag;
      if (user_access('administer site configuration')) {
        return TRUE;
      }
      $contact_active = _mymodule_infusionsoft_contact_active($account->mail);
      if ($contact_active == FALSE) {
        // Load the anonymous user.
        $user = drupal_anonymous_user();
        drupal_set_message(variable_get('mymodule_infusionsoft_message', 'Infusionsoft account not created or has expired.'), 'warning');
        if (empty($tag)) {
          drupal_goto('user/register');
        }
        if ($tag == "expired") {
          drupal_goto('account-expired');
        }
        if ($tag == "blocked") {
          drupal_goto('account-blocked');
        }
      }
      else {
        // Update membership roles to match CRM groups/tags.
      }
    }

    function _mymodule_infusionsoft_contact_active($mail) {
      global $tag;
      $contact_active = FALSE;
      $contact_id = infusionsoft_contact_load_by_email($mail);
      if (!empty($contact_id) && is_numeric($contact_id)) {
        $groups = infusionsoft_group_contact_options($contact_id);
        $num_groups = count($groups);
        if ($num_groups == 0) {
          return FALSE;
        }
        if (in_array(STATUS__EXPIRED, $groups)) {
          $tag = "expired";
          return FALSE;
        }
        if (in_array(STATUS__LIVE, $groups)) {
          $tag = "active";
          return TRUE;
        }
      }
      return $contact_active;
    }
    Nov 21 2014
    Nov 21

    Since Google announced that it gives an additional SEO boost to sites that are fully encrypted with HTTPS, it is now advisable to encrypt your entire site and not just pages with sensitive information, such as user login and checkout pages.

    There are multiple methods to achieve this. We like using the modification below to the .htaccess file.

    In the .htaccess file that is located in the Drupal root directory, after the line:

      RewriteEngine on

    simply add this code:

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

    Your Drupal .htaccess file should now have a section that looks similar to this:

    # Various rewrite rules.

      RewriteEngine on

      RewriteCond %{HTTPS} off
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

      # Set "protossl" to "s" if we were accessed via https://.  This is used later
      # if you enable "www." stripping or enforcement, in order to ensure that
      # you don't bounce between http and https.
      RewriteRule ^ - [E=protossl]
      RewriteCond %{HTTPS} on
      RewriteRule ^ - [E=protossl:s]

    NOTE: This, of course, assumes that you've procured a valid SSL certificate for your site/domain and have it installed correctly.

    Nov 21 2014
    Nov 21

    When buying a car, there’s a reason you are given such a comprehensive user's manual: to cover everything that the salesperson or technician was unable to show you in the first demonstration. Things like what to do when the "Check Engine" light comes on, or which grade of oil to use when it comes time for an oil change. Although you may not reference it often, when needed the supporting user manual is worth its weight in gold. Referencing it in time of need can save frustration, time, and money; but more importantly, without it, there is the possibility of the owner causing serious damage with potentially dangerous outcomes. Imagine someone over-inflating their tires and having them fail while driving on a highway.

    Showing proper website documentation

    The same principles apply to documentation in a web development environment. Regardless of how often it is referenced, or the implications of the task being referenced, not having it in time of need causes unnecessary client frustration and, at minimum, a poor user experience. The worst-case scenario can mean site outages and significant opportunity cost from brand damage or revenue loss. A successful project that was delivered on time and on budget can be tainted; truly, a lose-lose scenario.

    Delivering complete and relevant client documentation can be tricky, and ultimately you do need to determine the comfort level of a client to achieve this. Some pointers to keep in mind for successful documentation:

    1. Enrich your documentation - Screenshots (and sometimes even videos) are great to include in the documentation. Enriching your documentation with media adds a level of polish that will really set your team above the competition.

    2. Take nothing for granted - It goes without saying that everyone has their own comfort level when it comes to the care and feeding of a website. Even within the client's content team, there may be a range of skill levels. The best thing to do is cover as much as you can, short of giving a tutorial on how to use the browser or OS.

    3. Get an outside opinion - While it can be good to have a developer or solutions architect on the project write most of the documentation, it is usually worth having someone not as familiar with the site actually attempt to *use* the documentation. It may even be worth getting someone non-technical to look at it, depending on the comfort level of the client.

    A good way to begin authoring client documentation is to form its structure from the actual requirements documentation drafted during the discovery phase. The reason for this is twofold: it gives hierarchy and chronology to the document, and it also ensures that you don't miss outlining any of the features painstakingly built out by the developers.

    Consider the following scenario: The tech firm, Web Monkeys, delivers a completed project to their client, Hotels for Alpacas. The client is extremely happy with the way the project played out, from discovery to delivery.

    Eventually, after extended use of the website, they realize there are some particular pain points that put undue stress on the content team. Hotels for Alpacas calls up Web Monkeys to explain the situation, but there is a problem: the entire original web team has moved on to other opportunities. All Web Monkeys can do to address their client's concern at this point is spend a lot of unnecessary time and effort to solve the issue. This is worsened by the fact that the original developers built a fix for the issues in question, but did not document it!

    I think by now it is obvious that this (and many other) post-launch issues could be alleviated or avoided altogether by fastidiously documenting everything we do. As this is such an important topic, we will be continuing our approach to documentation in the coming months with subsequent posts outlining best practices for documenting specific areas of a project -- stay tuned!

    Nov 21 2014
    Nov 21

    This talk was given at Drupal Camp Baltimore 2014. In it, I discuss REST and (briefly) SOAP APIs built with Drupal. I give a number of hands on examples using Views Datasource, RESTful Web Services (restws), and the Services module.

    Nov 21 2014
    Nov 21

    Suppose you wanted to add right double angle quotes (», or &raquo;) to each of the comments in your comment block. First you would track down the functionality that needs changing, to theme_comment_block() inside the core comment module ..

    function theme_comment_block() {
      $items = array();
      $number = variable_get('comment_block_count', 10);
      foreach (comment_get_recent($number) as $comment) {
        $items[] = l($comment->subject, 'comment/' . $comment->cid, array('fragment' => 'comment-' . $comment->cid)) . ' <span>' . t('@time ago', array('@time' => format_interval(REQUEST_TIME - $comment->changed))) . '</span>';
      }
      if ($items) {
        return theme('item_list', array('items' => $items));
      }
      else {
        return t('No comments available.');
      }
    }

    function comment_block_view($delta = '') {
      if (user_access('access comments')) {
        $block['subject'] = t('Recent comments');
        $block['content'] = theme('comment_block');
        return $block;
      }
    }

    and then make your changes by overriding theme_comment_block inside your theme's template.php as ..

    function mytheme_comment_block() {
      $items = array();
      $number = variable_get('comment_block_count', 10);
      foreach (comment_get_recent($number) as $comment) {
        $items[] = l('» ' . $comment->subject, 'comment/' . $comment->cid, array('fragment' => 'comment-' . $comment->cid, 'html' => TRUE)) . ' <span>' . t('@time ago', array('@time' => format_interval(REQUEST_TIME - $comment->changed))) . '</span>';
      }
      if ($items) {
        return theme('item_list', array('items' => $items));
      }
      else {
        return t('No comments available.');
      }
    }

    After clearing the cache, your theme function will now be called instead of the theme function provided by the core comment module.

    Nov 21 2014
    Nov 21

    As Drupal Watchdog approaches its fifth year of publication, we’re sending out a call for contributions to our upcoming Spring/Summer 2015 issue. Guided by helpful feedback from our readers, I’m excited to announce that our next issue will be a Strategy Cookbook. What does that mean? I’m glad you asked…

    Anyone who has spent any time with Drupal knows that it is a very flexible tool. And while flexibility is wonderfully powerful, it can also be wickedly complex. Whether you’re a business owner or product owner, site builder or developer, a site maintainer or a project manager, a business strategist or analyst, a themer or a systems administrator, a designer or a student, you have certainly struggled with complexity around Drupal.

    This next issue of Drupal Watchdog aims to document a variety of useful strategies for navigating this complexity in all of its forms. We are looking for useful recipes, case studies, tips, and tricks for how to best leverage Drupal to solve strategic business problems.

    We are looking for articles on Content Strategy: the analyzing, sorting, constructing, placing and managing of content on a web site. Why are people visiting your website, what is the content they’re interested in, and how can you assure them a meaningful experience? What contributed modules and configuration choices do you use to support your content strategy?

    We’re looking for articles on Business Strategy: how stakeholders set goals and objectives that take into account available resources, competition, and the entire business environment. Are there key questions that need to be asked, specific to using Drupal? How does a business adapt to meet the changing landscape?

    We’re looking for articles that help readers differentiate the forest from the trees, focusing on value. We’re looking for explorations of the role of analytics in evaluating content and deployment strategy. And we’re looking for examples of organizational and business problems that Drupal is good at solving.

    Our Strategy Cookbook will be this and much more: please email me at jeremy@drupalwatchdog.com with proposals for what you’d like to write for this next issue of Drupal Watchdog!

    For more information on content length and process, visit the following links:

    We will require a rough draft of your contribution and any supporting materials by Monday, February 2nd, 2015. We must receive the final draft (including all images, tables, code snippets, etc) by February 16th, 2015.

    Email your proposals to jeremy@drupalwatchdog.com.