Apr 27 2015

Taking advantage of the DrupalCon Amsterdam gathering of Drupal bigwigs – and in our eternal quest to quench our thirst for enlightenment, and thrust onwards into the future and deeper into the unknown – we corner some of those key Drupal players to ask the all-important question burning in everyone’s mind: When is Drupal 9 coming out?

Dries Buytaert (Drupal Creator and CTO & Co-Founder of Acquia): Nine?

MortenDK (Viking, geek Röyale): Oh! Ho-ho-ho!

Leslie Hawthorn (Director, Developer Relations, Elasticsearch): After Drupal 8.

Tom Erickson (CEO, Acquia): Drupal 9!

Michael Meyers (V.P. Large Scale Drupal, Acquia): I think that the real question is how fast can we accelerate the pace of innovation in the Drupal community and get to 8.1, 8.2, 8.3, and I’m a lot less interested in Drupal 9 right now.

Holly Ross (Executive Director, Drupal Association): How much time have you got?

Bastian Widmer (Development and Operations Engineer, Amazee Labs): Let’s first finish Drupal 8.

Robert Vandenburg (President and CEO, Lingotek): When is Drupal 9 coming out? Never! When pigs fly!

Fabian Franz (Senior Performance Engineer, Technical Lead, Tag1 Consulting): I’m still thinking about Drupal 7.

Kieran Lal (Technical Director, Corporate Development, Acquia): I actually know the exact time... When it’s ready.

Apr 27 2015

Use Features together with a configuration management workflow and get the best of both worlds.

This is a preview of Nuvole's training at DrupalCon Los Angeles: "An Effective Development Workflow in Drupal 8".

Features for Drupal 8 will exist, and it is already in an alpha state. This may be surprising to those who wrongly assumed that Configuration Management was going to supersede Features; but, as we explained before, it all comes down to understanding how to use Features the right way in Drupal 8.

If you are using Features in Drupal 8 for deployment of configuration, you are doing it wrong!™

The role of Features

While the configuration management system was built to keep configuration consistent and to deploy it between different environments of the same site, it was not built for sharing configuration between different sites. This is where Features comes in. Features allows you to easily bundle related configuration and re-use it on another site.

For re-using configuration and building starter profiles, the current alpha version is already great. It works just like the re-usable “feature” we were blogging about last year, except it comes with a familiar UI and lets you do everything as a site builder. It even comes with a smart system that auto-detects configuration that belongs together.


Configuration changes are development

Developers and site builders working with configuration in Drupal 8 need to understand that changing configuration must be treated just like development. While this may seem trivial at first, the consequence is a fundamental shift in the approach to building a Drupal 8 site.

Drupal 8 comes with the configuration management system that was built to deploy configuration between different environments of the same site. It allows the whole configuration of a site to be synchronized both through the UI for “site builders” and through drush and git for “developers”. While the system allows individual configuration objects to be exported and imported, deployment and synchronization is always done with the entire set to ensure it is consistent.
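As a sketch of what that whole-set synchronization can look like with a D8-capable Drush (the command names and the config directory path are assumptions that may vary per setup):

```shell
# Export the active configuration to the sync directory and commit it.
drush config-export -y
git add -A && git commit -m "Export configuration changes"

# On the target environment, pull the code and import the full set.
git pull
drush config-import -y
```

Note that the import always applies the entire exported set, which is exactly the consistency guarantee described above.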

This means that when you change configuration on a production site you either opt out of using the configuration management to deploy configuration or you need to go through a more elaborate work flow to synchronize the configuration changes with the development environments similar to what you would need to do if you were to change code directly on the production server.

For the math-oriented,

Δ config ⊆ development

What if you want to be free of the temptation to edit configuration in production? Luckily, there is a module for that! It is called Configuration read-only mode, and it allows you to lock the forms where configuration is changed, thus enforcing its role as a "developers only" tool.

Of course, some configuration will differ between environments. Just as database credentials are kept separate from the rest of the code, environment-specific configuration can be overridden in each instance's settings.php or services.yml. The rest of the code is usually treated as a whole for consistency reasons; the same should be true for configuration.

What about not using configuration management?

Features and packaging-related modules should not be regarded as a solution for deploying only partial configuration. Packaged configuration by definition is never aware of the whole site and the possible inter-dependencies of the site's particular configuration. Deploying only partial configuration circumvents the safeguards Drupal put in place to make deployment more robust. Of course, nobody is forced to use the new configuration management tools, and you can easily opt out by simply ignoring that option. It is also still possible to develop a Drupal 8 site by sharing a database dump, just as with Drupal 5. But remember that a Drupal 7 approach will yield the Drupal 7 headaches. We recommend re-evaluating your deployment strategy when starting to use Drupal 8.

Managing distributions in Drupal 8

The more complicated scenario, which has yet to be tackled, is a feature for a richer distribution that updates over time. For example, a newer version of a feature could come with an updated view, or with additional or updated fields for a content type.

First steps in that direction have already been taken by Configuration Update Manager (a dependency of Features) and Configuration Synchronizer (a sandbox module). Configuration Update Manager compares the configuration in use on the site with the configuration provided by a module or installation profile. Configuration Synchronizer goes a step further and keeps a snapshot of the configuration as it was when installed, in order to determine whether the configuration has been customized by an administrator or whether it can safely be updated to the version provided in the new release of the module/feature.

All the modules mentioned above are used in a development environment of a particular site; in other words, in an environment where configuration changes are expected and are part of the process of updating a site.

For more, see our presentation from the Drupal Developer Days (embedded below); thanks to all the people there for the fruitful discussions we had on this topic during the event.

Apr 27 2015

The scheduling of posts was the first thing I noticed was missing from Drupal after moving over from WordPress. Thanks to the Scheduler module, we have that functionality back, with more control over what can be scheduled and who can access it.

The Scheduler module allows for Publishing and Unpublishing on a Content Type by Content Type basis. You can set up Scheduled Publishing on the Blog Content Type. Then, you can set up Scheduled Publishing and Unpublishing on something like a Sale or Contest Content Type. You can also control which roles have access to the scheduling functionality, and the ability to see what content is currently scheduled.

You want to make sure you have an external cron/crontab job set up on your server, running at the times you schedule, to ensure that your posts go out on time. If no one is visiting your site, it is going to sit idle and cached. Cron, sitting on the server and running on the server's clock and the schedule you set up, will poke Drupal just to say, "Hey, how's it going over by there, fella? You need to be doing anything right now?" Drupal is smart enough to say, "Yes, I need to do all of the things, thank you very much", and then does them.
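For example, a crontab entry along these lines keeps Drupal's cron firing on the server's clock (the URL and cron key are placeholders; match the interval to your tightest schedule):

```shell
# Run Drupal cron every 15 minutes (Drupal 7 style cron URL).
*/15 * * * * wget -O - -q -t 1 "http://example.com/cron.php?cron_key=YOUR_CRON_KEY"
```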

Installing and Configuring Scheduler

Scheduling Content

  • Depending on how you configured the Content Type, the scheduling options will appear on your Node Add screens either in their own fieldsets, or in the Vertical Tab at the bottom of the screen.
  • Type in the Date, or select the date if you use the Date/Date Popup Modules.
  • Check the Published checkbox and click Save.

Not every website I've built has needed the scheduling of content, but when they do, Scheduler is a very easy solution to add and configure.

Apr 27 2015

We love to share our knowledge. At every Drupal camp and Drupal Open Days in Ireland (not to mention most DrupalCons and DevDays internationally) we give presentations. This year we are giving three talks at Drupal Open Days Ireland 2015. Here's what we'll talk about:

Alan Burke: Doing the post-launch dance

You got the project live. It was on time. It was under budget. You did the post-launch dance. Now ... it's time to support that badboy. That great client, who now has a great website, wants some more great features (and that thing you thought was working, no longer is).

Here's the art and science of successful support.

Read about our Drupal Advice and Support | View event notice

Mark Conroy: The content editor deserves an easy life

You know how it is. You design a beautiful website (and the site visitors love it), but the backend edit page is long, and ugly, and hard to follow, and ...?

As developers, we often forget that our end users are the website editors. Their end users are the site visitors. Well, let's fix this mess so that website visitors and content editors both have the time of their lives.

Read about our Content Strategy Service | View event notice

Stella Power: Building a CRM into your Drupal website

You've heard of SalesForce, and Microsoft Dynamics, CiviCRM, and a whole host of other CRM systems that can plug into your Drupal website, haven't you?

Yes? Well, have you ever thought of building a CRM within your Drupal website, using Drupal contributed modules, entities, and fields? It's a great solution for small- to medium-sized enterprises that want to keep control of all of their data but can't afford the price of commercial CRMs.

In this presentation, we'll walk you through the steps needed to create a CRM using Drupal 7 and RedHen.

Read about our CRM Solutions | View event notice


And if hearing our pearls of wisdom isn't enough, there is a whole host of other great talks going on, not least the keynote "Managing Projects the Drupal Way" by Emma Jane Westby.

Will we see you there? Let us know in the comments if you're coming and any suggestions you might have for our talks.

Apr 27 2015

Are you wondering what code quality guidelines your Drupal developers should follow, in addition to Drupal's coding standards, to make the code readable, secure, and performant? What are the best practices so that developers can follow each other's code easily and make code review faster?

Here are the guidelines you should follow in addition to the Drupal coding standards. Note that we use PHPStorm as our editor so some of our guidelines are based on what it does. Please adjust these recommendations based on the editor you are using.

1) Use of the t() function is mandatory; otherwise it is a security risk. It also helps if you want to make the site multilingual at some point. Make sure to always use it. Note that in hook_menu(), the title value should not use the t() function, since it is passed through t() automatically later.
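A minimal sketch of both points (the module and function names are hypothetical):

```php
<?php

/**
 * Returns a translatable greeting.
 */
function mymodule_greeting($account) {
  // @name placeholders are sanitized by t() itself.
  return t('Welcome back, @name!', array('@name' => format_username($account)));
}

/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  $items['welcome'] = array(
    // No t() here: menu titles are passed through t() automatically.
    'title' => 'Welcome page',
    'page callback' => 'mymodule_greeting_page',
    'access arguments' => array('access content'),
  );
  return $items;
}
```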

2) If you are getting values from a form, make sure to use check_plain(); otherwise it is a security risk.
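For instance, in a hypothetical submit handler:

```php
<?php

/**
 * Form submit handler; sanitizes a user-supplied value before storing it.
 */
function mymodule_settings_form_submit($form, &$form_state) {
  // check_plain() HTML-encodes special characters, preventing XSS when the
  // value is later rendered as markup.
  $safe_label = check_plain($form_state['values']['label']);
  variable_set('mymodule_label', $safe_label);
}
```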

3) Each function should start with a phpdoc-style comment (starting with /**) about what the function does. Document the input parameters as well as the return value in the comment. PHPStorm gives a warning if the phpdoc comment doesn't match the function signature. Do not ignore it.

4) If the function is a hook, write "Implements hook_{name}()." There is then no need to document arguments and return values. For example:

 * Implements hook_menu().

For hook_form_FORM_ID_alter(), add which form you are altering. For example:

 * Implements hook_form_FORM_ID_alter() for user_register_form().
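Putting points 3) and 4) together, a sketch with hypothetical names:

```php
<?php

/**
 * Implements hook_form_FORM_ID_alter() for user_register_form().
 */
function mymodule_form_user_register_form_alter(&$form, &$form_state, $form_id) {
  $form['account']['name']['#description'] = t('Pick a memorable username.');
}

/**
 * Finds members related to a given member.
 *
 * @param int $member_id
 *   The ID of the member to find relations for.
 *
 * @return array
 *   An array of related member IDs.
 */
function mymodule_related_member_ids($member_id) {
  // ...
  return array();
}
```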

5) All the variable names that you define have to be meaningful. For example, use $old_category_name instead of $val. A person reading your code should know what that variable stores. Also, if a variable name has two words in it, use "_" to separate them. For example, do not use $memberid; use $member_id instead. PHPStorm does spellcheck automatically. There is no reason to use a variable name with a wrong spelling, unless it is something related to Drupal. For example, PHPStorm gives a spelling warning on $nid, which we are fine to use since it has a direct connection to Drupal.

6) No function should be more than 50 lines long, including spaces. If it is longer, then create sub-functions. In some cases a function will need to be more than 50 lines long, and that's fine. Just be prepared to justify why it needs to be so long.

7) Use meaningful names for functions. Separate words in a function name with "_". For example, instead of using "search_relatedmembers_display", use "search_related_members_display".

8) The .module file should contain only hook implementations, e.g. hook_init(), hook_menu(), hook_preprocess_node(), etc. The .module file can also contain a form's validate and submit handlers. All other functions that are not hooks should be in separate files. This will reduce page load times. The separate files you create should have a logical structure based on which functions will get loaded together in a page request. For example, when I am searching users, I will not need functions related to content search, so user-search-related functions go in a separate file that is loaded only on the user search page, using hook_menu() or module_load_include(). If a function is shared between multiple files, write it in a common file such as <module-name>.common.inc and include it using the module_load_include() function. Before you move any function to a file outside the .module file, use Edit -> Find -> Find Usages in PHPStorm to see which other modules/files are using the function. You will need to use module_load_include() in those files so that the code does not break.
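A sketch of that split (the file and function names are hypothetical):

```php
<?php

/**
 * Implements hook_menu().
 */
function mymodule_menu() {
  $items['user-search'] = array(
    'title' => 'User search',
    'page callback' => 'mymodule_user_search_page',
    'access arguments' => array('access content'),
    // Drupal loads this include only when the page callback runs.
    'file' => 'mymodule.user_search.inc',
  );
  return $items;
}

/**
 * Pulls in a helper shared between several include files.
 */
function mymodule_some_page() {
  module_load_include('inc', 'mymodule', 'mymodule.common');
  return mymodule_common_build_output();
}
```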

9) When you commit code via PHPStorm, it shows you a list of errors and warnings. A lot of the warnings are that $form, $form_state, etc. are not used. These are fine. But you should take warnings and errors such as "variable may not have been defined" seriously, and remove them.

10) The PHPStorm menu has Code -> Locate Duplicates. Use "Locate Duplicates" to search for code duplication in the custom modules folder. If there is significant and logical duplication, then create a function out of it and use it.

11) If you find that two functions are doing the same thing except for minor changes that can be handled using arguments, create a single function with arguments and use that.

12) If you are querying for entities, use EntityFieldQuery. For every other query, use dynamic queries. Do not use db_query(), since the query you write may not be applicable to all database types, and you may need to change it if the database type changes in the future.

13) Always use LANGUAGE_NONE instead of 'und'.

14) When you are getting the value(s) of fields from a node or entity, use field_get_items() instead of getting the items via $node->field_name[LANGUAGE_NONE][0]['value']. The advantage of field_get_items() is that it takes care of getting the items in the correct language, which can be useful later in case the client wants to create a multilingual site.

15) Template files should not have any logic except "for" and "if" statements. Put any logic in preprocess or process functions. Code execution in preprocess or process functions is much faster than in template files.
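To illustrate the querying and field-access guidelines above in one hedged sketch (the bundle and field names are hypothetical):

```php
<?php

// Querying entities: EntityFieldQuery instead of db_query() (Drupal 7).
$query = new EntityFieldQuery();
$result = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', NODE_PUBLISHED)
  ->range(0, 10)
  ->execute();
$nids = isset($result['node']) ? array_keys($result['node']) : array();

// Non-entity data: a dynamic query, portable across database drivers.
$titles = db_select('node', 'n')
  ->fields('n', array('nid', 'title'))
  ->condition('n.type', 'article')
  ->execute()
  ->fetchAllKeyed();

// Reading a field value: field_get_items() resolves the correct language.
$node = node_load(reset($nids));
$items = field_get_items('node', $node, 'field_subtitle');
if ($items !== FALSE) {
  $subtitle = $items[0]['value'];
}
```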

If you have some other guidelines that you follow in addition to the above, please write them in the comment below. Want to know more about Drupal development and how to create and manage large Drupal sites? Subscribe to our weekly newsletter!

Apr 27 2015

Deciding upon an open source, flexible, enterprise-ready solution for your marketing efforts is a daunting task. Should you build custom solutions leveraging a framework, or adopt a pre-built CMS? What are the costs involved? Can you live with limitations or being boxed in? How well can it be supported?

What if there was a world-class CMS that fully embraced an enterprise framework so that you could have the best of both worlds? What if that solution not only covered your website marketing needs, but could also be a full blown API for your mobile, POS systems, CRM and other marketing channels? What if that solution was open source so that you were no longer handcuffed to someone else's development roadmap and were not forced to pay $100k+ licensing fees just to use the software?

As a marketer, you could free up licensing fees for more important initiatives. You could leverage pre-built CMS solutions while maintaining the flexibility to build completely custom solutions. You could leverage data across different channels and/or combine channel data to build a truly remarkable hub for all of your marketing efforts.

Symfony 2 adds class(es) to your enterprise

Symfony, Drupal’s newly adopted object-oriented framework, provides strict compliance and more structure in the underlying application, making it more accessible, maintainable, and scalable. Developers who understand object-oriented programming (OOP) patterns, or who have worked with Symfony in the past, can begin contributing more easily without having to learn a non-standard, "hacky", CMS-centric way of developing a website.

While Drupal 7 is widely known as a resource "hog", Symfony was designed to load only the resources it needs upon each page load. This should decrease the amount of resources needed to run a Drupal 8 site. Other benefits include:

  • Improved security
  • Better upgrade paths
  • Built-in version control
  • Improved language support
  • Mobile-first design

"This 'success' is due to many factors but probably because we were one of the first frameworks to recognize and embrace the changes of the web and PHP: PHP 5.3 vs PHP 5.2, Git vs SVN, components vs a monolithic framework, Twig vs PHP, dependency injection, ... Symfony2 is a revolution, not just a big evolution of the platform."



Drupal 8 will also adopt Twig as its template engine, which provides several advantages over its predecessor, PHPTemplate. Front-end developers now have an easier theme system, more freedom to innovate, and reduced overhead; all of which translate to a better user experience for your consumers, more flexibility for your designers, and more profits for your department. Simply put, you can do a lot more with a lot less.

REST Easy With This API

You'll gain the most benefit from Drupal 8’s decoupling of its data and presentation layers, leveraging a fully built-in RESTful interface. This powerful change allows developers to more easily implement Angular.js, Ember.js, or other client-side MVCs to share and display content and data across different marketing channels.
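As a hedged sketch (the exact URL pattern and response shape depend on how the site's REST resources are configured):

```javascript
// Fetch a node from a hypothetical Drupal 8 REST endpoint and read its title.
fetch('https://example.com/node/1?_format=json', {
  headers: { 'Accept': 'application/json' },
})
  .then(function (response) { return response.json(); })
  .then(function (node) {
    // In D8's JSON output, field values arrive as arrays of value objects.
    console.log(node.title[0].value);
  });
```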

Marketing campaigns are no longer limited to your website or social media pages. Marketing data can be consumed by and shared easily with CRM, Marketing Automation Systems, POS systems, native mobile applications, wearables, desktop applications and more.

Imagine being able to change content on your website, mobile application, POS system or any other connected application, through one, and only one, system - Drupal.

The world is becoming a more connected place at every turn. Drupal proves to be leading the way into a bright future. How can more structure (and more flexibility) help your organization?

Apr 26 2015

The open web achieved a small but important milestone with the latest release of Chrome (42) and its added support for push notifications, offline usage, and better performance. These features are essential in closing the gap between native (iOS/Android) and web (HTML5) apps.

That's a really big deal (and the most exciting thing in HTML5 so far), because native apps are kicking our ass on mobile. Mobile has already surpassed desktop, but people are using native apps in favor of web apps in the browser (source).

Not surprising. Web apps lack all the great functionality that native apps offer, like offline usage, better performance, background services, push notifications, and so on. However, as (Drupal) site builders/developers, creating a responsive website is about the best we can do, and that doesn't cut it.

Until now!

Meet the Service Worker

The service worker is a piece of JavaScript which runs in the browser, but outside the DOM, as a first-class citizen. This API offers us things like:

  • Caching (offline usage, "install" a web app, no more need for AppCache)
  • Push notifications (native push notification through Android/Chrome)
  • Fetch (replaces XMLHttpRequest)
  • Sync (background updates and retries)
  • Geofencing

I highly recommend watching this great video presentation by Jake Archibald, one of the people on the Chromium team.

Can I use it?
First the good news: if you're using Chrome (42+) you can already use it, and Firefox is following closely. The bad news: no word from Safari on whether they will support it. Internet Explorer is considering it. There is also this overview page listing all the various implementations and browsers.

Because of the high-level permissions used by this API, you will also need a secure connection (HTTPS/SSL) in order to get it working, although during development localhost is allowed.
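Registering one is straightforward; a minimal sketch ('/sw.js' is a hypothetical script path):

```javascript
// Register a service worker; requires HTTPS (or localhost during development).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function (registration) {
      console.log('Service worker registered, scope:', registration.scope);
    })
    .catch(function (error) {
      console.error('Service worker registration failed:', error);
    });
}
```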

Push Notifications
One of the cool things made possible by the service worker is native push notifications. We are moving from a pull-based web to a push-based one (The Big Reverse of the Web). This new web is less about websites and more about web apps. Notifications are the new platform. In our case, the platform is Android or Chrome(book).

There is a great tutorial on HTML5Rocks on how to implement push notifications. Just remember you will need HTTPS. If you want a quick demo, try GitHub Pages, which offers HTTPS by default. Based on the tutorial, I made a quick demo page.

Push notification Chrome

Example of a push notification in Chrome. Note how Chrome does not have to be the active window and your site does not have to be open.  

Back to Drupal. How would these new possibilities fit into Drupal 8? I can imagine it working very well in a headless/decoupled setup, but I don't think it's a good fit for the traditional output Drupal generates. One might argue web apps and traditional websites serve different needs. However, if the future is a push-based web, traditional websites might fade into the background.

Looking at push notifications specifically, there is already a module for native apps which could be enhanced to implement the new service worker.

On a side note: Chrome is also working hard on improving the performance of web apps. Using the transition API, animations should be just as smooth as in native apps, without any more annoying "flashing white screens" breaking the user experience. 2015 is promising to become an interesting year for web apps.

Apr 26 2015

It's been a while since I quickly benchmarked Drupal 7 on PHP7. At the time of that writing, it was still not possible to benchmark D8 on PHP7; there were too many compatibility issues that simply would not let D8 boot on PHP7.

There is a D8 on PHP7 initiative in this issue queue at Drupal.org that has helped get D8 working on PHP7.

Today I was finally able to get D8 up and running on PHP7, and the results were astonishing. We only found one PHP bug, which was reported (and solved in a matter of hours by the PHP team), an indication that PHP7 is still under heavy development and that any test ride should expect issues.

It is not only the switch to PHP7 that makes a difference, but also the fact that D8 is faster than D7 out of the box (in the sense that it leverages caching by default and in a more sophisticated way).

I am, though, not very happy that the D8 performance efforts are so focused on caching. Relying on caching as the main performance driver is not good for a CMS that aims at being a CMF (a sort of development framework). Caching adds complexity to applications and makes development and troubleshooting slower and more time consuming. At its heart it is very dangerous, because it helps cover serious performance flaws in the overall system's design. It's a sort of rat trap: it helps you hide bad implementations or designs in the short term, but it simply bursts once your application has grown, leaving you with no alternative but starting from scratch, because you have been hiding your trash under the carpet for a very long time and there is nothing to be saved. Premature optimization is, after all, not the root of all evil; it just makes sense to design your system from the ground up to be fast and scalable. Designing at will, thinking that later you will make it scalable and fast with caching, is a double-edged sword.

Premature Optimization can be defined as optimizing before we know that we need to.

Now the results. I had to shift from concurrent throughput testing to sequential single requests, because D8 on PHP7 was hitting hardware limits, leaving us with numbers such as 1,200 requests per second with 8 concurrent requests. Anyway, sequential non-concurrent requests give us a better idea of the real performance difference.

This is a very simple 1,000 sequential requests Apache Benchmark test against the default Home Page with Page caching turned off and opcache enabled.
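The run was along these lines (the URL is a placeholder):

```shell
# 1,000 sequential requests, concurrency 1, against the front page.
ab -n 1000 -c 1 http://localhost/
```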

With PHP 7.0.0-dev (Build date: Mar 25 2015 22:22:57):  210 requests per second

With PHP 5.6.6: 22 requests per second.

We are not comparing D7 to D8, but PHP 5.6 to PHP 7 on the same exact setup. Our D7 comparison showed +16% for cached pages and +58% for uncached, but this time we are seeing a 10x improvement in performance, and turning page caching on and off made no difference to the results (a possible indication that we are either setting something up wrong or that something is broken in D8 page caching).

Apr 26 2015

Given the fact that in Drupal 8 every module and theme has to declare their CSS and JS files as libraries, we should think about and discuss naming conventions and best practices for these.

We will see a massive increase in library definitions in D8, as every single theme developer and many module developers are affected. Before, in D7, a theme just had its JS and CSS files listed in its info file, without any explicit library declaration, and therefore without naming the assets. Also, most modules didn't have to declare anything. Most of the time, modules just include their assets on page load and/or attach them to a form element, etc. In fact, the only D7 modules that have to care about library definition and naming are the ones that integrate a third-party JavaScript library into Drupal, like Colorbox or our Outdated Browser module.

This changes with D8. There will be a lot of similar use cases and repetitive tasks. Every theme, of course, has its global CSS and JS file(s). Also, a module that integrates a 3rd-party JS library has at least two libraries to define: the library itself, and a small custom script that does the initialization and attaches a Drupal behavior. These are some typical cases you'll often need to declare yourself. And for these cases, it would be important to have some naming conventions.


Let's start with themes. This is rather easy, as there are already some standard themes shipped with D8, e.g. Bartik. The recommended way here seems to combine your CSS files into a library named "global-styling".

The only open question is, what if you also have some scripts in your theme? Should we imply that the script file(s) should be declared separately as a "global-scripts" library?

But what, if they belong together inseparably? Wouldn't it be better to define a single library containing both CSS and JS assets? Then something like "global-styling-and-scripts" or "global-styles-and-scripts" sound appropriate.

(While reviewing my post, I've just found a blog post by Andrei from Appnovation, recommending "global-scripts" for JS files only and "global-styles-and-scripts" for combined libraries.)


I'm now talking about the typical use case of modules doing third-party JS library integration. Typically we have the JS library itself on the one hand, and the module's initialization script on the other. In D7, it was easy: you didn't declare your initialization script, but just loaded it under certain circumstances. And the JS library would be declared with the help of Libraries API, naturally based on the name of the JS library itself. So, for example, the Colorbox module defines the JS library under the name "colorbox", the Outdated Browser module under "outdatedbrowser", etc.

In D8, you have a different situation: the libraries are namespaced with your module's name, e.g. there's "core/jquery" or "core/modernizr" defined in Drupal core. That means that, whatever name we give our library, its name only has to be unique within our module.

That leads us to our first question: as the module's name typically matches the JS library's name, should we still name the library based on its original name, so that we end up with "colorbox/colorbox", "outdatedbrowser/outdatedbrowser", etc.? Or should we use something like "library" instead, so that the fully qualified name will be "outdatedbrowser/library"? Then, of course, we have to distinguish between modules having the same name as the integrated library and the ones whose names differ (imagine something like a "mycoolslideshow" module that integrates different libraries like "ultraslider" and "megaslider").

Maybe, a combined third option would be even better, taking "LIBRARY_NAME.library", like "outdatedbrowser/outdatedbrowser.library"?

And what about the initialization script, the file that mostly contains only a few lines of code that actually initialize and start the library? Here, a few things come to mind, like "init", "setup", or "drupal". Additionally, we have to think about the same question as above: whether or not to include the library's name. Based on the examples above, we now have multiple combinations, like "outdatedbrowser/init", "outdatedbrowser/drupal", "outdatedbrowser/outdatedbrowser.init", "outdatedbrowser/outdatedbrowser.drupal", "outdatedbrowser/drupal.outdatedbrowser", and so on.
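For reference, a hypothetical outdatedbrowser.libraries.yml sketch following one of these patterns (the file paths and version strings are assumptions):

```yaml
outdatedbrowser:
  version: 1.x
  js:
    libraries/outdatedbrowser/outdatedbrowser.min.js: {}
  css:
    theme:
      libraries/outdatedbrowser/outdatedbrowser.min.css: {}

outdatedbrowser.init:
  version: VERSION
  js:
    js/outdatedbrowser.init.js: {}
  dependencies:
    - outdatedbrowser/outdatedbrowser
```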

Examples in the wild

I've done some small research on the currently existing D8 modules. Unfortunately, there aren't many at the moment, and a couple of them don't seem very mature or even working.

It's always good to have a look at Drupal core first, but I couldn't find exactly the use case described above. CKEditor, for example, is split: the library itself is defined within the core libraries as "core/ckeditor", whereas the actual usage of the library is done within the ckeditor module, defined as "ckeditor/drupal.ckeditor", thus following the pattern "drupal.LIBRARYNAME" for the initialization script.

In our Outdated Browser module, we've decided to use "outdatedbrowser/library" and "outdatedbrowser/drupal" for the moment. But I'm thinking about changing it to "outdatedbrowser/outdatedbrowser" and "outdatedbrowser/drupal.outdatedbrowser".

Colorbox does it differently. At the moment, the Colorbox library isn't defined within a libraries.yml file at all, but with a Libraries API hook. The initialization script, of course, is part of the yml file, but is named "colorbox/colorbox", which is very misleading imho.

Another example I've found uses "core" for the library and "drupal" for the behaviors (sidecomments/core and sidecomments/drupal), which is slightly different from another example: mathjax/source and mathjax/setup.

Sticky Sharrre Bar's decision is again different from all the others: sticky_sharrre_bar/sharre for the Sharre JS library, and splitting its own assets into sticky_sharrre_bar/sticky_sharrre_bar_css and sticky_sharrre_bar/sticky_sharrre_bar_js.

I've also seen modules defining one single library containing both the external library and their own assets, which is imho not the best solution, and not what was intended by the Drupal library API.

Apr 26 2015
Apr 26

Drupal. I didn't come to Drupal for the code 14 years ago, I came for the community and stayed for the functionality. That is partly why I never liked the "Come for the code, stay for the community" slogan. Sure, it is a perfectly cheesy slogan. If all you want to attract are coders in the community, it is even a perfect slogan. For a perfect community, of perfect happy coders.

We have got to learn to address humans. Not just humans who can code. That is, if we want to be a true community for a product. A product that is well designed and does attract both the business and the user to participate in the product, the process and hence the community.

Leaders. Entrepreneurs. Visionaries. Testers. Documentation writers. Project managers, marketeers. To name just a few. Of course developers can also have the skills to do these jobs, an often overlooked fact. But someone who is "just" a marketeer will not come for the code. (S)he might come for the job at hand, the money that might be involved, or the functionality, but the best reason for an external non-developer to come to the community to help out is the community that is helping her/him out. Not clean lines of code, but helping hands of love.

This is why I am active in the Drupal community: to help get others on board. With a rocking team (Marja, Imre, Rolf, Peter and others) we are organising the DrupalJam event in the low lands. The DrupalJam started with 20+ persons and pizzas in a room and is now a big event with over 300 people attending, over 25 sessions and a budget in the tens of thousands.

DrupalJam -organised by the Dutch Drupal foundation- will be held in Utrecht, April 30, and it really represents the helping hands -not just the lines of code- of the community. With keynotes from Bruce Lawson (HTML fame) and Marco Derksen (digital strategist, entrepreneur), and featured speakers like Jefrey Maguire (moustache fame, D8), Anton VanHouke (leading design agency in the NL, introduced scrum into strategy and design), Stephan Hay (designer, writer) and Ben van 't Ende (Community Manager for the TYPO3 Association).

And like last year, Dries will do a virtual Q&A. If you want to ask him nearly anything, do so at this form.

The event will be held in an old industrial complex, as can be seen in these shots.

I am really looking forward to this event; it has a long tradition and has always strengthened the community and brought in new blood. People who "come for the business and stay for the community". Those who come out of the need for design and stay for the love. Or who love the functionality and stay to organise the next DrupalJam.

PS: Now this head has rolled, it is time we decide what we do with the body. If you have 5 minutes of spare time, read this post, and if you have one minute more, see this one from 2008 as well.

Apr 25 2015
Apr 25

I recently had to switch profiles for this website. In the process of doing that, I immediately afterwards said “wow, I feel like other people have had this issue”. Sure enough they have… on this blog early last year by our own @aitala in the post How to Remove Drupal Install Profile.

So, when I then went to convert my own blog to use the publicize distribution, I said "NOT AGAIN!", and read his post. At the bottom, he pointed to a project called Profile Switcher, and then this issue to add drush integration piqued my interest. Because I'm really cool, I decided this is how I'd spend my Friday night downtime… patching a project I had no need for until 5 minutes prior to starting to work on it :).

Below is a video showing how you can use the patch to run the profile-switch drush command, as well as (of course) a drush recipe that automates everything in the migration process that can be done via drush.

Apr 25 2015
Apr 25

The year is 2020. We’ve managed, through automated testing, Travis CI, Jenkins CI, and crontabs, to completely eliminate downtime and maintenance windows while at the same time increasing security and reducing strain on sys admins. There are very few “sys admins” in the traditional sense. We are all sys admins through Vagrant, Docker, and virtualization. We increasingly care less about how scripts work and more about the fact that they do, and that machines have given us feedback ensuring their security. They don’t hope our code is secure; they assume that it is insecure, and treat every line authored by a human accordingly.

We’ve managed to overcome humanity’s mountains of proprietary vendors, replacing their code and control with our own big ideas, acted upon by a community that builds the tools we need. We have begun to bridge the digital divide on the internet, not through training, but by refusing to be solely driven by financial gain.

We are open source. And we are a driving force that will bring about the Singularity (if you believe in such a thing). So we did it, gang; it took a while, but wow, we do almost nothing and we’ll always be employed because we know how the button or the one-line script works. Congrats! Time to play Legos and chew bubble gum, right?

Or is this just some far-off, insane utopia that developers talk about over wine in “The Valley”?

This vision of the future, 5 years out, isn’t as crazy as it might sound if you see the arc of humanity. In fact, I actually believe we’ll start to get to that point closer to 2018 in my own work, and at a scale thousands of times beyond what was previously thought possible. This is because of the convergence of several technologies, as well as the stabilization of many platforms we’ve been building for some time; yes, all that work we’ve been doing, releasing to Github, drupal.org, and beyond… it’s now just infrastructure.

Today’s innovation is tomorrow’s infrastructure, and this becomes the assumed playing field on which new ideas stand. Take Drupal’s old slogan, for instance: Community Plumbing. That thing we all paid for significantly at one time, but now just take for granted; that thing is becoming even more powerful than any of us could have imagined.

So, enough platitudes. (ok maybe just one more)

“Roads… Where we’re going we don’t need roads.”

This next series of posts that will start to roll out here are things I’d normally save for presentations, war gaming sessions and late night ramblings with trusted colleagues. I’m done talking about the future in whispers; it’s time to share where we’re going by looking at all the roads and how they’ll converge. After all, the future of humanity demands it.

If you haven’t seen the video “Humans need not apply” then I suggest you watch it now and think. What can I do to help further bring about the end of humanity… Ok wait, not that, that’s too dark. Let’s try again…

What can I do, and what knowledge do I invest in, to be on the side developing the machines instead of the side automated into mass extinction (career-wise)?

Hm… still pretty dark? No no, that’s pretty spot on. What job are you working towards today so that 5 years from now you are still relevant? I hope that the videos to be released in the coming days provide a vision of where we’re heading (collectively), as well as some things to start thinking about in your own job.

What are you doing right now that can change the world if only you spoke of it in those terms?

Apr 25 2015
Apr 25

Whenever there is a constraint on the number of developers in a pool, it can make it more difficult to solve issues. As we have been developing Nittany-Vagrant, I have found that there is definitely a smaller pool of developers running on a Microsoft Windows host for their vagrant based virtual machines.

The extra-credit problem of the day for me was how to allow Vagrant to automatically size a virtual machine's memory pool when utilizing VirtualBox as the VM provider on Windows. This is a well-known solution on OSX:

`sysctl -n hw.memsize`.to_i / 1024 / 1024 / 4

However, it took a bit of digging to figure out a way that could be reliably reproduced for the folks that are using our Nittany-Vagrant tool on Windows hosts.

Standard with Microsoft Windows is a command line interface to the Windows Management Interface (WMI) called WMIC. WMIC has a component that allows you to read/edit system information. In particular:

wmic os get TotalVisibleMemorySize

will pull the total memory that is available to Windows. However, this also produces a number of headers that we don't want. We can then use grep to pull a line that starts with digits:

grep '^[0-9]'

Use Ruby to cast it as an int (just to be sure):

.to_i

and then divide it by 1024 (to get MB instead of KB).

We then only want to use 1/4 of the RAM, so we divide it by 4 and slap it into the vm memory size customization:

v.customize ["modifyvm", :id, "--memory", mem]

That's it! That's all... but there were a few other catches that made this ... less intuitive than one would hope.

  1. grep doesn't exist on Windows systems. However, if you install git-scm for Windows, it installs a number of GNU-like tools, including ssh and grep. If you ensure that these (located in "C:\Program Files (x86)\Git\bin") are on your path, you get grep. The easiest way to make sure this happens is to select the third option, "Use GIT and optional Unix tools from the Windows Command Prompt", when installing git-scm. Winner.
  2. Determining your host OS from Ruby via RbConfig::CONFIG['host_os'] will report what your host operating system is... kind of... With git-scm installed, it reports as "mingw32". With this, you can specify the command to run to retrieve the memory size based upon what OS is being reported. This is more trial and error than it probably should be; however, it does work. If necessary, use:
    puts RbConfig::CONFIG['host_os']
    to determine what your OS is reporting so you can branch appropriately in your Vagrantfile.

This leaves us with:

if host =~ /mingw32/
  mem = `wmic os get TotalVisibleMemorySize | grep '^[0-9]'`.to_i / 1024 / 4
  if mem < 1024
    mem = 1024
  end
  v.customize ["modifyvm", :id, "--memory", mem]
end
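The sizing arithmetic can also be factored into a pure function and sanity-checked on its own. A minimal sketch (the helper name quarter_memory_mb is mine, not part of the original Vagrantfile):

```ruby
# Given the host's total memory in KB (as wmic reports it),
# return a quarter of it in MB, with a 1024 MB floor so small
# hosts still get a usable VM.
def quarter_memory_mb(total_kb)
  mem = total_kb / 1024 / 4   # KB -> MB, then take a quarter
  mem < 1024 ? 1024 : mem
end

quarter_memory_mb(16 * 1024 * 1024)  # 16 GB host -> 4096 MB for the VM
```

The result can then be passed straight into the v.customize call.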

Hopefully, that helps some independent Windows-based developer out there... Cheers, +A

Header image - Vagrant - courtesy of mike krzeszak Creative Commons 2.0

Apr 25 2015
Apr 25


I’ve been hearing about Yeoman for quite some time now. Pretty much since the project took off, or soon after. As a tool born in the Javascript community, I came across this little gem when I was learning about Node.js and the different tools and frameworks available for it, either in my free time, or as part of my labs time at my company. Sadly, I didn’t really pay much attention to it. At the end of the day, Node.js was just something I was learning about, but not something I was going to be able to put in place or introduce in projects in the short term. Or, even if it was something I *could* do, it wasn’t in my plans, anyway.

The other reason why I didn’t look into it closer, was that I mistakenly thought it to be a tool only useful for Javascript developers. Some time ago I noticed that Yeoman was getting plenty of attention from other communities too, and in a closer look, I understood that it wasn’t a tool for Node.js, but instead, a tool built on top of Node.js, so I decided to give it a try and see if I could make something useful out of it for Drupal development.

Warming up…

So, what’s Yeoman then? It’s a code scaffolding tool. That is, it’s a utility to generate code for web apps. What’s the purpose of that? Well, the purpose is that developers save time by quickly generating the skeleton of the web apps they build, leaving more time for the important things, such as the most complex business logic of the app, integrations, testing, etc… In short: it’s a tool that should help developers deliver more quality in their apps. To get a better picture of what Yeoman can do, I’d point everyone to their site, which has some nice tutorials and very good documentation for writing your own generators.

My plan was to write a few generators for the most common pieces of boilerplate code that I normally have to write in my projects. Unsurprisingly, I found that there are a few yeoman generators for Drupal already out there, so I thought I should review them and see if they’re of any use to me, before writing one that already exists. Yes, that can be a boring task if there are too many generators, but I was lucky that there aren’t *that* many for Drupal, so I just spent a couple of hours testing them and documenting my findings. Hopefully, this blog post will help other Drupal developers to find out in a matter of minutes whether the existing generators are useful for them or not. So, let’s get into it!

1.- Generator-drupalmodule

Github repository here. Creation date: Around 2 years ago.

Structure created:

module:
|- drupalmodule.css
|- drupalmodule.info
|- drupalmodule.js
|- drupalmodule.module
|- package.json

This one scaffolds a basic structure for a simple module. It needs bower and a package.json file to download dependencies, but that's not a problem anyway since you'll probably have drush. Creation is a bit unintuitive: you need to create the module folder first, cd into it, then execute yo drupalmodule.

The generator asks if you want JS and CSS files, but it doesn't even add the functions to attach them to the page. It's a general-purpose generator, and doesn't have anything that isn't in module_builder already.

2.- Generator-drupal-module

Github repository here. Creation date: Around 2 months ago. Latest commit about 2 weeks ago.

Structure created:

module:
|- templates (if hook_theme chosen)
|- drupal_module.info
|- drupal_module.install
|- drupal_module.module

Neater than drupalmodule on the surface, but it doesn't do much more. It asks us if we want hook_theme(), hook_menu(), hook_permission() and hook_block_info / view implementations, which is nice, yet that doesn't make it much of a gain compared to other simple scaffolding tools, like PhpStorm live templates. In contrast to the drupalmodule generator, this one doesn't ask us if we want a CSS or JS file.

3.- Generator-drupalentities

Github repository here. Creation date: 9 months ago. Latest commit about 6 months ago.

Structure created (“publisher” entity):

Views and license files are optional, based on the settings specified in the command-line.

module:
|- views
   |- publisher.views.inc
   |- publisher_handler_delete_link_field.inc
   |- publisher_handler_edit_link_field.inc
   |- publisher_handler_link_field.inc
   |- publisher_handler_publisher_operations_field.inc
|- LICENSE.txt
|- publisher.admin.inc
|- publisher.info
|- publisher.install
|- publisher.module
|- publisher.tpl.php
|- publisher-sample-data.tpl.php
|- publisher_type.admin.inc

Generates a full drupal module for a custom entity, based on the structure proposed by the model module.

One issue I experienced: if I select "add bundles", the Field API screen seems broken (doesn't load). A general "fields" tab does appear, but if you try to add a field, you get some errors and get redirected to a 404. So, bundles are offered in the plugin creation menu, but not really supported! Same for revisions: it's asked at the command-line prompt, but doesn't seem to do much. Not choosing bundle support still lets you add bundles in the admin UI, though, and doesn't seem to break anything.

In spite of the issues I had testing it (I didn't bother much investigating what the issue was), it seems to me a useful generator. The only reason I doubt I'll be using it is that it's based, as mentioned, on the model project for Drupal, which is quite nice, but rather outdated now (4 years old), and doesn't leverage some of the latest Entity API goodies. Also, I've developed some opinions and preferences around how to structure custom entity types, so starting to use the model approach would be, in a sense, a step backwards.

4.- Generator-ctools-layout

Github repository here. Creation date: 5 months ago. Latest commit about 14 days ago.

Structure created:

my_layout:
|- admin_my_layout.css
|- my_layout.css
|- my_layout.inc
|- my_layout.png
|- my-layout.tpl.php

Generates a ctools layout plugin folder structure, with all the files needed to get it to work out of the box. It makes no assumptions about how the content will be displayed, so there's no styling by default (which is perfect), and it allows you to specify as many regions as desired. It's quite likely that I'll start using this in my projects. No cons or negative aspects to mention!

5.- Generator-gadget

Github repository here. Creation date: 1 month ago. Latest commit about 1 month ago.

This one, rather than a code generator for Drupal elements, is a Yeoman generator that serves as a scaffolding tool for another repo from Phase 2. While I didn't get to test it out, the grunt-drupal-tasks repo looked really interesting (check the features here), and I might give that a go, although I'm familiar with Gulp and not with Grunt. Long story short: very interesting project, but it's not meant to scaffold any code for your Drupal modules.

6.- Generator-drupalformat

Github repository here. Creation date: 6 months ago. Latest commit about 3 months ago.

Structure created:

drupalformat:
|- includes
   |- js
      |- drupalformat.settings.js
|- theme
   |- drupalformat.theme.inc
   |- drupalformat.tpl.php
|- drupalformat.api.php
|- drupalformat.info
|- drupalformat.install
|- drupalformat.module
|- drupalformat.variable.inc
|- generator.json
|- LICENSE.txt

This one is very specific, tailored to provide views and field formatters for jQuery plugins, and it's based on the owlcarousel module. It's very useful if what you're looking for is to easily integrate other jQuery plugins with your Drupal site. A very interesting generator, as it's focused on scaffolding the most repetitive parts of a very specific task, instead of trying to be a generic solution that covers many things. You can see another great example leveraging this generator on Echo.co's blog, for the jQuery Oridomi plugin. Not something I'll pick up daily, but I'll definitely keep this generator in mind if I have to integrate new Javascript libraries.

7.- Generator-drupal-component

Github repository here. Creation date: 6 months ago. Latest commit about 3 months ago.

Structure created:

drupal_component:
|- ctools-content_types
   |- drupal_component.inc
|- drupal_component.scss
|- drupal_component.html.twig
|- drupal_component.info
|- drupal_component.js
|- drupal_component.module
|- drupal_component.tpl.php
|- drupal_component.views.inc

I found this one rather peculiar. The boilerplate code it produces is rather basic, yet it offers options such as creating a views style plugin by default, or a ctools content_type plugin. The good thing is that each component can be generated individually, which is rather convenient. The only issue that keeps me from using it is that, again, none of the components offers any particularly advanced options that could benefit from an interactive tool like Yeoman (e.g. asking whether the ctools content type plugin will need one or more settings forms). For my particular case, I can generate all of these easily with PhpStorm live templates or template files.

Is that all, folks?

Ye… no! There are indeed a few more generators built around Drupal projects in the Yeoman registry (click here and search for “Drupal”). Some of them are very interesting, for doing things such as:

However, I decided to leave those out of an in-depth review because, as interesting as they are, they cover several aspects of Drupal development, and people often have very specific preferences about how to structure a theme, for example, or what tools to use to create a headless Drupal.

Since the goal of this article was to give a bird’s-eye view of which generators Drupal developers can use right now without changing anything in the way they work, I preferred to describe mainly the generators built around Drupal modules and more specific components. I hope this blog post has saved you some time. Expect a new one on this topic as soon as I’ve written my first Yeoman plugin.

Apr 24 2015
Apr 24

Launching a website is just the beginning of a process. Websites need nurturing and care over the long run to stay relevant and effective. This is even more true for a service or tool such as LibraryEdge.org. Why would users come back if they can only use the provided service once or can’t see progress over time? And how can you put that love and care into the service if it is not self-funded?

This month, LibraryEdge.org released a number of changes to address just these issues.

Helping Libraries Stay Relevant

Before we dive into the release, here’s a bit on the Edge Initiative.

With the changes created by modern technology, library systems need a way to stay both relevant and funded in the 21st century. A big part of solving that problem is developing public technology offerings. Even in the internet-connected age, many lower-income households don’t have access to the technology needed to apply to jobs, sign up for health insurance, or file taxes, because they don’t have personal computers and internet connections. So where can people go to use the technology necessary for these and other critical tasks? Libraries help bridge the gap with computers and internet access freely available to the public.

It’s important that libraries stay open and are funded so their resources remain widely available. By helping library systems improve their “public access computers/computing,” the Edge Initiative and its partners have made major strides in making sure libraries continue to be a valuable resource to our society.

That’s where LibraryEdge.org comes in. The Edge Coalition and Forum One built LibraryEdge.org in 2013 as a tool for library systems to self-evaluate their public technology services through a comprehensive assessment – plus a series of tools and resources to help library systems improve their services.

New Functionality


The biggest feature update we recently launched was enabling libraries to retake the Assessment. They can see how they have improved and where they still need work compared to the previous year. To create a structure around how libraries can retake the Assessment, we built a new feature called Assessment Windows. This structure allows the state accounts to control when the libraries in their states can take the Assessment. States now have control over when their libraries conduct the Assessment and can track their libraries’ goals and progress on Action Items. This feature allows states to more accurately assess the progress of their libraries and adapt their priorities and programming to align with library needs.

Edge State

Results Comparison

The Edge Toolkit was initially built to allow users to view their results online, along with providing downloadable PDF reports so libraries can easily share their results with their state legislatures and other interested parties. Now that libraries can have results for two assessments, we’ve updated the online results view and the PDFs. Libraries can now see a side-by-side comparison of their most recent results with their previous results.



It’s common knowledge that people retain more of what they see, so we’ve also visualized important pieces of the results data with new graphs. If a library has only taken the assessment once, then the charts will only display its highest and lowest scoring benchmarks. However, if they’ve taken the assessment a second time, they can also see bar graphs for the most improved and most regressed benchmarks.


Improved User Experience


We made a number of enhancements based on feedback from libraries that have been using the tool for the past couple of years, as well as from interviews that we conducted with State Library Administrators. Starting with a series of interviews gave us great insight into how the tool was being used and what improvements were needed.

New Navigation

The added functionality of being able to retake the Assessment increased the level of complexity for the Edge Toolkit. So we redesigned the interface to guide users through this complex workflow. We split out the Toolkit into four sections: introduction/preparation, taking the assessment, reviewing your results, and taking action. This new workflow and navigation ensures a user is guided from one step to the next and is able to complete the assessment.

Notification Messages

Several dates and statuses affect a library system as they work through the assessment, such as how long they have to take it and whether it is open to be retaken. We’ve implemented notifications that inform the user of this information as they are guided through the workflow.


Automated Testing

When we release new features, we need to ensure other components on the site don’t break. Testing a system this complex can take a long time and would get expensive over the lifetime of the site if done manually. Furthermore, testing some sections or statuses involves taking a long assessment multiple times. In order to increase our efficiency and save time in our quality assurance efforts, we developed a suite of automated tests using Selenium.

What’s Next for Edge

The updated LibraryEdge.org now allows libraries to assess their offerings again and again so they can see how they are improving. Additionally, we’ve built a paywall so Edge can be self-supporting and continue to provide this valuable service to libraries after current funding ends. The launch of this updated site will help Edge remain relevant to its users and, therefore, ensure libraries remain relevant to our communities.


Apr 24 2015
Apr 24

Design work is a lot of show-and-tell. It can be challenging to effectively communicate and collaborate on a distributed team. Join hostess Amber Matz, Lullabot Creative Director Jared Ponchot, Lullabot UX Designer Jen Witkowski, and Justin Harrell, Interactive Designer for Drupalize.Me, as they talk about the unique challenges, processes, and tools they use as part of a distributed team.

Apr 24 2015
Apr 24

At the time of this writing, just under 300,000 websites use the Drupal Backup and Migrate module.  It is a great tool for moving databases from production back to staging and development servers, and it is an essential tool for automatic backups of the database and files on the production server.

About a year ago, version 3.0 was released, which integrated the offsite functionality from another module and brought additional functionality, like file and code backups.  This is what I would like to go through today in the steps below.

Why offsite backups?

I hope by now everyone has heard of the Backup 3-2-1 Rule.  If you haven't, it is a good thing to strive for in all things digital.  The rule mentions "in case your house burns down", but in our case, with web servers, there are a lot more risks.  The server could get hacked.  The developer or client could accidentally delete data.  The hosting company could go out of business.  There are probably a lot more reasons that I shudder to think about!

Installing Backup and Migrate

  • Install the Backup and Migrate module
    drush dl -y backup_migrate
  • Activate Backup and Migrate
    drush en -y backup_migrate
  • Set the module's permissions
    • Access Backup and Migrate
    • Perform a backup
    • Access backup files
    • Delete backup files
    • Restore the site
    • Administer Backup and Migrate
  • Configure Backup and Migrate at:
  • Configure Destinations, where the backup files are going to go at:
    • NodeSquirrel.com - By far the easiest and most professional solution.  It is a premium service made by the people who developed the module to be safe, secure, and reliable.  Setup is easy: just enter the "Secret key" you get after signing up.  There are various pricing levels to fit budget and need.  This is what I use for most clients.
    • FTP Directory - Enter the credentials of an FTP server to back up to.  Very straightforward.
    • Email - Email yourself the file.  I've never done this, but it says it's possible =)
    • Amazon S3 Bucket - This requires a few additional steps.
      • An additional dependency is the s3-php5-curl project, which needs to be downloaded to:
      • In your Amazon account, set up an IAM User (Key and Secret)
      • Set up a Bucket, and add Authorized User permission to bucket
      • I like to add a folder in the bucket called "backups" so I can use the bucket for other things, like PSDs, Documentation, etc, and I can also set the folder to "Reduced Redundancy" to save a few pennies.
      • Come back and enter that all into Drupal:
        Add a Destination
      • Click Save
  • Under the Schedules tab, click "Add Schedule".
    • Select Default Database under Backup Source to back up the database; enabled on Drupal Cron every hour; automatically deleted with the "Smart Delete" settings.  That setting keeps hourlies for a day, dailies for a month, and weeklies forever.
      Drupal Database Backups
    • Select the Destination you want the Backup to go.
    • Click Save
  • Confirm it all works by doing a "Quick Backup" to your offsite destination from the main tab of the config.

I keep my code in git on Github, so I don't need to back up my code, but I do need to back up my files, so I repeat the process with Backup Source = Public Files Directory.  Files are a lot bigger than the DB, so I just do a "Simple Delete" on this one, back up daily, and keep 2.

Downsides and alternative methods

I have been told that since it is PHP-based and uses Drupal to run, this can put strain on Drupal/the server, and uses more resources than alternative methods like:

  • Bash Scripts
  • mysqldump
  • drush sql-dump
  • PHPMyAdmin**
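For comparison, the script-based alternatives usually run from cron. A hypothetical crontab using drush sql-dump might look like this (every path, schedule, and retention choice below is an assumption for illustration, not a recommendation from the module):

```
# Nightly gzipped DB dump at 03:00; weekly files archive on Sundays at 04:00.
# Site root and backup directory are placeholders.
0 3 * * * drush --root=/var/www/example sql-dump --gzip --result-file=/var/backups/db-`date +\%Y\%m\%d`.sql
0 4 * * 0 tar -czf /var/backups/files-`date +\%Y\%m\%d`.tar.gz -C /var/www/example sites/default/files
```

Unlike Backup and Migrate's schedules, you would also need to handle offsite copying and old-backup cleanup yourself.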

To Sum up

Backup and Migrate offers Site Builders and Administrators a very powerful tool to protect and restore their content easily through the Drupal UI.

*   You should set up a server based cron/crontab to trigger Drupal cron.
** Be wary of using PHPMyAdmin on production servers.  I have been advised that it is notoriously vulnerable to attack.

Apr 23 2015
Apr 23

If only non-Finns could easily pronounce it, I think "yhteisöllisyys" (roughly, "a sense of community") would be a perfect motto for Drupal. To explain what it means, I dragged Lauri Eskola, Drupal Craftsman from Druid.fi, away from the contribution sprints at DrupalCamp Brighton 2015 long enough for him to fill me in on that, as well as his trip to Drupal Camp Delhi 2015, what he's excited about in Drupal 8, and how doing business in the Drupal world–based on values like sharing and openness–must seem strange and different to outsiders.

Global Drupal: DrupalCamp Delhi

"It was such a huge honor to me to be [at Drupal Camp Delhi]. I am so glad I had the opportunity to take part in that event. The community is way larger than any other community I have ever seen. Drupal Camp Delhi had more than 1000 registrants for the event, which is a huge amount of people. They don't get many international visitors; it's mostly just the local people there. It's weird people don't go there: it's really easy to get there from Europe. It's worth visiting. It's something you should do. It's an experience I will probably remember forever. Indian people are amazing. They are so friendly and easygoing. All the things I thought before I went there were completely wrong."

"There's a lot we can learn from India, especially what they think about community. They are very community-minded people. They appreciate community and Drupal a lot because Drupal creates possibilities for people there. Vijay [Vijaycs85 on Drupal.org] is a good example of a person who had gotten options to get out of his small village in India because of Drupal. Now he lives in London. That kind of good example in India empowers people to take part in community because they believe it can create great future for people there. Not everyone wants to move to London, but they get new opportunities even in India."

"They are actually going to [have] DrupalCon in India and that is one more example of how serious we are now taking India in our community. I hope that it will change what people think about India and that they realize how easy it is to access. For me, it was only six hours flight from Helsinki to Delhi. It was really easy."

Global Drupal: one big family

I asked Lauri why he stuck with Drupal. "Because I found a job immediately," was his first answer :-) ... but then he went on ... "At some point, I was thinking of moving somewhere else," to another technology, "but then I went to my first DrupalCon in Prague and it changed my thinking about Drupal. Drupal is more than just technology. It's like one big family. There, I met my good friend Ruben Teijeiro and others."

Lauri is an active mentor for new Drupalists and got into it ... "Mentoring is one of the most important values we have in our community and it's one of the reasons why we are as successful as we are. New people are the power of the community. We have to ensure that the community keeps going. The fact is, there are people leaving all the time. Their life situations change, they get family, they burn out, whatever. People leave. And we need new people in the community all the time if we want to grow."


"My favorite thing about Drupal is people." I then asked Lauri to describe Drupal in a single word. He hesitated a while and then said "yhteisöllisyys", a Finnish noun defined as "communality", "sense of community", or just "community". If only non-Finns could easily pronounce it, I think "yhteisöllisyys" would be a perfect motto for Drupal. :-)

"One of the things that no one outside of the Drupal community can understand is that in Finland, we build projects together and there are friendships beyond business. That's not just one word, but that's what I could say about the Drupal community ... to describe how things are from my perspective. That's one of the reasons why we are as good as we are: When we do something together we are much more powerful than when we are alone."

Guest dossier

  • Name: Lauri Eskola
  • Drupal.org: lauriii
  • GitHub: lauriii
  • Twitter: @laurii1
  • Website: http://laurieskola.fi/
  • Work affiliation: Drupal Craftsman at Druid.fi
  • Drupal role: Community sprint mentor and Drupal 8 contributor.
  • LinkedIn profile: Lauri Eskola
  • 1st version of Drupal: 6
  • What are you most excited about in Drupal 8? ... "Of course the theme system! Everything is responsive. We have a new way of thinking about the front end."

Interview video

[embedded content]

Apr 23 2015
Apr 23


Drupal allows you to easily change the order of your displayed fields using the Manage Display option for your content types but it does not allow you to change the order of the title field (because this field is rendered directly from the node template). But there may be times that you want to display your custom field(s) before the title field. For example, if you have an image field that you want to float to the left of your title and remaining node content.

Displaying a field before node title in Drupal 7

Luckily, the solution is quite easy to implement.


Create a node template file for the content type you want to change the title order on. Note that if your theme already has a custom node template for the content type you want to change then you can skip to step 2.

If you don't already have a specific node template in your theme for this content type, you can simply copy the node.tpl.php file from Drupal core's modules/node directory into your theme directory and rename it to node--CONTENT_TYPE_NAME.tpl.php.


If your content type's machine name is article then your copy of the node.tpl.php file will be named node--article.tpl.php.

If your content type's machine name is team_bio then the file will be named node--team_bio.tpl.php. Notice that the underscores in the content type name are retained and not converted to dashes.

IMPORTANT: Be sure to clear your site caches so your new template file will be used.


Once you have a custom node template for your content type, all you need to do is make a slight change to it. With Drupal 7 we can harness the power of render and hide within the template files.

This is a stripped down and simplistic version of what your current custom node template might look like:

  <h2><?php print $title; ?></h2>
  <?php print render($content); ?>

Now we can modify our template to manually output our image field before the title like so:

  <?php print render($content['field_name_of_image']); ?>
  <h2><?php print $title; ?></h2>
  <?php print render($content); ?>

Now when a node of this content type is rendered, the image will print before the title. When the remaining content is output using print render($content);, Drupal knows that the image field has already been rendered and leaves it out of the rest of the output.


Render and hide are really quite powerful features to use in your theme's template files. For example, you can use hide($content['comments']); to prevent the comments from being rendered with render($content) and then manually render them later in your template using render($content['comments']).
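Putting those two calls together, a template that moves the comments to a spot of your choosing might look like this sketch (the wrapper markup is illustrative; hide() and render() are the real Drupal 7 theme functions, and the fragment assumes the usual node template variables):

```
<?php
// Illustrative node--CONTENT_TYPE.tpl.php fragment (Drupal 7).
// Suppress comments while the rest of the content is printed...
hide($content['comments']);
?>
<h2><?php print $title; ?></h2>
<?php print render($content); ?>

<div class="comments-wrapper">
  <?php
  // ...then render them manually, exactly where we want them.
  print render($content['comments']);
  ?>
</div>
```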

The most important concepts to grasp when dealing with render and hide are:

  1. render($content['FIELD_NAME']) will manually print a field in your template.
  2. hide($content['FIELD_NAME']) will prevent a field from being rendered when the content is printed using render($content).
  3. You can always manually print a field that has been hidden from the main content rendering.
  4. If you manually print a field using render(), it will not be output again by render($content).
Apr 23 2015
Apr 23

Another month, another swath of work to improve our favorite content management system.

The Usual (Contrib) Suspects

Once again, some of our main achievements during March were on client-sponsored work, most notably:

  • Michelle Cox made headway on the D8 port of Metatag while I was preoccupied with some other things.
  • Michelle made a number of improvements and fixes to the XMLSitemap module’s D8 port via patches in the issue queue, and Paul McKibben was added as a co-maintainer which then gave him the chance to commit a good many fixes.
  • After her earlier work on porting the module to D8, Michelle joined as a co-maintainer of the CrazyEgg module.
  • Jason Want ported the Eloqua module to D8.
  • Paul continued making improvements to the er_viewmode module.

Along with the sponsored work, our self-directed contributions were low for the month at just over 70 hours, definitely lower than in previous months. That said, we did make some definite progress:


As mentioned last month, with the Spring well underway many of our staff have been hard at work helping to plan various events around the country. During March we collaborated on several events:

DrupalCamp New Orleans

Held at the end of March, our own Jeff Diecks and Jason Want helped plan the second such event in New Orleans, a small yet accomplished gathering.

DrupalCamp Florida

One of the longest running regional events, DrupalCamp Florida held its seventh event during April, and several of our busy beavers were toiling away on their presentations during March.

Upcoming Events


The NH Drupal group, which Matt Goodman and I are members of, is skipping its DrupalCamp for 2015 and is instead running a series of code sprints throughout the year, dubbed "NHDevDays". The first such event will be held in Keene, NH on April 26th and will be focused on beginners who have never contributed to Drupal before but are interested in learning how.


In May a whopping fourteen of us will be attending the annual North American DrupalCon in Los Angeles, CA. We have four sessions planned on a variety of topics, and you'll also be able to find us at our booth in the exhibition center. We hope to see many of our readers and friends there!


In July we're going to be descending upon New York City for NYCCamp's fourth annual event. Home to Mediacurrent’s new parent company, Code and Theory, we’re excited to have ten of us hit the Big Apple for one of the Northeast’s largest Drupal events.

Plans for April

As can be expected, with the continued lead up to several events our code contributions will continue being a little slow during April, but we’re still aiming for several stable module releases and continued progress on our several D8 ports.

Have a great month!

Additional Resources

Introducing Mediacurrent's Contrib Committee | Mediacurrent Blog Post
Contrib Committee Status Report, February 2015 | Mediacurrent Blog Post

Apr 23 2015
Apr 23

Posted Thursday, April 23 at 2:14 pm

Cathy Theys (YesCT), BlackMesh Community Liaison and Drupal community organizer extraordinaire, joins Mike Anello and a hybrid Andrew Riley-Ryan Price on this 150th episode of the DrupalEasy podcast. We discussed Drupal code governance, Drupal Dev Days, Drupal 8 progress, DrupalCon Los Angeles, Drupal 8 page caching, and running testbot from the command line. We also brainstormed a new set of 5 questions, learned about Cathy's upcoming travel schedule, listened to Ryan perform the Zelda theme song, and wondered about some listener feedback.

Apr 23 2015
Apr 23

Use parallel processing to save time importing databases

Over the past few months I have been banging my head against a problem at MSNBC: importing the site's extremely large database to my local environment took more than two hours. With a fast internet connection, the database could be downloaded in a matter of minutes, but importing it for testing still took far too long. Ugh!

In this article I'll walk through the troubleshooting process I used to improve things, and the approaches I tried — eventually optimizing the several-hour import to a mere 10-15 minutes.

The starting point

This was the set up when I started working on the problem:

  • The development server has a 5GB database with 650 tables and 27,078,694 rows.
  • Drush 6.3.0 with sql-sync command adjusted to skip data from unneeded tables.
  • My laptop has an Intel Core i7-3630QM CPU @ 2.40GHz and a solid-state drive.

The output of importing the development environment’s database in my local environment showed how serious the problem was:

juampy@juampy-box: $ time drush -y sql-sync @msnbc.dev @self --create-db
You will destroy data in msnbc and replace with data from someserver/msnbcdev.
Do you really want to continue? (y/n): y
Command dispatch completed. [ok]
real 120m53.545s

Wall-clock time was two hours, and it was causing a lot of frustration within the team. Having such a slow process was a big limitation when someone wanted to peer review pull requests. To avoid the delay, people were only updating their local database once a week instead of every couple of days — making regressions much more likely when exporting configuration from the database into code.

Analyzing the database

My first attempt at a solution was reducing the content in the development database. If the size of the database could be reduced to about 1GB, then the import process would be fast enough. Therefore, I inspected which tables were the biggest and started analyzing what I could trim:

mysql> SELECT TABLE_NAME, table_rows, data_length, index_length,
round(((data_length + index_length) / 1024 / 1024),2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "msnbc"
ORDER BY round(((data_length + index_length) / 1024 / 1024),2) DESC
LIMIT 0,15;
| TABLE_NAME                                  | table_rows | data_length | index_length | Size in MB |
| field_revision_body                         |     362040 |  1453326336 |    104611840 |    1485.77 |
| field_revision_field_theplatform_transcript |      45917 |  1241251840 |     49938432 |    1231.38 |
| field_revision_field_homepage_item_text     |    1338743 |   436125696 |    580747264 |     969.77 |
| field_revision_field_homepage_item_headline |    1254389 |   264192000 |    641843200 |     864.06 |
| field_revision_field_homepage_link          |     947927 |   333365248 |    512671744 |     806.84 |
| field_revision_field_homepage_reference     |    1335704 |   177143808 |    629227520 |     769.02 |
| field_revision_field_homepage_item_template |    1243619 |   177176576 |    598425600 |     739.67 |
| field_revision_field_issues                 |     757708 |    90931200 |    295092224 |     368.14 |
| field_revision_field_homepage_item_image    |     584289 |    75218944 |    263585792 |     323.11 |
| field_revision_field_homepage_items         |    1041752 |    72515584 |    239206400 |     297.28 |
| metatag                                     |     241023 |   311394304 |            0 |     296.97 |
| field_revision_field_short_summary          |     467445 |   144556032 |    162529280 |     292.86 |
| field_revision_field_original_topics        |     387105 |    82477056 |    189874176 |     259.73 |
| field_data_body                             |      91027 |   254656512 |     15171584 |     257.33 |
| field_data_field_theplatform_transcript     |      18638 |   241745920 |     11911168 |     241.91 |
15 rows in set (2.52 sec)

What did I learn from that data?

  • field_revision_body was the biggest table in the database. There were many revisions, and many of them had a considerable amount of HTML. I thought about simply trimming this table and field_data_body, but doing it without breaking HTML would be tricky.
  • field_revision_field_theplatform_transcript was the second biggest table. I looked at the source code to see how it was being used and asked the development team if it was needed for development and found out that I could trim this value without damaging the development experience. An easy win!
  • Fields used on the homepage had tons of revisions. One reason was heavy use of the Field Collection module on a multi-value Entity Reference field. Each node save created a cascade of new revisions that were good candidates for removal.

Trimming long tables

All of this was promising, and I set up a standard process for slimming down our development databases. Every night, the production database is copied into the development environment. MSNBC is hosted at Acquia, which offers a set of hooks that fire on environment operations such as copying a database or files, or deploying a release. I wanted to trim down the size of the table field_revision_field_theplatform_transcript, so I added the following statements to the post-db-copy hook for the development environment:

# Truncate long strings for field_theplatform_transcript.
# Note: MySQL string positions are 1-based, so the kept substring starts at 1.
drush6 @$site.$target_env -v sql-query \
  'update field_data_field_theplatform_transcript
   set field_theplatform_transcript_value =
   substring(field_theplatform_transcript_value, 1, 200)'

drush6 @$site.$target_env -v sql-query \
  'update field_revision_field_theplatform_transcript
   set field_theplatform_transcript_value =
   substring(field_theplatform_transcript_value, 1, 200)'

The above queries trim a field in a couple of tables that contains very long strings of plain text. Those steps consistently reduce the database size by 1GB. Here is a sample output when importing the development database into my local environment after this change was made:

juampy@juampy-box: $ time drush -y sql-sync @msnbc.dev @self --create-db
You will destroy data in msnbc and replace with data from someserver/msnbcdev.
Do you really want to continue? (y/n): y
Command dispatch completed. [ok]
real 105m32.877s

That reduced the total import time to an hour and forty five minutes. It was a step forward, but we still had more work to do. The next thing to try was slimming down revision data.

Cutting down revisions

Some nodes in MSNBC’s database had hundreds of revisions. Developers and editors don’t need all of them, but content is gold, so we can’t just wipe them out. However, if we could cut down the number of revisions in development, the database size would go down considerably.

I looked for modules at Drupal.org that could help me to accomplish this task and found Node Revision Delete. It certainly looked promising, but I realized that I had to put a bit of work into it so it could delete a large amount of revisions in one go. I added a Drush command to Node Revision Delete which used Batch API so it could run over a long period of time deleting old revisions. When I tested the command locally to keep just the last 10 revisions of articles, it ran for hours. The problem is that the function node_revision_delete() triggers several expensive hooks, and slows the process down quite a bit.

This made me look at the production database. Did we need that many revisions? I asked the editorial team at MSNBC and got confirmation that we could stop revisioning some content types. This was great news, as it would slow the database's future growth as well. I went one step further and configured Node Revision Delete so it would delete old revisions of content on every cron run. Unfortunately, our testing missed a bug in the Field Collection module: deleting a revision would delete an entire field collection item. This was one of the most stressful bugs I have ever dealt with: it showed up on production and was deleting fresh content every minute. Lesson learned: be careful with any logic that deletes content!

Because of the concerns about lost content, and the fact that Node Revision Delete was still slow to do its work, we uninstalled the module and restored the deleted data. Reducing the number of revisioned content types would slow the database's growth, but we wouldn't try to remove historical revisions for now.

Deleting entire nodes from the development database

Our next idea was deleting old nodes in a development environment, and sharing that slimmed-down database with other developers. I found that I could delete articles and videos published before 2013 while still leaving plenty of content for testing. To test the idea, I wrote a Drush script that picked a list of nids and used Batch API to delete them. Unfortunately, this was still too slow to help much. Each call to node_delete() took around 3 seconds. With hundreds of thousands of nodes, this was not a valid option either.

At this point, I was out of ideas. Throughout this effort, I had been sharing my progress with other developers at Lullabot through Yammer. Some folks like Andrew Berry and Mateu Aguiló suggested I take a look at MySQL parallel, a set of scripts that break up a database into a set of SQL files (one per table) and import them in parallel using the GNU Parallel project. Several Lullabot developers were using it for the NBC.com project, which also had a database in the 5-6 GB range, and it looked promising.

Importing tables via parallel processing

Mateu showed me how they were using this tool at NBC TV, and it gave me an idea: I could write a wrapper for these scripts in Drush, allowing the other members of the development team to use it without as much setup. Coincidentally, that week Dave Reid shared one of his latest creations on one of Lullabot's internal Show & Tell talks: Concurrent Queue. While examining that project's code, I discovered that deep down in its guts, Drush has a mechanism to run concurrent processes in parallel. Eureka! I now had a feasible plan: a new Drush command that would either use GNU Parallel to import MySQL tables, or fall back to Drush’s drush_invoke_concurrent(), to speed up the import process.

The result of that work was SyncDb, a Drupal project containing two commands:

  • drush dumpdb extracts a database into separate SQL files; one per table. Structure tables would be exported into a single file called structure.sql.
  • drush syncdb downloads SQL files and imports them in parallel. It detects if GNU Parallel is available to import these tables using as much CPU as possible. If GNU Parallel is not available, it falls back to drush_invoke_concurrent(), which spins up a sub-process per table import.
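The fan-out idea behind these commands can be sketched with plain shell tools. In this illustration the per-table files are tiny fixtures and the real `mysql` import command is replaced with `wc -c`, so the sketch runs without a database server:

```shell
# Parallel per-table import sketch using xargs -P (GNU Parallel works the
# same way). Fixture: a dump directory with one small .sql file per table.
DUMP_DIR=$(mktemp -d)
for t in node users field_data_body; do
  printf -- '-- dump of table %s\n' "$t" > "$DUMP_DIR/$t.sql"
done

JOBS=4   # in a real run, roughly one import process per CPU core
# A real import would run something like:  mysql msnbc < "$file"
find "$DUMP_DIR" -name '*.sql' -print0 |
  xargs -0 -P "$JOBS" -I {} sh -c 'wc -c < "{}" > "{}.done"'

IMPORTED=$(find "$DUMP_DIR" -name '*.done' | wc -l)
echo "processed $IMPORTED table files in parallel"
```

Because each table lives in its own file, the work parallelizes naturally; `xargs -P` simply keeps up to `$JOBS` import processes running at once, which is why all CPU cores stay busy in the screenshot below.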

Here is a sample output when running drush syncdb:

juampy@juampy-box: $ time drush syncdb @msnbc.dev
You will destroy data in msnbc and replace with data from someserver/msnbcdev.
Do you really want to continue? (y/n): y
Command dispatch completed. [ok]
real 13m10.877s

13 minutes! I could not believe it. I asked a few folks at MSNBC to test it and their times ranged from 12 to 20 minutes. This was a big relief: although trimming content helped, making better use of CPU resources was the big breakthrough. Here is a screenshot of my laptop while running the job. Note that all CPUs are working, and there are 8 concurrent threads working on the import:

CPU load while drush syncdb is running

Next steps and conclusion

There is still a chance to optimize the process even more. In the future, I'll be looking into several potential improvements:

  • My friend Pedro González suggested I look at Drush SQL Sync Pipe, which could speed up downloading the database dump.
  • GNU Parallel has many options to make an even better use of your CPU resources. Andrew Berry told me that we could try using the xargs command, which supports parallel processing as well and is available by default in all *nix systems.
  • Get back to the root of the problem and see if we can reduce production’s or development’s database size.
  • Upgrade to Drush 7 in the server. Drush 7 removed the option --skip-add-locks when dumping a database, which speeds up imports considerably.

Does importing the database take a long time in your project? Then have a look at the approaches I covered in this article. Optimizing these time-consuming development steps can vary widely from project to project, so I am sure there are many more ways to solve the problem. I look forward to your feedback!


Apr 23 2015
Apr 23

You’ve heard about it, read about it, and – if you’re like me – dreamed about it. Well, it’s time to stop dreaming and start doing.

Drupal 8!

If you have experience building sites using Drupal 7, you’ll be pleased to see that from a site building and administration perspective, things are nearly the same.

And if Drupal 8 is your first Drupal experience, you will be pleasantly surprised at how easy it is to build an amazing site.

Installing Drupal

First things first.

You’ll need a basic set of software installed and operational on your laptop, desktop, or server before proceeding with the Drupal 8 installation. Drupal requires that Apache, MySQL, and PHP are installed and working before beginning the installation process. There are several ways to easily install the required software using LAMP (Linux, Apache, MySQL and PHP), WAMP (Windows), or MAMP (Mac) solutions. Grab Google and do a quick search.

Got it?

Good. Now there are five basic steps to install Drupal:

  1. Download the latest version of Drupal 8
  2. Extract the distribution into your Apache document root
  3. Create a database to hold the content from the site
  4. Create the files directory and settings.php
  5. Run the installation process by visiting your website in a browser

For details on the installation process visit http://wdog.it/4/1/docs.
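Step 4 above can be sketched as shell commands. Here DRUPAL_ROOT points at a scratch directory and default.settings.php is a stand-in file so the sketch runs anywhere; in a real install you would run the equivalent commands in your Drupal document root, where default.settings.php ships with Drupal:

```shell
# Sketch of step 4: create the files directory and settings.php.
# DRUPAL_ROOT is a scratch directory here; in a real install it would be
# your Drupal document root.
DRUPAL_ROOT=$(mktemp -d)
mkdir -p "$DRUPAL_ROOT/sites/default"
echo '<?php // stand-in for default.settings.php' \
  > "$DRUPAL_ROOT/sites/default/default.settings.php"

mkdir -p "$DRUPAL_ROOT/sites/default/files"
cp "$DRUPAL_ROOT/sites/default/default.settings.php" \
   "$DRUPAL_ROOT/sites/default/settings.php"
# The installer needs write access to both while it runs.
chmod a+w "$DRUPAL_ROOT/sites/default/files" \
          "$DRUPAL_ROOT/sites/default/settings.php"
```

After installation completes, it is good practice to remove the write permission from settings.php again.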

These are the basic building blocks that will provide the foundation for your Drupal 8 site:

  1. Users
  2. Taxonomy
  3. Content types
  4. Menus

Creating Users

If your site is simple and you’re the only one who will be authoring, editing, and managing content, then the admin account you created during the installation process may be all that you need. In situations where you want to share the content creation and management activities with others, you need to create accounts for those users.

To create a new user, log in – using the admin account you created – via the User login form on your site’s homepage. After logging in, you’ll see the admin menu at the top of the page. Click on the Manage link, followed by the People link in the secondary menu which then appears.

One of the first tasks before creating a new user account is to define the user roles that you wish to have on your site. A user role represents a set of permissions to features and functionality on your site. You may have a single user role (e.g., author) or multiple roles.

The next step is to assign permissions to the role(s). To do so, click on the Permissions tab near the top of the page. On the Permissions page, you’ll see a list of all the permissions that are available to assign to user roles: Enable or remove permissions for each role.

With roles and permissions set, the next step is to create user accounts. Click on the List tab and then click the Add user button on the User list page. Enter the values requested – including assigning the user to a role. If you checked the box to notify the user of their new account, an e-mail is generated and sent to them with instructions on how to access their account on your site.

Creating Taxonomy

One of the most misunderstood and underutilized capabilities in Drupal is taxonomy. Taxonomy, in simple terms, is the ability to categorize content by common terms or phrases. For example, if your site focuses on sports news, you could use taxonomy to identify content by football, basketball, soccer, baseball, tennis, golf, etc. With content tagged by term, you can grab all related content and easily display it.

You’ll find the ability to create vocabularies – and within vocabularies, taxonomy terms – by clicking the Manage link on the admin menu, followed by the Structure link on the secondary menu. Finally, click on the Taxonomy link. Create one or more vocabularies (categories) and assign one or more terms in each vocabulary.

Creating Content Types

Content types in Drupal represent templates that authors use to create and publish content. Think of them as a form that someone fills in with all the pieces of information you want to store and show for things such as events, blog postings, locations, courses and classes, movie reviews, or any other types of information where collecting exactly the same informational elements – in the same format and in the same order – is important.

Drupal 8 provides two basic content types: an article and a basic page. Both are great starting points for lots of different types of content. But sometimes we need more than just a title and body.

The interface for creating content types can be found under the Structure link. Click on the Content types link on the Structure page and you’ll see a list of existing content types as well as a link to create new ones.

Linking Content and Taxonomy

How do you connect taxonomy with content?

No problem.

Add a new field to your content type and for field type select term reference. Click the Save button to continue the process. Follow the steps, ensuring that you pick the right vocabulary and the number of terms that can be selected for a content item.

With the basics of taxonomy and content types, you’re ready to venture off into the land of building an amazing Drupal site. But before you do, there are a few other things I’d like to share that will make your life easier.


Creating Menus

A site without a menu is like a car without a steering wheel. It’s best to provide visitors with the means for getting to where they want to go.

To create and manage the menus on your site, click on the Structure link, and on the Structure page click on the Menus link. The Menu page lists all the menus defined by default when installing Drupal 8. The Main navigation menu is typically set as the default primary navigational menu.

Adding items to a navigational menu is an easy task. From the Menus page, click on the Edit menu link in the Operations column and click the Add link button on the Edit menu page. You will be taken to the Add menu link page. From there, enter a title for the menu item (e.g., Google), and enter the URL for that link (e.g., https://www.google.com).

That’s all you need to do to create a new menu item!

Utilizing Blocks

Another power feature of Drupal is blocks.

A block provides a means to create a “widget” that displays something that can be placed on a page.

The search form and login form are both examples of blocks that come with Drupal 8. There are no hard-and-fast rules as to what a block can display; typically, it is information or images that are not related to content. However, you can display a list of content as a block.

To see the list of blocks that are part of a standard Drupal 8 installation, click on the Structure link. On the Structure page, click on the Block layout link. On the Block layout page, you will see a list of regions in the left column. Each theme defines a set of regions or areas on a page where you can place content, blocks, and menus. In the right column you’ll see a list of all the blocks that are defined by Drupal 8 core and that can be placed in any of the regions on your site. To assign a block to a region, just click on the block name; then, on the block configuration page select the region where you want the block to appear.

There are several ways to create blocks. A core, contributed, or custom module can create a block programmatically, or you can create a block through the Block layout page: Click on the Add custom block button and fill out the form, assign the block to a region, and you are on your way.

Expanding the Functionality of Your Drupal 8 Site

You’ve installed Drupal 8, created menus, content, content types, and taxonomy but you want more. Fortunately for you, there are thousands of very talented developers who have created thousands of modules that expand upon the functionality present in Drupal 8 core.

To find a contributed module, visit https://www.drupal.org/project/project_module; select 8.x from the Core compatibility select list and click search. You’ll see a list of all the modules that are supported on Drupal 8. Click on the title of any of the modules and you’ll see a detailed description of the module – and the installation instructions.

Changing the Look and Feel of Your Drupal 8 Site

The visual representation of your site is controlled through Drupal’s theming system. In Drupal 8 the “engine” that drives the theming capabilities is the Twig templating engine. Twig is new to Drupal 8 and it is awesome. [Editor’s note: flip to page 86 to learn more in Gettin’ Twiggy With It by Morten DK.]

You have several options when it comes to theming your site:

  1. Use a pre-built theme that has the look and feel you want. Remember to filter the list of themes by core compatibility, selecting 8.x, or
  2. Use one of the many starter themes, such as Zen. A starter theme provides the structure and, in many cases, a wealth of pre-built capabilities that make it easy to spin up a new theme with minimal effort, or
  3. Create a new theme from scratch by writing all the HTML, CSS, JavaScript, and Twig-related configuration files. This is a great option for those who want ultimate control over everything theme-related, but it involves the most work of these options.

All three approaches work well; it's up to you which approach you're comfortable with and which achieves your goals. Installing a new theme can be as easy as clicking on the Appearance link in the admin menu, clicking on the Add a new theme button, and following the directions for downloading a new theme to your site. After downloading, enable and set your new theme as the default theme, and there you have it: a whole new look and feel.
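The same steps can be done from the command line with Drush (a sketch, assuming Drush is installed; "zen" is used as an example theme name):

```shell
# Download and enable a theme, then make it the site default.
drush dl zen
drush en -y zen
drush config-set system.theme default zen
```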

Managing Your Site

With a few clicks and a bit of configuration, you’ve created your new Drupal 8 site. Now you’re ready to “open the doors” and show the world what a terrific site you’ve built.

Of course, there are maintenance tasks you’ll need to perform to keep your site in tip-top shape. As your cheerful flight-attendant says: “In the unlikely event of a water landing...” That is, you need to be prepared for those unexpected bumps in the road by proactively administering your new site.

There are a number of common tasks you should put on your calendar, including:

  • Backing up your site. One of the easiest tools for this is the Backup and Migrate module.
  • Reviewing the log files. While logged in as an administrator, click on the Manage link in the admin menu followed by the Reports link in the secondary menu. On the Reports menu, click on the Recent log messages link to see a list of errors and warnings that have been logged by your site.
  • Checking for updates. On the Reports page, click on the Available updates link. If there are security fixes or updates to modules on Drupal.org they will be listed here.
  • Checking the status report. On the Reports page, click on the Status link. This is a real-time peek into the current state of your site. If there are any configuration errors, they'll show up on this page, as well as all the things that are correctly configured.
  • Creating and disabling user accounts. Depending on whether you allow others to author and administer content, you may need to create new user accounts and disable accounts of people who are no longer associated with your site.
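Several of these maintenance tasks also have Drush equivalents that you can run by hand or on a schedule (a sketch, assuming Drush is installed; "olduser" is a hypothetical account name):

```shell
# Review recent log messages.
drush watchdog-show
# Check for available updates, including security releases.
drush pm-updatestatus
# Block the account of someone no longer associated with the site.
drush user-block olduser
```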

There are other maintenance tasks that you may wish to perform. For more information about maintaining your Drupal site, visit http://wdog.it/4/1/docs.

Next Steps

In a few basic steps you’ve taken a leap towards creating an awesome Drupal 8 site. Now you are only limited by time, your imagination, and your desire to learn. Drupal 8 is a fantastic platform for developing a wide range of sites, from simple to extremely complex.

Congratulations on taking the first few steps towards creating something amazing.

Remember to give back to the Drupal community by sharing your discoveries and creations!

Image: ©iStockphoto.com/lofilolo

Apr 23 2015
Apr 23

The Drupal Simplify module is a big help in removing cruft from the administrator's view of the Drupal UI.  Simplify allows you to hide certain fields from the user interface on a global basis, or configured for each node type, taxonomy, block, comment, or user.

What sent me looking for a module like this was the "Text Format" selection beneath every single WYSIWYG on the site.  While I think Drupal is incredible for allowing multiple input formats, 99 out of 100 times, I define which ONE input format a user can use per role.  So having this as an option beneath every rich text editor on the site just became wasted space that I wanted to remove.  And so I did!

But wait, there's more!  Simplify lets you hide so much more than that!  The following items can be hidden:

  • Administrative overlay (Users)
  • Authoring information
  • Book outline
  • Comment settings
  • Contact settings (Users)
  • Menu settings
  • Publishing options
  • Relations (Taxonomy)
  • Revision information
  • Text format selection
  • URL alias (Taxonomy)
  • URL path settings
  • Meta tags
  • URL redirects
  • XML sitemap

To install and configure Simplify:

  • Install Simplify:
    drush dl -y simplify
  • Activate Simplify:
    drush en -y simplify
  • Set permissions for the module at /admin/people/permissions
  • Configure global Simplify settings at /admin/config/user-interface/simplify
  • Visit each Content Type that you want to customize.
  • Repeat for each Node type and Taxonomy Vocabulary you would like to customize.

That's all there is to it.  The hard part is going to be defining what your users need to see on each add/edit screen, and what they can live without.  I often err on the side of hiding as much as I can, and adding things back in as they get requested.  The easier we can make the administrator's job, the better!

Apr 23 2015
Apr 23

How to Select Modules

  • So, let’s start out by talking about the genesis of your session. What made you think this topic needed to be covered?
  • What’s the problem with just installing another module?
  • What if I don’t program?
    • Reuse modules
    • Push back on requirements
    • Make sure the cost of adding another module is not just the cost of the time it takes to install it
    • Simple modules really aren’t programming
  • What are the potential problems with custom code?
  • How do you determine if you should install a module or write some custom code?

Specific Modules You Can (or Should) Avoid

This isn't really about telling people to avoid specific modules; it's more about thinking twice when they select them.

  • Entityform vs. Webform
  • Entity View Modes
  • Page title
  • Commerce Custom Order Status

Apr 23 2015
Apr 23

We are growing, we are hiring... well, not in the sense of spending money, but the #d8rules initiative is proud to already have 26 contributors who got their pull requests merged on GitHub. But how did that happen, and what exactly has happened since our last update at the end of last year? Drupal Dev Days Montpellier is definitely the most exciting part of this list, so we'll save it for the end :)

#d8rules Presentation

We travelled all around South America and Europe to give people an update on the initiative, including at DrupalCon Latin America Bogotá, DrupalCamp London, the European Drupal Days Milano, and Drupal Dev Days Montpellier.

Workshops & Sprints

#d8rules sprinters at Global Sprint Weekend Zurich 2015

At Global Sprint Weekend Zurich we sprinted with some first-time #d8rules contributors, mainly on porting actions. For example, vasi worked on porting some actions, while dermario and I started working on a Rules component admin UI.

#d8rules sprinters at European Drupal Days Milano 2015

At European Drupal Days Milano, we did a full workshop to teach new contributors the underlying concepts of Drupal 8: dependency injection, plugins, typed data, unit tests etc. With the following sprints, we were able to make good progress with porting actions. Special thanks to bojanz for writing the first derivative plugin: The Entity create action plugin will generate derivatives based on the entity types available, for example "Create a new user", "Create a new node".

Drupal Dev Days Montpellier

We counted at least 20 people involved in the #d8rules sprints last week in Montpellier, France, which is, well, awesome and a bit overwhelming at the same time :) Thanks to the great excitement of so many new and recurring contributors, we were able to make major progress, not only finishing most of the action ports but also starting work on other areas of the Rules module and its integrations. You can find a more detailed summary of what everyone has been working on in our meeting minutes. Let me name a few examples:

  • mariancalinro wrote the first automated tests for derivative plugins, picking up bojanz' work, investigated extracting the Views fields & filters selection widget, and started mentoring other sprinters on that topic.
  • a.milkovsky ported various actions, started working on the Rules settings UI, and also mentored other contributors with the gained knowledge.
  • czigor ported various actions and started porting the first Flag action. I won't repeat who mentored whom, because one of the most exciting parts of these sprints was seeing everybody mentor each other on what they know.
  • katia and pjezek also dove deep into porting the flag and unflag actions.
  • Steve Purkiss, amongst porting actions, started extending the Rules documentation.
  • nielsdefeyter picked up the work on the Rules component UI and helped us get it, together with a Rules UI skeleton, committed during dev days.
  • lewisnyman helped us define personas, user stories & user journeys in order to validate the usability of the Rules 7.x UI and target improvements for the Rules UI in 8.x.
  • m1r1k reworked the logging service and even started integration with webprofiler.
  • xano joined us for the discussion on implementing a generic plugin selector widget and even created Plugin Selector as a spin-off from what he implemented for payment.
  • fubhy started implementing a lexer/parser for mathematical expressions.
  • nlisgo, claudine, branislav, mikl & martin also joined us for porting actions & cleaning up inconsistencies in the code base.
  • klausi, fago & fubhy provided guidance and reviews & were able to merge many pull requests besides working on improvements to the Rules engine.

#d8rules sprinters at Drupal Developer Days Montpellier 2015

Thank you so much everyone for participating and helping out! I hope I covered most of the things. Personally, I was really glad to be able to focus on motivating people. Somehow it felt like we all got into a good flow with a self-organising team that started mentoring & reviewing each others work.

It isn't over yet

As we gained so much good momentum during the recent sprints, we would like to invite everyone interested to join our weekly calls on Google Hangouts, every Thursday at 4:30pm CEST; we announce them on IRC in #drupal-rules. Also check out the meeting notes.

Apr 22 2015
Apr 22

I'm running endlessly through the woods, as far away from Silicon Forest/Valley as I can. Facebook has collapsed, so has Google, so has everything. A post-social media, free internet, apocalyptic world has ensued and all of the developers have gone into the woods to escape the aftermath clutching their now worthless laptops. In my dream all the CEOs of the software companies are in a room patting themselves on the back for giving it a good go, while the world falls apart in the aftermath of their self-focused existence.

This may have just been a crazy dream I had after watching White God, but it touches on something real. Something that, as a consultant for multiple companies, I see all the time.

And it's killing me. It's killing your company, and you're too busy getting beers with your funders to notice. Meanwhile your developers, project managers, and everyone else at the core of your business are slowly burning out until they rage-quit and go somewhere else to start it all over again.

And why? Because you as a business owner missed a critical transition in the business, as did your VC funders and just about everyone else. Silicon Valley is no longer about hardware; it's mostly about services, and in services the human element is the foundation of your business, not just a commodity to get the chips out the door.

It matters how you treat people: your employees' overall health and happiness directly affects your bottom line. It matters in blue-collar work too, but it takes longer to see the issues in something that is just pressing out the same computer chip every day.

As a services-based software company, you are not in the business of providing a physical product. Firing (or not hiring) people who use the words "work/life balance" is a fatal mistake. That person is probably more focused when they work than the person who knows how to brown-nose you and make you feel good about your quarterly goals.

Working your workers to the bone to churn out reproducible pieces of product is BAD BUSINESS for you when the workers are producing code or design, and the customers are buying a service.

Code, for all of its reproducibility, is a creative endeavor. I will not write code exactly as Michael does, even when we use the same approaches. Coding standards matter, and exist, but outside of those there is still a lot of room for interpretation. Software changes so rapidly you can't possibly make a handbook for everything. So what are you to do?

Do what every creative industry has had to at some point: build in breaks, build in autonomy, or you WILL BE BYPASSED in the future. Because some companies are catching on, and it's only a matter of time before they start to bypass commodity-focused software companies.

Did you know that most of Hollywood literally shuts down in December? The industry that works 50-hour weeks minimum, and for the most part, in my opinion (as a former employee), sucks at understanding its human element, SHUTS DOWN IN DECEMBER and essentially parties the whole month.

This means that even some of the most overworked creatives in the world stop working to take a breather. At agencies especially, but in tech in general, we don't have that. And before you protest that you have enough benefits: a month of vacation is a pittance for someone who has to be constantly creating a minimum of 6-7 hours a day. I've worked 14-hour days as a server and was less tired than after solving problems with web apps in half the time. Plus whatever meetings, time-tracking, and whatever else you as a business owner need for accounting is often in addition to your XX billable.

As a CEO, know that your employees are my friends; it's hard watching you waste their talent, and everyone's time on earth, just running through the motions your VCs lay out. It's even harder watching the kind of people who thrive in an environment based on numbers, not innovation, suck the life out of your business and of the employees who actually do the work well.

Those who are best at surviving company politics are also often the worst at building good software. They are too distracted by power-playing everything around them to do the heads-down work (yes, project management and UI design take heads-down work) necessary to produce quality software/websites. They also tend to poison the well for everyone else in an effort to cover up their lack of skills. In part 2 I will talk about politics prevention, and how to deal with it within your own company without firing everyone.

As someone who has run a small company, I know how simple adjustments make a huge difference in what your company is able to produce. Things like just raising rates a little, or taking a few hours off the availability calendar. Cutting out a couple of meetings that we thought we had to have the developer at. All of these things add up, and because a two person company is easy to adjust I can see the results almost instantly. It may take more time for larger companies, but Treehouse is proof that 4-8hr days are possible, and can have good results.

What doesn't work: rewarding those that are present longer but not more productive, billing hourly, paying for lunches so no one can leave without standing out, and then patting ourselves on the back for taking care of everyone AND our business. As if the two are constantly opposed to each other, when it's actually the opposite.

If your coders, project managers, and everyone else are able to take care of their day to day lives for themselves, do their side-projects, and feel free to not be in every single fucking meeting, a change will happen in your company. Better solutions for difficult problems will come easier, your clients will be happier, you'll be able to charge more money, or not charge hourly at all (more to come on why hourly billing is death for agencies).

In short remember this. Mentally and emotionally healthy humans = healthy businesses.

This is a difficult concept to get in the US, we are so used to assuming that we must trample on people to get ahead, but if you can make this paradigm shift it will revolutionize your life and your business and make the world a much better place for everyone you touch. We are stronger when we value each other equally instead of building a hierarchy that rewards mediocrity because we still imagine ourselves to be a factory not the creative entity we actually are.

Part 2 of this article is going to be action focused. I'll show you ways to get the most of out your humans and your business in a way that benefits everyone.

In the meantime, if you or your devs are using Drupal to sell/build/market anything, check out Mike's book on how to model data so that your Drupal site can grow with your client's data and everyone stays happy. Front-end devs: if you want an easy, repeatable way to set up your Foundation design framework in Drupal, check out my quickstart guide; it's short, to the point, and gets you designing in a snap. Because spending less time on the boring stages is always a good thing.

Photo credit: Make Your Own Path by @pjrvs

Apr 22 2015
Apr 22
Recently, a community member drew our attention to an article on alcohol and inclusivity at tech events. In it, author Kara Sowles talks about the strong focus on social drinking and points out that people who choose not to drink often feel uncomfortable or experience unpleasant peer pressure — and that got us thinking.

DrupalCon is all about fun, inclusivity, and self-expression, and if that means not drinking, we want to make that not only easy, but cool to boot.  At DrupalCons, people from various backgrounds come together and our social events are intended to create a welcoming atmosphere for everyone to meet old friends and make new connections. No one should feel pressured or even “gently nudged” to consume alcohol at our functions.

We do our best to meet the needs of our community, and while the scope of DrupalCon makes it impossible to accommodate every single request that comes through the door, we feel very strongly that offering more drink choices is a no-brainer. That’s why, at the Opening Reception on Monday night, we will have a number of non-alcoholic options that are equal in value to alcoholic drinks. Attendees can choose from a variety of delicious craft sodas, or try our tangy alcohol-free cocktail “Don’t call me Shirley.”

We hope that offering a broader spectrum of nonalcoholic drink choices will help all attendees enjoy the social events at DrupalCon, and will empower them to make the choices that feel right for them. After all, it’s not the drink in your hand, but the people you are with that will make the evening fun and memorable!

Apr 22 2015
Apr 22

Nedjo Rogers is a Senior Performance Engineer with Tag1 based out of Victoria, Canada. He’s been an active Drupal contributor since 2003, has served as an advisory board member of the Drupal Association, and has led Drupal development projects for clients including Sony Music, the Smithsonian Institution, the Linux Foundation, and a number of nonprofit organizations. He’s also the co-founder of Chocolate Lily, where he builds web tools for nonprofits, including the Drupal distribution Open Outreach.

Nedjo and I chatted shortly after he returned from a bike trip, touching on a range of topics including his entry into Drupal, working at Tag1, his nonprofit distribution, Drupal 8, corporate influence, CMI, Backdrop, and mining activism.

Dylan: How did you get started in web development and Drupal?

Nedjo: In 1996 I was working at an environmental group focused on the impacts of mining. My first work was mapping--providing communities with information about proposed mining projects in their area and the impacts of existing mines. We also had an international project, working in solidarity with organizations and communities impacted by the activities of Canadian mining companies in Peru and other Latin American countries.

In 1997 I started building an organizational website using Microsoft Front Page. Then, working with an international network of mining activist organizations, I produced a complex relational database application to track the worldwide activities of mining companies. Our ambitious aim was to enable communities to pool their information, supporting each other. I built the database using the software I had installed on my work computer: Microsoft Access.

In short: I was trying to build liberatory, activist internet resources, but using proprietary tools. I didn't know better. I'd never heard of free software.

By 2003, when I was working at a small technical support NGO, I'd had my introduction to the free software community, thanks initially to Paul Ramsay of PostGIS renown. At the time, Drupal was a small and relatively little known project. The available contributed modules numbered in the low dozens. Compared to other options I evaluated, the core software itself did very little. But what it did do, it did well and cleanly. I set about writing and contributing modules and dabbling in Drupal core.

Dylan: Your Drupal bio says you have a masters degree in geography. When you started working in technology was it an explicit part of your job responsibilities or was it more of something you fell into because someone needed to do it?

Nedjo: Decidedly the latter. In my MA, I took exactly one course in anything related to technology: geographic information systems (GIS). I was teased initially when I started hacking code. "Nerdjo" was my nickname of that time.

I've always had a love-hate relationship with the idea of working in IT. In many ways, I love the work. At its best, it's a constant source of intellectual challenge. But technology per se I don't care about at all. Outside of the Drupal community, few of my friends know in any detailed sense what I do.

Dylan: Yet based on what I’ve read about you, I'd venture a guess that your education, your intellectual interests, your views have had a big influence on the work you’ve done in the Drupal community. Is that true?

Nedjo: Very much so. Software, code, documentation--my sense is that all of it begins with the basic questions we ask ourselves.

Like many others, I came to Drupal with a purpose: to build tools that empower local, change-making organizations and movements. To put emerging communication technologies into the hands of activists and community builders. To do so, we need to work with those communities to shape technology to the needs and abilities of those who will use it.

I believe of Drupal that it retains, at least latently, a lot of the original principles of its genesis as a student project: a core focus on communities and collaboration. So, for me, working to bring a grounded and critical perspective to topics like structure and architecture has been a core part of my service to the project.

Working at Tag1

Dylan: Can you tell me about how you started working with Tag1 and what your experience working here has been like?

Nedjo: Tag1 partner Jeremy Andrews and I were longtime colleagues at CivicSpace, and I worked closely with Tag1 manager Peta Hoyes when we were both at CivicActions, so when Peta reached out to me with the opportunity to work with Tag1 I already knew it was going to be a great place to work.

My first assignment was on archetypes.com, a complex custom build that included some intriguing challenges. The project came to Tag1 through a route that’s now become familiar. We were brought on to work on performance issues on the site, which was built originally by a different team. We recommended architectural improvements that would address performance bottlenecks and then were engaged to guide that process, working with the in house Archetypes dev team.

It was my first chance to work with Tag1 engineer Fabian Franz, and I was immediately struck by his creative genius. I also got to collaborate again with Károly Négyesi (ChX), who years earlier was an important mentor for me, encouraging me to participate in the planning sprint for fields in Drupal core. Károly was also on hand to witness my near arrest by a half dozen of Chicago’s finest on Larry Garfield’s mother’s back porch while attending a Star Trek meetup, but that’s a longer story ;)

Recent highlights at Tag1 include collaborating with staff at the Linux Foundation to upgrade and relaunch their identity management system and working with Mark Carver and others to complete a redesign for Tag1’s own Drupal Watchdog site.

Tag1 is a good fit with my interests and commitments and I love the chance to work as part of a creative team where I’m always learning.

Nonprofit Distribution

Dylan: You’ve developed a Drupal distribution for nonprofits, Open Outreach and a set of features called Debut. Tell me about those projects.

Nedjo: In 2010 or so, while working at CivicActions, I completed six months of intensive work as tech lead of a project for the Smithsonian, a site on human origins for the US National Museum of Natural History.

We'd repeatedly worked 12 to 16 hours days to complete a grueling series of sprints. The site was a resounding success, but for me it sparked a crisis of faith. If we were here for free software, what were we doing pouring our efforts into what was essentially closed, proprietary development?

More troubling for me personally was the realization that I'd steadily lost touch with the reasons that brought me to Drupal in the first place: the small, activist groups I care about and am part of. If I wasn't going to focus solidly on building technology specifically for them, exactly who was?

I left CivicActions and soon afterwards, my partner, Rosemary Mann, also left her job of ten years as the executive director of a small group working with young, often marginalized parents. With our two kids, we pulled up roots and spent five months in Nicaragua. While there, I volunteered with an activist popular health organization.

When we returned, both at a crossroads, we decided to start up a Drupal distribution for the activist and community groups that we cared about. Groups that, we felt, were increasingly being left behind as Drupal shops pursued high-paying client contracts.

When we started into the work, via a small partnership, Chocolate Lily, several people said they didn’t understand our business model. By focusing on small, low resourced organizations, we'd never generate the lucrative contracts that Drupal shops depend on.

And we said, yeah, that's kind of the point.

Our first Open Outreach site was for the organization that Rosemary worked for for ten years, the Young Parents Support Network.

When building Open Outreach, we put a lot of energy into interoperability. At that time, the Kit specification, aimed at enabling the features of different distributions to work together, was a recent initiative. We very intentionally divided our work into two parts. Everything that was general beyond our specific use case (small NGOs and activist groups) we put into a generic set of features, Debut. The idea was that these features could be used independently of our distribution.

We also put energy into various other efforts that seemed to be oriented towards interoperability. This included the Apps module, which, at least in theory, included interoperability in its mandate. Why put energy in this direction?

Features itself - focused on capturing configuration in a form that can be reused on multiple sites - can be seen as extending the "free software" ethos from generic platform tools (modules) to applied use cases. But so long as features were limited to a particular distribution, they were in essence crippled. They were proprietary, in the sense that they were tied to a given solution set. Only by explicitly tackling the problem of interoperability could we truly bring the powerful tools being developed in Drupal to organizations other than those that could spend tens or hundreds of thousands of dollars on custom development.

The original authors of Features at Development Seed recognized this problem and took steps to address it. But when they largely left Drupal development, their insights and efforts languished.

I won’t make any great claims for what we achieved with Debut. To say that it was not broadly adopted is definitely an understatement ;) But in my more optimistic moments I like to think of it as a step in a broader process that will take time to bear fruit.

As Victor Kane recently put it, "We truly need open source propagation of reusable sets of Drupal configuration, and to reclaim the power of generically abstracting solutions from single site projects with the aim of re-use, with the creation of install profiles and distributions."

Drupal 8

Dylan: You are getting at some issues that I want to circle back to shortly. I’d like to shift to Drupal 8. What do you find most exciting or interesting about Drupal 8?

Nedjo: I've written or drafted several contributed modules now with Drupal 8. Coming from previous Drupal versions, I found it challenging at first to wrap my head around the new approaches. But as I got comfortable with them, I gained a lot of appreciation for the basic architecture and the flexibility and power that it enables.

Only a couple of months in, I already feel like I'm writing my cleanest Drupal code ever, thanks to the solid framework. While the previous procedural code left a lot up to the individual developer in terms of how best to structure a project, Drupal 8's well developed design patterns provide a lot of guidance.

One key feature of Drupal 8's architecture is "dependency injection", which roughly means that the elements (classes) that a particular solution needs are passed in dynamically as customizable “services” rather than being hard-coded. This design choice means a considerable leap in abstraction. At the same time, it opens the way for very specific and efficient customizations and overrides.

A concrete example, returning to the question of interoperability. A key barrier to features being interoperable is conflicting dependencies. This issue arises from the fact that some types of components need to be supplied once and used in many different features. Examples are user roles and "field bases" in Drupal 7. (In Drupal 8, what was a field base is now a "field storage".) In practice, most distributions put these shared components into a common feature that becomes a dependency for most or all other features in the distribution. For example, two different distributions might both include a core feature that provides a “body” field and an “administrator” role and that is a dependency of other features in the respective distributions.

This means you can’t easily mix features from one distribution with those of another, because their dependencies conflict.

In Drupal 7, I wrote the Apps Compatible module to try to address this problem. Apps Compatible makes it possible for multiple features and distros to share a certain limited set of shared configuration without conflicting dependencies. However, due to architectural shortcomings in Drupal 7, it does so via a bunch of awkward workarounds, and only for a very limited set of types of components (entities).

In Drupal 8, I've drafted an equivalent, Configuration Share. Using the dependency injection system, I was able very lightly to override a single method of the 'config.installer' service. In doing so, I achieved the same aim I’d attempted in Apps Compatible--except that this time it was done cleanly, in a few short lines of code, and for all configuration types at once.


I also wrote the initial architecture of the Drupal 8 Features module, which - thanks to some fresh approaches building on Drupal 8 architectural improvements - promises to be a vast improvement over previous versions. Mike Potter at Phase2 is taking my start and adding on a lot of great ideas and functionality, while respecting and extending the basic architecture.

So, yeah, so far I’ve loved working in Drupal 8. I honour the creative energy that's gone into it. At the same time, I have a critical perspective on some directions taken in the version and the project as a whole. To me, those perspectives are not in any way contradictory. In fact, they stem from the same set of values and analysis.

Corporate Influence

Dylan: That’s a good segue back to an earlier point you made that is worth revisiting. A not-insignificant portion of early adoption and funding of Drupal work was driven by small, scrappy nonprofits, working with dev shops with social change in their missions. Many of those early adopting orgs were attracted to Drupal not only for the economic accessibility of an open source CMS but for the philosophical values of community and collaboration and mutual empowerment through shared interest. You’ve been vocal about your concerns regarding enterprise influence. Assuming that Drupal does not want to lose these users, what can or should be done?

Nedjo: You're correct to highlight the important and ongoing contributions of contributors and groups motivated by the ideals and principles of free software and more broadly of progressive social activism. We can think here of CivicSpace, a social change project that was a key incubator for a lot of what shaped Drupal core.

Back in 2009, working at CivicActions, I had the rare opportunity of doing six months of solid work on Drupal core and contrib for a corporate sponsor, Sony Music.

As technical lead of a fantastic team including Nathaniel Catchpole and Stella Powers, and in close collaboration with great folks at Sony Music and Lullabot, I got to dig into practically every area of Drupal code related to language support and internationalization. At the end of the project, on my own time, I wrote a detailed case study crediting Sony Music with their amazing contribution.

So, when it comes to corporate sponsorship, you could say I'm an early adopter and promoter.

Just not an uncritical one.

In a thoughtful 2008 post, Acquia co-founder Jay Batson wondered whether the balance between volunteer and commercially sponsored development “will change for Drupal as Acquia appears.” His starting point was a blog post on “The effects of commercialization on open source communities”.

Jay bet that sponsored development would not become “the most”, while, commenting on Jay’s post, Drupal founder and Acquia co-founder Dries Buytaert was “100% confident that the Drupal community will continue to do the bulk of the work.”

Dries noted though that “What is important to company A, is not necessarily important to company B. … At Acquia, we have our own itch to scratch, and we'll contribute a lot of code to scratch it properly ;-)”

I recently read a relevant article from 2011, “Control in Open Source Software Development”, in an industry journal. The authors outlined a series of strategies that companies could follow to ensure they achieved significant control over open source projects and thus "help companies achieve their for-profit objectives using open source software."

Looking at the strategies examined, we can see many of them at work in the Drupal project, including sponsoring the work of lead developers.

The question is not whether the “itch scratching” Dries described affects the functionality and focus of the project. Of course it does. That’s the whole point.

It’s how.

Besides sponsorship, there are also indirect but potentially equally powerful influences. Through our work, we're exposed to certain types of problems. If, as our skill level and reputation rise, we end up working on larger budget projects for large corporations, governments, and enterprise-level NGOs, we develop expertise and solutions that address those clients' priorities. This focus can carry over in both explicit and more subtle ways to our work in other areas, even if we're doing that work as volunteers.

I see this influence on a daily basis in my own work. I couldn’t count the number of customizations I’ve done for paid client projects and then carried forward as a volunteer, putting in the unpaid time to turn them into a contributed patch or module.

So, seven years on, it’s worth returning to the questions Jay raised. Have we as contributors in effect been relegated to a role of outsourced and unpaid bugfixers, or do we retain a central role in shaping our collective project? What is our project? What do we want to achieve with it--ultimately, who is it for?


Dylan: You’ve raised some concerns about the direction that the Configuration Management Initiative (CMI) in D8 is going...

Nedjo: Yes, I think we can see a lot of these questions - of influence, target users, and enterprise interests - at play in Drupal 8 configuration management approaches.

When he introduced Drupal 8 initiatives in his 2011 keynote address at DrupalCon Chicago, Dries explained how he’d consulted twenty of the largest companies and organizations using Drupal about their pain points, and configuration management was one of two things that stood out. So from the start, the focus of the CMI was, as Dries put it, to fix “the pains of large organizations adopting Drupal”.

The CMI began with a particular limited use case: the staging or deployment problem. Initiative lead Greg Dunlap was explicit about this focus. Greg hoped that further use cases - like those of distributions - could and would be added as we moved forward.

Recently, I began tackling in a serious way the possibilities and limitations of Drupal 8 configuration management when it comes to distributions. My starting place is a very practical one: I co-maintain a distribution, Open Outreach, that's aimed at small, low-resourced organizations. I need to know: will Drupal 8 be a sound basis for a new version of our distribution?

As I dug into the code, two things happened. First, as I mentioned before, I was impressed with the architecture of Drupal 8. Second, I was concerned about implications for small sites generally and for distributions in particular.

Briefly (and please bear with me, this gets a bit technical): In Drupal 7, the system of record for exportable configuration like views or image styles provided by a module is, by default, the module itself. If a site builder has explicitly overridden an item, the item’s system of record becomes the site (in practice, the site database).

An item that has not been overridden is on an update path and automatically receives improvements as they are added to the original module. An overridden item is in effect forked--it no longer receives updates and is maintained on a custom basis. This model prioritizes the free software sharing of updatable configuration items among multiple sites, while enabling customization via explicit overriding (forking).

In Drupal 8 as currently built, the system of record for all configuration is the site. There is no support for receiving configuration updates from modules. In effect, the free software use case of sharing configuration among many sites has, for the moment, lost out to the use case that's built into Drupal core: staging configuration between different versions of the same site.

While staging can, in theory, benefit anyone, in practice, there are pretty steep technical hurdles. To fully use the staging system as currently built into Drupal 8 core, you need to have access to a server that supports multiple databases and multiple install directories; know how to set up and manage multiple versions of a single site; move configuration and modules and files between different versions of the site; and make sense of a UI that presents machine names of configuration items and, even more dauntingly, line-by-line diffs of raw data representations of arbitrary types of configuration.

In other words, the configuration management system - which includes a lot of great ideas and improvements - serves a staging use case that has a lot of built-in assumptions about the technical capacity of the target users, and so far lacks support for the more broadly accessible use case of receiving updates to configuration supplied by extensions (modules and themes).

It looks to me like a case where focusing too closely on a particular use case (staging) has raised some unanticipated issues for other use cases, including distributions.

None of this is in any way an irreversible problem. The use case problem is one that Greg Dunlap pretty much foresaw when he commented on his initial limited focus. He noted it was only a starting place, with lots of followup to do. The contributed Configuration Update Manager module provides a good start. I've sketched in potential next steps in Configuration Synchronizer.

But my view is all of this belongs in core. Only there can we ensure that the free software use case of sharing configuration among multiple sites is on at least an equal footing with that of staging config on a single site.


Dylan: Is Backdrop part of the answer?

Nedjo: It's a welcome addition and a healthy sign. For Open Outreach, we're evaluating Backdrop alongside Drupal. We're even considering building versions for both.

Backdrop inherits some of the same issues as Drupal 8 when it comes to configuration management. But there are encouraging signs that the lead Backdrop developers are both aware of these issues and open to fixing them in core.

Mining Activism

Dylan: You mentioned that mining activism was your entry point into web technologies. Are you still involved in activist work?

Nedjo: I was out of the loop for several years in terms of mining activism, but recently I've gotten involved again. Three years ago I co-founded a small organization in my hometown of Victoria, the Mining Justice Action Committee (MJAC). Last month I coordinated a visit by a Guatemalan indigenous leader, Angelica Choc, whose husband was killed in connection with his resistance to a Canadian mining company's operations. Angelica is part of a groundbreaking legal case to hold the company HudBay Minerals accountable for impacts related to its presence in Guatemala. She was a focus of the film Defensora.

At the event, I provided interpretation from Spanish to English as Angelica shared her story with the more than 100 people who filled a church hall. As I spoke her words, I was in tears. From the pain that she and her family and community have suffered, but just as much from her courage, vision, and resilience.

My partner, Rosemary, is also an active member of MJAC, and last year she built its organizational website using our distribution, Open Outreach. In small ways, circles are completed.

Dylan: Last question. How was your bike ride?

Nedjo: My younger child is eighteen now and plans to leave home this summer, so these last months with them at home have a special intensity. Ardeo and I biked along a former rail line dotted with trestles that carried us high above the Cowichan River, swollen with spring runoff. In several places, winter storms had brought down trees that blocked the trail and we had to unhitch our panniers and work together to hoist bikes over the sprawling trunks. Dusk was falling as we reached our campground beside the river.

These times of connection - personal and ecological - are so essential. They remind us that technology, or our work in the world, is at most a detail, a small means to help us realize what really matters. What truly has value.

You seem to have caught me in a philosophical mood. We'd better stop here before I sound more sappy than I already do!

Dylan: Thanks for your time, Nedjo!

Apr 22 2015
Apr 22

As the potential release date for Drupal 8 slowly creeps up we've launched our first Drupal 8 site and are planning to kick off several more in the next few months. Through this process we've learned a lot about the reality of what it means to launch a site on beta software and what that means for your next project.

When do you need it done by?

Drupal 8 will be here soon, but your project may not need to be. If you are just starting to think about the strategy for your project now, and aren't planning on going into heavy development until later this summer, you should definitely be considering Drupal 8 as an option.

Do you REALLY need that feature?

Many custom features are provided by the 20,000-plus contributed modules in Drupal 7. When faced with the options of: 1) helping port a module to Drupal 8, 2) writing a new module that delivers that functionality, or 3) cutting the feature, option 3 probably starts to seem like a really good idea. If you have a large, complicated, inflexible scope, or like knowing that you can have your cake and eat it too, then 8 is probably not the best path for you.

Don't run back to Drupal 7 just yet: many Drupal 7 modules have already been included in Drupal 8 core. Foremost in this list is Views, which opens up a whole lot of advanced functionality without installing a single module. If you feel your project is simple enough to be built with Views, content types, taxonomy, and custom block types (yay!), you might want to try it out.

How high is your pain threshold?

Does the possibility of losing a few days or even a week of development time chasing down a core bug or upgrading from one beta to the next make you feel dizzy? Drupal 8 is still rough around the edges and there will be frustrating moments ahead. Make sure you, your stakeholders, and your development team are ready to roll with the punches and adapt in order to make your project a success.

Why would I even think about it?

All of the best developers I know are always looking for new things to learn and new tools to try. This often leads the best Drupal developers to begin to explore Ruby, Node.js, and other web frameworks and languages. The best part of Drupal 8 is you get access to what feels like a whole new framework to experiment with, best practice theme layer implementation, consistent use of object-oriented code, and it is all available to you as part of a project you already know. If you've been itching to try something new give Drupal 8 a try. That familiar territory will carry you a long way when things start to get frustrating (see: pain threshold above).

If you are a site owner and not really interested in learning Drupal 8 for your own edification what do you stand to gain? Your main benefit is that you get to be in at the very beginning of a brand new release cycle. If your project is simple now, but may grow into much more you will already be on a platform which will be gaining many great new features and enhancements for many years to come. A little pain now may pay off big in a year or two.

If you haven't yet, spend a couple of days giving Drupal 8 a try on your next project. If the worst case happens and everything comes crashing down around you, Drupal 7 will still be sitting there waiting for you.

Apr 22 2015
Apr 22

Author and software consultant LORNA JANE MITCHELL, fresh from her DrupalCon book-signing (PHP Web Services) and talk, is a frequent visitor to Amsterdam – she lives in Leeds, an hour flight away, and loves the city.

Apr 22 2015
Apr 22

What's new with Drupal 8?

Happy Earth Day! Since the last Drupal Core Update, the Drupal Developer Days event brought lots of exciting progress: we (briefly) reduced the number of critical issues to 35, and a week-long performance sprint made Drupal 8 2 to 20 times faster! Also, Gwendolyn Anello at DrupalEasy announced that DrupalEasy is partnering with Stetson University to offer Drupal courses!

Some other highlights of the month were:

How can I help get Drupal 8 done?

See Help get Drupal 8 released! for updated information on the current state of the release and more information on how you can help.

We're also looking for more contributors to help compile these posts. Contact mparker17 if you'd like to help!

Drupal 8 In Real Life

Whew! That's a wrap!

Do you follow Drupal Planet with devotion, or keep a close eye on the Drupal event calendar, or git pull origin 8.0.x every morning without fail before your coffee? We're looking for more contributors to help compile these posts. You could either take a few hours once every six weeks or so to put together a whole post, or help with one section more regularly. If you'd like to volunteer for helping to draft these posts, please follow the steps here!

Apr 22 2015
Apr 22

April 22, 2015

Ask This Flowchart: Should I take an Acquia Certification Exam?

If you are wondering which Acquia Certification is best for you, ask this handy flowchart to find your answer!


Still Not Sure?

We've been riding the Acquia Certification train for a while now. We're seasoned vets, so feel free to ask us any questions using the form below or check out Doug's tips to help prepare for the exam.



Apr 22 2015
Apr 22

So there I was, sitting in my batcave, minding my own business, wondering what to do with an afternoon. In truth I had plenty of work to do, but since I got paid yesterday, I felt as rich as Bruce Wayne himself and decided to take an afternoon off and splash out on an Acquia Certified Drupal Examination. Which one? Well, to start off, the site builder one. Did I pass? Read on, my friends.

What I expected

I wasn't expecting the exam to be too difficult. You need to get over 68% to pass the exam, but I've been building sites with Drupal since Drupal 6.3. It can't be that hard, can it? My presumption was that the questions might be a little tricky in parts, very easy in other areas, but nothing that I wouldn't be able for. I was aiming for in excess of 90%.

The exam I took was the online one, where you register in advance and set up Sentinel on your computer. That was quite unnerving and invasive, I thought. I signed up, created my Sentinel account, installed the software, then had to go through two recognition items. The first was to type my name 10 times to get a record of my typing style. The second was to take shots of my face to get a record of that. In all seriousness, I wasn't happy about this and certainly worry about the NSA and other bodies (inadvertently) having access to my personal data (as well as credit card details etc).

Timewise, I didn't see a problem. I had an hour and a half to get two thirds of 50 site building questions correct.

How it played out

In reverse order - you don't have an hour and a half, you have an hour and a quarter (though I only realised that after the exam - during it, I just thought the first 15 minutes went very quickly).

Doing an exam online with Sentinel is very unnerving. You are told it is a proctored exam, and I never knew if I was being watched or not, or if all the facial recognition stuff was doing the watching of me. If I tried to scratch my back, would a bot think I was trying to cheat? If I looked at the corner of my screen to see the time (Sentinel takes over your whole screen so you can't), would it be construed that I was trying to look at a second monitor? All in all, not a pleasant experience.

The exam is not "difficult" in the traditional sense, but it is very tricky. At times I thought the questions were not very clear, and on a number of occasions more than one response could have been correct.

What do you need to know? You need to know the admin pages of a Drupal website very well. Most importantly, you need to know the exact phrases that are used on admin pages and on what exact page they are on. You will not have the option to check.

You should also have a good understanding of the views module - how it works, what it does, when you might use it - especially in relation to contextual filters and relationships.

One question about stopping spam accounts had 4 options, two of which were: remove "create new account" permission from anonymous role; and, on account settings page set who can create accounts to administrator only. Both of these looked like likely answers to me (until I remembered there is no "create new account" permission). There were a lot of questions like that - be careful, read the question and the answers very closely. (Once or twice I tried to read out loud, but then got worried that the damn Sentinel would construe this as trying to communicate with someone.)

As mentioned above, there were a number of questions where I thought more than one answer could have been correct. For example: how would you install a new theme? Options included: 1) Go to the "Appearance" page, select "install a new theme" and follow the instructions; 2) Download the theme, place it in sites/default/themes and clear the cache. I went with the second option. I don't know if I was right, but either of those methods seems workable to me. Given my low score in that area, I've a feeling I was wrong. (Pity there wasn't a drush dl <project-name> option for that question!)

The result

I scored a decent 88% (which means I got 44 out of 50 questions correct). I'm happy(ish) with that and it's certainly spurred me on to do the rest of the exams. I'll take the front-end exam next, and then the backend one. I wouldn't be happy to use Sentinel again, so I think I'll sign up for the front-end exam at DrupalCon Barcelona.

Here are my exact results:

  • Overall Score: 88.00%
  • Result: Pass
  • Topic Level Scoring:
    • Section 1 - Drupal Features: 100.00%
    • Section 2 - Content and User Management: 100.00%
    • Section 3 - Content Modeling: 91.66%
    • Section 4 - Site Display: 70.00%
    • Section 5 - Community and Contributed Projects: 80.00%
    • Section 6 - Module and Theme Management: 75.00%
    • Section 7 - Security and Performance: 100.00%
Apr 22 2015
Apr 22

This week we’ll talk about Block Class, a very handy module that lets us add custom classes to every block we create.

Working on a project recently, I needed a different class for every block in the layout.

Searching on drupal.org, I found this module, which suits our case perfectly!

This module is very easy to use and, most importantly, it doesn’t require any programming skills.

Block Class is a contributed Drupal module, so to download it, simply go to its project page on drupal.org (https://www.drupal.org/project/block_class).

The installation of the module is the usual process, so I won’t dwell on it.

Once you have downloaded the module from the link given above, put it in the sites/all/modules/contrib folder and enable it on the modules page, which you can find at the following link: yourwebsite.it/admin/modules.

After activating it, go to the configuration form of a block and you will find a new field, as shown in the image:

After inserting the desired class, you will be able to see it in the page’s source, as shown in the image:


  • Remember to save the block!
  • If the class doesn’t show up, don’t panic: try clearing the cache!

And that’s all for this article! What do you think about this module? Will you use it? Let me know in the comments!

Apr 22 2015
Apr 22

When you save an entity (specifically, on an update) Drupal does a massive amount of work:

  • Retrieve an unchanged copy of the original entity (entity_load_unchanged)
  • Update the entity and entity_revision tables.
  • Issue an update to bind the revision id in the main entity table (even if unchanged)
  • Update all fields, with their revisions.
  • Invalidate the cache for the entity.
  • Trigger many, many hooks along the way

Our sample entity for this article has 12 fields, and the above adds up to an estimated total of:

  • Loading the unchanged entity: about 13 select statements (for fields + entity table)
  • Updating the entity and the revision: 2 update statements.
  • Updating the fields (with revisions disabled): 12 update statements.
  • Additional statements from modules whose hooks are triggered on an entity update, such as the metatag, search api or print modules.

We profiled a call to entity_save() for the code example you will find below in this article, with the following results (real):

  • A total of 191 statements issued against the database. Really? This entity only has 12 fields....
  • 35 of them for transaction management (SAVE TRANSACTION / SAVEPOINT)
  • 36 of them SELECT 1 - I'd like to find out where these are coming from
  • This leaves us with only 120 "real" statements, still too many for a 12-field entity.
  • About 20 as a result of calling the entity_update hook, performed by third-party modules (metatag, print, search api, node block and views_content_cache)

The complete statement trace of the Database Engine is provided as an attachment to this article to prove I'm not making this up and that they are real numbers.

This means that you cannot rely on using default entity storage techniques if you plan to produce maintainable, fast and scalable code.

With a more straightforward and simple database design, all of this could be done in a single statement.

The first thing to consider is installing and setting up the Field SQL No revisions module. In 99.99% of use cases, field revisions are not going to be used. By installing and configuring this module you are overriding the default Field SQL storage engine with one that does not store revision data for fields.

What if you just wanted to update one or two fields - or properties - from an entity?

We scouted the internet and found solutions that did not live up to our expectations, such as this one:

// Get the id of your field.
$field_name = 'name_of_your_field';
$info = field_info_field($field_name);
$fields = array($info['id']);

// Execute the storage function directly.
field_sql_storage_field_storage_write('model', $entity, 'update', $fields);

Or this one:

$field = new stdClass();
$field->type = 'story_cover'; // Content type name.
$field->nid = $node->nid; // Node id.
$field->field_number_of_pages[LANGUAGE_NONE][0]['value'] = 'YOUR_VALUE'; // Field value.
field_attach_update('node', $field);

You can clearly see (and it is not worth discussing) that these are unmaintainable, unintuitive, messy and error-prone approaches. They remind me of the PHP language itself, or a big chunk of the code in the PHP ecosystem: a fractal of bad design.

What we need is a solution that:

  • Is transparent, consistent and easy to use for the developer.
  • Perfectly integrates with current Entity manipulation tools.
  • Is as lightweight as possible, yet still does not completely drill through the different abstraction APIs.
  • Ideally, will only update the information that needs to be updated - only what has changed.

Consider the following piece of code:

// We are going to manipulate the inscription.
$inscription = UtilsEntity::entity_metadata_wrapper('node', (int) $inscription_id);
// The inscription itself must be linked to the order through
// a reference field.
$vinculado = FALSE;
foreach ($inscription->field_referencia_pedidos as $pedido) {
  if ($pedido->getIdentifier() == $order->order_id) {
    $vinculado = TRUE;
  }
}
if (!$vinculado) {
  // Link it if it was not already linked.
  $inscription->field_referencia_pedidos[NULL] = $order->order_id;
}
if (in_array($order->order_status, array('completed', 'payment_received'))) {
  if ($inscription->field_estado_insc->value() != 'paid') {
    $inscription->field_estado_insc = 'paid';
  }
}
elseif (in_array($order->order_status, array('canceled', 'pending', 'abandoned', 'in_checkout'))) {
  if ($inscription->field_estado_insc->value() == 'paid') {
    $inscription->field_estado_insc = 'pending';
  }
}
// Sleek save.
$inscription->save();

This is more or less good practice code regarding entity manipulation in Drupal. But no matter what happens in that code, when save() is called the whole entity save process is triggered, issuing a mind blowing number of database statements.

Before going any further, if we wanted to only update what has changed we would need to either:

  • Keep track of what has changed, and only update that.
  • Compare the current entity with the original one and detect all potential changes.

Each one of these has different implications:

Keeping track of changes: requires using a consistent way of manipulating the entity (possibly a wrapper) and any change done to the entity outside the wrapping mechanism will not be detected.

Compare to original: requires either detaching (cloning) the entity instance before manipulating it, or retrieving a fresh entity (entity_load_unchanged) from the database before the comparison. From a performance point of view the first one is the more feasible, but it requires the coder to always make sure that entities are detached (cloned) before being tampered with, because that is the only way to retrieve the original entity from cache without fully reloading it from storage.
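A minimal sketch of the "compare to original" approach, using a plain stdClass stub instead of a real Drupal entity: clone before tampering, then diff the two instances to find what needs persisting.

```php
<?php

// Stub entity for illustration; a real Drupal entity would be loaded
// via the entity API and carry many more fields.
$entity = new stdClass();
$entity->title = 'Old title';
$entity->status = 1;

// Detach (clone) the instance before manipulating it, so the cached
// original is still available for comparison.
$original = clone $entity;

$entity->title = 'New title';

// Diff the two instances to find what actually changed.
$changed = array();
foreach (get_object_vars($entity) as $name => $value) {
  if ($original->$name !== $value) {
    $changed[] = $name;
  }
}

print implode(',', $changed); // prints "title"
```

Only the names collected in $changed would then need to be written back to storage.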

We implemented sample approaches for each one of these, and finally concluded that the more natural and maintainable way to have sleek entity updates was to extend the EntityMetadataWrapper to track property and field changes, and to only update the data that has changed.

With this approach, the above sample piece of code will need (nearly) no changes at all. The metadata wrapper will detect what has changed and only update what is required. You can see the complete implementation at the end of the article (it's a trimmed-down version of the actual code; this is part of Fdf - FastDevelopmentFramework).

We had to:

  • Implement a custom version of entity_metadata_wrapper() that returns an instance of our derived version of EntityDrupalWrapper
  • Implement a derived class of EntityDrupalWrapper (FdfEntityMetadataWrapper) that keeps track of changes and overrides the default save() function to only update what is needed.
  • Not in the example: override EntityListWrapper, EntityStructureWrapper and other methods so that our FdfEntityMetadataWrapper is used consistently when interacting with the wrapper.

Don't worry, all this is just about 75 lines of code.
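The essence of those lines, reduced to a standalone toy (this is not the real FdfEntityMetadataWrapper, which extends EntityDrupalWrapper and issues targeted SQL): the wrapper records which names were set to a new value, and save() becomes a no-op when nothing changed.

```php
<?php

// Toy change-tracking wrapper, for illustration only.
class TrackingWrapper {
  private $entity;
  private $changed = array();

  public function __construct($entity) {
    $this->entity = $entity;
  }

  // Only track a change when the value actually differs.
  public function set($name, $value) {
    if ($this->entity->$name !== $value) {
      $this->entity->$name = $value;
      $this->changed[$name] = TRUE;
    }
  }

  // Returns the names that would be persisted; an empty array means
  // the save is skipped entirely (0 statements issued).
  public function save() {
    $to_persist = array_keys($this->changed);
    $this->changed = array();
    return $to_persist;
  }
}

$node = new stdClass();
$node->title = 'Inscription';
$node->field_estado_insc = 'pending';

$wrapper = new TrackingWrapper($node);
$wrapper->set('field_estado_insc', 'pending'); // Same value: not tracked.
print count($wrapper->save()); // prints 0: nothing to update

$wrapper->set('field_estado_insc', 'paid');
print count($wrapper->save()); // prints 1: only this field is written
```

This is exactly the behaviour that turns a no-change save() into zero database statements.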

What are the advantages of this strategy?

  • Very little disruption to existing code (if it is already using the EntityDrupalWrapper)
  • The coder need not think about how storage is managed, but must be aware of the internal behaviour of selective updates.
  • The entity and revision tables are only updated if any of the properties of the entity have been set through the wrapper, and only the changed table fields (properties) will be updated.
  • Only fields that have been set will be updated.
  • We are not drilling through the abstraction layer and into the storage engine like the approaches that use field_sql_storage_field_storage_write.
  • This manipulates the entity consistently as a whole, making the update transactional.

Remember that at the start of the article we traced a total of 191 statements - only 120 of them real - when performing the save() call, even though no data had changed in the entity?

Swapping the EntityWrapper for our custom wrapper with field/property change detection led to:

  • 0 statements if the entity was not manipulated.
  • 9 statements (4 real) if 1 single value field was updated.
  • 23 statements (15 real) if 2 fields were updated (one of them is a multivalue field with 4 values at the time of the insert)
  • 2 statements (2 real) if we changed any (or all) of the properties of the entity such as title, timestamp, etc.

With the change, the above logic has moved from always issuing 191 statements to regularly issuing 0 (the logic is called on every order update, but usually no changes are made to the $inscription entity), and to issuing at most 23 statements in the worst case, where both of the fields present in the snippet have been modified.

Of course, depending on your priorities, you could make different choices, such as updating the entity's timestamp when fields are updated, or triggering some of the hooks that have been omitted in the implementation. We decided to keep as many hooks as possible out of this path, because if any of them relies on changing anything in the entity, those changes will not be persisted: they will not have been detected by the FdfEntityMetadataWrapper.


<?php

namespace Drupal\fdf\Entity;

/**
 * Extends EntityDrupalWrapper to provide property and field
 * change tracking.
 */
class FdfEntityMetadataWrapper extends \EntityDrupalWrapper {

  /** @var string[] */
  private $changed_fields = array();

  /** @var string[] */
  private $changed_properties = array();

  /**
   * Permanently save the wrapped entity.
   *
   * @throws \EntityMetadataWrapperException
   *   If the entity type does not support saving.
   *
   * @return \EntityDrupalWrapper
   */
  public function save($fast = TRUE) {
    // Only save if fields or properties have changed.
    if (empty($this->changed_fields) && empty($this->changed_properties)) {
      return $this;
    }
    if ($this->data) {
      if (!entity_type_supports($this->type, 'save')) {
        throw new \EntityMetadataWrapperException("There is no information about how to save entities of type " . check_plain($this->type) . '.');
      }
      if (!$this->getIdentifier() || !$fast) {
        entity_save($this->type, $this->data);
      }
      else {
        static::UpdateEntityFast($this->type, $this->data, array_keys($this->changed_fields), array_keys($this->changed_properties));
      }
      // On insert, update the identifier afterwards.
      if (!$this->id) {
        list($this->id, , ) = entity_extract_ids($this->type, $this->data);
      }
    }
    // If the entity hasn't been loaded yet, don't bother saving it.
    return $this;
  }

  /**
   * Magic method: Set a property.
   */
  protected function setProperty($name, $value) {
    $info = $this->getPropertyInfo($name);
    if (isset($info['field']) && $info['field'] == TRUE) {
      $this->changed_fields[$name] = TRUE;
    }
    else {
      $this->changed_properties[$name] = TRUE;
    }
    parent::setProperty($name, $value);
  }

  /**
   * Update only the specified fields and properties for the entity
   * without triggering hooks and events.
   *
   * @param string $entity_type
   * @param mixed $entity
   * @param array $fields
   * @param array $properties
   */
  public static function UpdateEntityFast($entity_type, $entity, $fields, $properties) {
    $transaction = db_transaction();
    global $user;
    try {
      $info = entity_get_info($entity_type);
      $id = entity_id($entity_type, $entity);
      if (empty($id)) {
        throw new \Exception("This method can only be used to update existing entities.");
      }
      // Extract the ID and revision keys.
      $key_name = $info['entity keys']['id'];
      $key_revision = $info['entity keys']['revision'];
      if (!empty($fields)) {
        // Instance and type.
        $update_entity = new \stdClass();
        $update_entity->type = $entity->type;
        // Set the ID.
        $update_entity->{$key_name} = $id;
        // Copy the fields that we want to attach.
        foreach ($fields as $field) {
          $update_entity->{$field} = $entity->{$field};
        }
        // Update the fields.
        field_attach_presave($entity_type, $update_entity);
        field_attach_update($entity_type, $update_entity);
      }
      if (!empty($properties)) {
        // Update the main record.
        $record = new \stdClass();
        $record->changed = REQUEST_TIME;
        $record->timestamp = REQUEST_TIME;
        $record->{$key_name} = $id;
        foreach ($properties as $property) {
          $record->{$property} = $entity->{$property};
        }
        drupal_write_record($info['base table'], $record, $key_name);
        // Update the revision.
        $record->uid = $user->uid;
        $record->{$key_revision} = $entity->{$key_revision};
        drupal_write_record($info['revision table'], $record, $key_revision);
      }
      // Invalidate the cached copy of this entity.
      entity_get_controller($entity_type)->resetCache(array($id));
    }
    catch (\Exception $e) {
      $transaction->rollback();
      watchdog_exception('Fdf', $e);
      throw $e;
    }
  }

}


<?php

namespace Drupal\fdf\Utilities;

use Drupal\fdf\FdfCore;
use Drupal\fdf\Entity\FdfEntityMetadataWrapper;

class UtilsEntity {

  /**
   * Drop-in replacement for entity_metadata_wrapper() that returns our
   * change-tracking wrapper for entities.
   *
   * @param mixed $type
   * @param mixed $data
   * @param array $info
   *
   * @return \EntityDrupalWrapper|\EntityListWrapper|\EntityStructureWrapper|\EntityValueWrapper
   */
  public static function entity_metadata_wrapper($type, $data = NULL, array $info = array()) {
    if ($type == 'entity' || (($entity_info = entity_get_info()) && isset($entity_info[$type]))) {
      // If the passed entity is the global $user, we load the user object by
      // only passing on the user id. The global user is not a fully loaded
      // entity.
      if ($type == 'user' && is_object($data) && $data == $GLOBALS['user']) {
        $data = $data->uid;
      }
      return new FdfEntityMetadataWrapper($type, $data, $info);
    }
    elseif ($type == 'list' || entity_property_list_extract_type($type)) {
      return new \EntityListWrapper($type, $data, $info);
    }
    elseif (isset($info['property info'])) {
      return new \EntityStructureWrapper($type, $data, $info);
    }
    else {
      return new \EntityValueWrapper($type, $data, $info);
    }
  }

}

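With both classes in place, calling code swaps the core entity_metadata_wrapper() factory for our own and is otherwise unchanged. Here is a minimal usage sketch; the node ID $nid and the field name field_subscription_count are hypothetical placeholders, and the snippet assumes a bootstrapped Drupal 7 site:

```php
<?php

use Drupal\fdf\Utilities\UtilsEntity;

// Wrap an existing node exactly as you would with entity_metadata_wrapper().
$wrapper = UtilsEntity::entity_metadata_wrapper('node', $nid);

// Nothing has been set through the wrapper, so save() issues 0 statements.
$wrapper->save();

// Change one single-value field: save() now updates only that field.
$wrapper->field_subscription_count->set(5);
$wrapper->save();
```

Because setProperty() is the single point where changes are recorded, any value written directly to the underlying entity object (bypassing the wrapper) will not be detected and will not be saved.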
Apr 22 2015
Apr 22

The next beta release for Drupal 8 will be beta 10! (Read more about beta releases.) The beta is scheduled for Wednesday, April 29, 2015.

To ensure a reliable release window for the beta, there will be a Drupal 8 commit freeze from 00:00 to 23:30 UTC on April 29.

Apr 21 2015
Apr 21

28 pages of unmarred perfection. This book is pure unadulterated genius.

- Chris Arlidge

Never Be Shocked Again! - Budgeting your Web Project

Are you having trouble figuring out an appropriate budget for your next web project? Our whitepaper can help!

Download your FREE COPY, to learn the different types of website projects, understand the factors that play a role in the budgeting process, and determine where your web plans will fit when it comes to costs!

Don’t ever be shocked by web costs again! A clear guide to help you plan the budget for your next web project.

Drupal 7 is by far my favorite CMS to date, and Zurb Foundation is currently my go-to theme. Although I wouldn't really call Foundation a theme; it's more of a responsive front-end framework that you can use to build your themes from.

Here is how to setup a fresh copy of Drupal 7 and configure a Foundation sub-theme quickly to get your project up and running:

Install Drupal using Drush

Although you can do this all the old-fashioned way, I prefer to use drush for this. Here are the drush commands to make this all happen:

drush dl drupal --drupal-project-rename=drupalsitename

This command will download the latest version of drupal and rename the directory to “drupalsitename”. Rename “drupalsitename” to whatever you want to call your new drupal project.

cd drupalsitename

This command is straightforward - it puts you inside the drupal root directory, which is where you need to be for the next few commands.

drush site-install standard --account-name=username --account-pass=password --db-url=mysql://username:password@localhost/drupalsitedb

This command will configure drupal and create a new database. Replace the account username and password with whatever you want your drupal root user to be. Replace the mysql username and password with your proper mysql credentials, and rename “drupalsitedb” to whatever you want to call your new database.

Foundation Sub-theme using Drush

Now that drupal 7 has been installed, we need to download the latest Zurb Foundation theme. To do this, run the following command:

drush dl zurb_foundation

This will download the latest stable release of Zurb Foundation into the sites/all/themes directory.

At the time of writing this post, the latest release is 7.x-4.1 and uses Foundation 4. This will work just fine, however, I prefer to use Foundation 5, so you may want to download the latest version that runs on Foundation 5 (7.x-5.0-rc6).

To do this, you need to specify the version number when you download the theme like so:

drush dl zurb_foundation-7.x-5.0-rc6

Next we need to create a Foundation sub-theme. This is what we will build our new theme from. Run the following command:

drush fst drupalthemename

This will create a proper Foundation sub-theme and place it in the sites/default/themes directory. Replace “drupalthemename” with the name of your theme.

Finally, navigate to /admin/appearance and make sure your new theme is enabled and set to default. You can also do this in drush using the following command:

drush vset theme_default drupalthemename

At this point drupal is now setup and configured with your new custom Foundation sub-theme. It’s now time to roll up those sleeves and start coding!
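For convenience, the steps above can be collected into a single shell script. This is only a sketch: the site name, theme name, account credentials, and database URL are placeholders you must replace with your own values, and it assumes drush and MySQL are already installed:

```shell
#!/bin/sh
set -e

# Placeholders: replace these with your own values.
SITE=drupalsitename
THEME=drupalthemename
DB_URL=mysql://username:password@localhost/drupalsitedb

# Download Drupal and install it with the standard profile.
drush dl drupal --drupal-project-rename="$SITE"
cd "$SITE"
drush site-install standard --account-name=username --account-pass=password --db-url="$DB_URL" -y

# Download Zurb Foundation (Foundation 5 branch) and create a sub-theme.
drush dl zurb_foundation-7.x-5.0-rc6
drush fst "$THEME"

# Enable the sub-theme and make it the default.
drush en "$THEME" -y
drush vset theme_default "$THEME"
```

Running the script from an empty directory leaves you with a fresh site using your sub-theme as the default.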

Apr 21 2015
Apr 21

I’m turning 32 today. People love birthdays; to me it’s just another line number in a messed-up stack trace output (philosophy mode enabled).

Two years ago I released a drupal module called Get form id (deprecated from now on) that does one small task: it tells you any form's id and lets you copy & paste an alter hook for that form. Over time I created an improved version of the module for my personal usage and even wanted to release it as a “premium” module on CodeCanyon (yes, I know, I know ;), but then changed my mind and kept using it privately.

On birthdays I love to give presents instead of getting them. And today I felt that it would be great to release that code as a contrib Drupal module, so you all could use it. Since the functionality of the module is a bit wider than that of the initial version, I decided to rename it to “Devel Form Debug”. So please welcome the “Devel Form Debug” module, which can be described in one simple line: “Find out form IDs and print out form variables easily”. The module relies in part on the Devel module, so I decided to mention it in the title. Below you’ll see a quick video showing you what this little module can do for you:
Apr 21 2015
Apr 21

By default, Search API (Drupal 7) reindexes a node when the node gets updated. But what if you want to reindex a node or an entity on demand, or via some other hook, i.e. outside of the update cycle? It turns out to be quite a simple exercise. You just need to execute this function call whenever you want to reindex a node or entity:

  search_api_track_item_change('node', array($nid));

See this snippet at dropbucket: http://dropbucket.org/node/1600. search_api_track_item_change() marks the items with the specified IDs as "dirty", i.e., as needing to be reindexed. You need to supply this function with two arguments: the entity type ('node' in our example) and an array of entity IDs you want reindexed. Once you've done this, Search API will take care of the rest, as if you'd just updated your node or entity. Additional tip: in some cases it's worth clearing the field cache for an entity before sending it to reindex:

  // Clear field cache for the node.
  cache_clear_all('field:node:' . $nid, 'cache_field');
  // Reindex the node.
  search_api_track_item_change('node', array($nid));

This is the case when you manually save or update entity values via SQL queries and then want to reindex the result (for example, the radioactivity module doesn't save or update a node; it directly manipulates data in SQL tables). That way you'll ensure that Search API reindexes the fresh node or entity and not the cached one.
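To illustrate triggering the reindex from your own code, here is a hedged sketch of a helper in a custom module that writes a field value directly with a query and then marks the node dirty. The module name mymodule, the field table field_data_field_counter, and the column field_counter_value are hypothetical placeholders, and the snippet assumes a bootstrapped Drupal 7 site with Search API enabled:

```php
<?php

/**
 * Updates a counter field directly in the database, then reindexes the node.
 */
function mymodule_refresh_counter($nid, $count) {
  // Bypass node_save() and write the value straight to the field table.
  db_update('field_data_field_counter')
    ->fields(array('field_counter_value' => $count))
    ->condition('entity_type', 'node')
    ->condition('entity_id', $nid)
    ->execute();

  // Clear the field cache so Search API indexes the fresh value.
  cache_clear_all('field:node:' . $nid, 'cache_field');

  // Mark the node as needing reindexing.
  search_api_track_item_change('node', array($nid));
}
```

On the next indexing run (or immediately, if "Index items immediately" is enabled on the index), Search API will pick up the new value.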