Apr 21 2015

I’m turning 32 today. People love birthdays; to me it’s just another line number in a messed-up stack trace (philosophy mode enabled).

Two years ago I released a Drupal module called Get form id (deprecated from now on) that does one small task: it tells you any form's ID and lets you copy and paste an alter hook for that form. Over time I created an improved version of the module for my personal use and even wanted to release it as a “premium” module on CodeCanyon (yes, I know, I know ;), but then changed my mind and kept using it privately.

On birthdays I love to give presents instead of getting them. And today I felt it would be great to release that code as a contrib Drupal module so you all could use it. Since the functionality of the module is a bit wider than that of the initial version, I decided to rename it to “Devel Form Debug”. So please welcome the “Devel Form Debug” module, which can be described in one simple line: “Find out form IDs and print out form variables easily”. The module relies in part on the Devel module, so I decided to mention it in the title. Below you’ll see a quick video showing what this little module can do for you:
Apr 21 2015

By default, Search API (Drupal 7) reindexes a node when the node gets updated. But what if you want to reindex a node or an entity on demand, or from some other hook, i.e. outside of the update cycle? It turns out to be a quite simple exercise. You just need to execute this function call whenever you want to reindex a node or entity:

search_api_track_item_change('node', array($nid));

See this snippet at dropbucket: http://dropbucket.org/node/1600

search_api_track_item_change() marks the items with the specified IDs as "dirty", i.e. as needing to be reindexed. You need to supply this function with two arguments: the entity type ('node' in our example) and an array of entity IDs you want reindexed. Once you've done this, Search API will take care of the rest as if you had just updated your node or entity.

Additional tip: in some cases it's worth clearing the field cache for an entity before sending it to reindex:

// Clear field cache for the node.
cache_clear_all('field:node:' . $nid, 'cache_field');
// Reindex the node.
search_api_track_item_change('node', array($nid));

This is the case when you manually save or update entity values via SQL queries and then want to reindex the result (for example, the Radioactivity module doesn't save or update a node; it manipulates data directly in SQL tables). That way you ensure that Search API reindexes the fresh node or entity and not a cached one.

Apr 21 2015

I had a case recently where I needed to add custom data to the node display and wanted this data to behave like a field, although the data itself didn't belong to a field. By "behaving like a field" I mean you can see that field in the node display settings and control its visibility, label and weight by dragging and dropping it. So, as you may have understood, the hook_preprocess_node / hook_node_view_alter approach alone wasn't enough. But this is Drupal, right? Then there should be a clever way to do what we want, and here it is: hook_field_extra_fields() comes to help! hook_field_extra_fields() (docs: https://api.drupal.org/api/drupal/modules!field!field.api.php/function/hook_field_extra_fields/7) exposes "pseudo-field" components on fieldable entities. Neat! Here's how it works. Let's say we want to expose a welcoming text message as a field for a node; here's how we do that:

/**
 * Implements hook_field_extra_fields().
 */
function MODULE_NAME_field_extra_fields() {
  $extra['node']['article']['display']['welcome_message'] = array(
    'label' => t('Welcome message'),
    'description' => t('A welcome message'),
    'weight' => 0,
  );
  return $extra;
}

As you see in the example above, we used hook_field_extra_fields() to define an extra field for the 'node' entity type and the 'article' bundle (content type). You can actually choose any other type of entity that's available on your system (think user, taxonomy_term, profile2, etc). Now if you clear your cache and go to the display settings for Node -> Article, you should see a 'Welcome message' field available. Ok, the last bit is to actually force our "extra" field to output some data; we do this in hook_node_view():

/**
 * Implements hook_node_view().
 */
function MODULE_NAME_node_view($node, $view_mode, $langcode) {
  // Only show the field for nodes of the article type.
  if ($node->type == 'article') {
    $node->content['welcome_message'] = array(
      '#markup' => 'Hello and welcome to our Drupal site!',
    );
  }
}

That should be all. Now you should see a welcome message on your node page. Please note: if you're adding an extra field to another entity type (like taxonomy_term, for example), you should do the last bit in that entity's _view() hook.

UPDATE: I put code snippets for this tutorial at dropbucket.org here: http://dropbucket.org/node/1398

Apr 21 2015

I'm a big fan of fighting with Drupal's inefficiencies and bottlenecks. Most of these come from contrib modules. Every time we install a contrib module, we should be ready for the surprises that come on board with it.

One of the latest examples is Menu item visibility (https://drupal.org/project/menu_item_visibility), which turned out to be a big troublemaker on one of my client's sites. Menu item visibility is a simple module that lets you define link visibility based on a user's role. Simple and innocent... until you look under the hood.

The thing is, Menu item visibility stores its data in the database and runs a query for every menu item on the page. In my case it produced around 30 queries per page and 600 queries on menu/cache rebuild (which normally equals the number of menu items you have in your system).

The functionality this module gives to an end user is good and useful (according to drupal.org, 6,181 sites currently report using it), but as you see, storing these settings in the database can become a huge bottleneck for your site. I looked at the Menu item visibility source and came to this "in code" solution that fully replicates the module's functionality but stores the data in code.

Step 1.

Create a custom module and call it something like Better menu item visibility, machine name: better_menu_item_visibility.

Step 2.

Let's add the first function that holds our menu link item ID (mlid) and role ID (rid) data:

/**
 * Returns a list of mlids with the roles that have access to the link items.
 * You can change the list to add new menu items and/or roles.
 * The list is presented in the format:
 * 'mlid' => array('role_id', 'role_id'),
 */
function better_menu_item_visibility_menu_item_visibility_role_data() {
  return array(
    '15' => array('1', '2'),
    '321' => array('1'),
    '593' => array('3'),
    // Add as many combinations as you want.
  );
}

This function returns an array of menu link item IDs and the roles that can access each item. If you already have Menu item visibility installed, you can easily port the data from the db table {menu_links_visibility_role} into this function.

Step 3.

And now let's do the dirty job and process the menu items:

/**
 * Implements hook_translated_menu_link_alter().
 */
function better_menu_item_visibility_translated_menu_link_alter(&$item, $map) {
  if (!empty($item['access'])) {
    global $user;
    // Menu administrators can see all links.
    if ($user->uid == '1' || (strpos(current_path(), 'admin/structure/menu/manage/' . $item['menu_name']) === 0 && user_access('administer menu'))) {
      return;
    }
    $visibility_items_for_roles = better_menu_item_visibility_menu_item_visibility_role_data();
    if (!empty($visibility_items_for_roles[$item['mlid']]) && !array_intersect($visibility_items_for_roles[$item['mlid']], array_keys($user->roles))) {
      $item['access'] = FALSE;
    }
  }
}

In short, this function skips the access check for user 1 and for users with the 'administer menu' permission, and checks access for the menu link items listed in better_menu_item_visibility_menu_item_visibility_role_data(). As you see, instead of querying the database it gets its data from code, which is really fast. Let me know what you think and share your own ways of fighting Drupal's inefficiencies.

Apr 21 2015

Drupal Views offers us a cool feature: ajaxified pagers. When you click on a pager, it changes the page without reloading the main page itself and then scrolls to the top of the view. It works great, but sometimes you may encounter a problem: if you have a fixed header on your page (one that stays on top when you scroll), it will overlap the top of your view container, so the scroll to top won't land precisely and the header will cover the top part of your view.

I've just encountered that problem and am making a note here for my future self and, probably, yourself about how I solved it. If you look into the Views internals, you'll see it uses an internal Drupal JS framework command called viewsScrollTop that's responsible for scrolling to the top of the container. What we need here is to override this command to add some offset for the top of our view.

1. Overriding JS Command

Thankfully, Views is flexible enough and provides hook_views_ajax_data_alter(), so we can alter JS data and commands before they get sent to the browser. Let's overwrite the viewsScrollTop command with our own. In your custom module, put something like this:

/**
 * Implements hook_views_ajax_data_alter().
 *
 * This hook allows altering the commands which are used on a Views ajax
 * request.
 *
 * @param $commands
 *   An array of ajax commands.
 * @param $view
 *   The view which is requested.
 */
function MODULE_NAME_views_ajax_data_alter(&$commands, $view) {
  // Replace Views' method for scrolling to the top of the element with your
  // custom scrolling method.
  foreach ($commands as &$command) {
    if ($command['command'] == 'viewsScrollTop') {
      $command['command'] = 'customViewsScrollTop';
    }
  }
}

Now, every time Views emits a viewsScrollTop command, we replace it with our custom customViewsScrollTop.

2. Creating custom JS command

Ok, a custom command is just a JS function attached to the Drupal global object. Let's create a JS file and put this into it:

(function ($) {
  Drupal.ajax.prototype.commands.customViewsScrollTop = function (ajax, response, status) {
    // Scroll to the top of the view. This will allow users
    // to browse newly loaded content after e.g. clicking a pager
    // link.
    var offset = $(response.selector).offset();
    // We can't guarantee that the scrollable object should be
    // the body, as the view could be embedded in something
    // more complex such as a modal popup. Recurse up the DOM
    // and scroll the first element that has a non-zero top.
    var scrollTarget = response.selector;
    while ($(scrollTarget).scrollTop() == 0 && $(scrollTarget).parent()) {
      scrollTarget = $(scrollTarget).parent();
    }
    var header_height = 90;
    // Only scroll upward.
    if (offset.top - header_height < $(scrollTarget).scrollTop()) {
      $(scrollTarget).animate({scrollTop: (offset.top - header_height)}, 500);
    }
  };
})(jQuery);

As you may see, I just copied the standard Drupal.ajax.prototype.commands.viewsScrollTop function and added a header_height variable that equals the height of the fixed header. You may play with this value and set it according to your own needs. Note the name of the function, Drupal.ajax.prototype.commands.customViewsScrollTop; the last part should match your custom command name. Save the file in your custom module dir. In my case it's: custom_views_scroll.js

3. Attaching JS to the view

There are multiple ways to do this; let's go with the simplest one. In your custom_module.info file, add scripts[] = js/custom_views_scroll.js and clear caches; that will make this file autoloaded on every page load. That's all. From now on, your Views ajax page scrolls are powered by your customViewsScrollTop instead of the stock viewsScrollTop. See the difference?

Apr 21 2015

This screencast shows how you can use a cloud provider like Digital Ocean to install a working copy of ELMSLN by copying and pasting the following line into the terminal:

yes | yum -y install git && git clone https://github.com/btopro/elmsln.git /var/www/elmsln && bash /var/www/elmsln/scripts/install/handsfree/centos/centos-install.sh elmsln ln elmsln.dev http [email protected] yes

Obviously, this is for development purposes only at this time, but I hope it shows you a glimpse of the level of automation we are getting to and where you could easily take this in the future. We already have a Jenkins instance on campus that can perform these same operations against any new server after doing some manual (ew) SSH hand shakes and user account creation.

I pause in the video to let the installer run through, but still, that's all it takes to get it up and going: copy and paste into a new CentOS 6.5 box and hang out for a few minutes. While you wait, you can modify your local /etc/hosts file to point to the address (the command will print what to copy and paste where at the end).

This will eventually replace the current Vagrant install routine so that we're using the exact same routine for blank servers, for Travis builds, for Vagrant and beyond. If you need more help with setting up SSH keys and using the Digital Ocean interface, some tutorials are linked below.

Note: This is not an endorsement of Digital Ocean and they are simply included in this article because I've been messing around with their service for testing purposes.

Apr 21 2015

Welcome to the new Drupal @ PSU!

We hope you enjoy the site so much that we want you to have it. No really, go ahead, take it. Steal this site. We did, and we’re proud of that fact. This site is actually a fork of the Office of Digital Learning’s new site that just launched recently.

How did we steal this? Using Backup and Migrate, and having access to the source for the theme, we were able to spin the ODL site up at a new location. From there, we used Drush Recipes to further optimize the site for SEO and performance. Then, using Node Export to migrate a few default pieces of content, Default Config + Features for all the exportables, and Profiler Builder to author the install routine, we have the same site as before, but sanitized and fully released to the public (including ourselves)!

This post isn’t just to brag about open sourcing a site, though; it’s about how you can now spin this site (or any other) up really fast in PSU DUG’s Nittany Vagrant project. Nittany Vagrant has a few goals that make it unique compared to other Vagrant offerings:

We want to boil our jobs down to answering questions

We all run almost the same series of commands to get started: drush dl views, ctools, features, admin_menu, jquery_update, etc. So instead of all that, we wanted to come up with a method of standardizing site builds to put newbies on the same playing field as long-time developers. This has always been a driving force behind the Nittany distribution, but now we’re taking it a step further. Instead of having a traditional install profile, we have a server and site build process that tries to get at what you are looking to accomplish.

Right now it asks if you are doing front-end development, if you need SEO tools, if you want the baseline we recommend, and it even gives you the option of starting your new site build from the Publicize distribution, from Drupal in a repo (which could be an install profile at that point), or by SSH binding to a server and pulling a site down to work on locally!

We want to mirror a RHEL-based environment

Most setups you see are Ubuntu / Debian based. That’s great, except most of higher education has standardized development on RHEL for long-term stability and support (RHEL supports packages for something like 20 years). Nittany Vagrant is built on top of CentOS 6.5, which is about as similar as you’ll get without paying money.

We wanted a small, clean Vagrant routine

We’ve been burned before by Chef and provisioning scripts in Vagrant. We end up using Chef for Vagrant management and then managing Chef and its cookbooks more than just fixing the small issues that pop up. This time around we wanted to use something as simple as possible so everyone in our community could jump in and help, so we stuck with bash, drush and drush recipes to do all the heavy lifting.

It’s already helping get other people involved: we now have 6 members of the local PSU DUG community with commits to the Nittany-Vagrant repo, where in the past it was always 1 or 2.

Show me

Here’s a video I recorded showing the current state of Nittany Vagrant as of this writing and what you can do with it now just by answering questions. This is only the beginning, though, as we want to get into site provisioning and replication, as well as the ability to move this entire process from Vagrant to remote servers (which has already been shown to work on Digital Ocean).

Apr 21 2015

The year is 2020.

We’ve managed, through automated testing, Travis CI, Jenkins CI and crontabs, to completely eliminate downtime and maintenance windows while at the same time increasing security and reducing strain on sysadmins. There are very few “sysadmins” in the traditional sense. We are all sysadmins through Vagrant, Docker, and virtualization. We increasingly care less about how scripts work and more about the fact that they do, and that machines have given us feedback ensuring their security. They don’t hope our code is secure; they assume that it isn’t, and treat every line authored by a human as insecure.

We’ve managed to overcome humanity’s mountains of proprietary vendors, replacing their code and control with our own big ideas, acted upon by a community that helps build the tools we need. We have begun to bridge the digital divide on the internet, not through training, but by refusing to be solely driven by financial gains.

We are open source. And we are a driving force that will bring about the Singularity (if you believe in such a thing). So we did it, gang. It took a while but wow, we do almost nothing and we’ll always be employed because we know how the button or the one-line script works. Congrats! Time to play Legos and chew bubble gum, right?

Or is this just some far-off, insane utopia that developers talk about over wine in “The Valley”?

This vision of the future, 5 years out, isn’t as crazy as it might sound if you see the arc of humanity. In fact, I actually believe we’ll start to get to that point closer to 2018 in my own work, and at a scale thousands of times beyond what was previously thought possible. This is because of the convergence of several technologies as well as the stabilization of many platforms we’ve been building for some time; yes, all that work we’ve been doing, releasing to GitHub, drupal.org, and beyond… it’s now just infrastructure.

Today’s innovation is tomorrow’s infrastructure, and this becomes the assumed playing field on which new ideas stand. Take Drupal’s old slogan, for instance: Community Plumbing. That thing we all paid for significantly at one time, but now just take for granted; that thing is becoming even more powerful than any of us could have imagined.

So, enough platitudes. (ok maybe just one more)

“Roads? Where we’re going, we don’t need roads.”

This next series of posts that will start to roll out here are things I’d normally save for presentations, war gaming sessions and late night ramblings with trusted colleagues. I’m done talking about the future in whispers; it’s time to share where we’re going by looking at all the roads and how they’ll converge. After all, the future of humanity demands it.

If you haven’t seen the video “Humans need not apply” then I suggest you watch it now and think. What can I do to help further bring about the end of humanity… Ok wait, not that, that’s too dark. Let’s try again…

What can I do, and what knowledge do I invest in, to be on the side developing the machines instead of the side automated into mass extinction (career-wise)?

Hm… still pretty dark? No, no, that’s pretty spot on. What job are you working towards today so that 5 years from now you are still relevant? I hope that the videos to be released in the coming days provide a vision of where we’re heading (collectively) as well as some things to start to think about in your own job.

What are you doing right now that can change the world if only you spoke of it in those terms?

Apr 21 2015

Whenever there is a constraint on the number of developers in a pool, solving issues gets harder. As we have been developing Nittany-Vagrant, I have found that there is definitely a smaller pool of developers running a Microsoft Windows host for their Vagrant-based virtual machines.

The extra-credit problem of the day for me was how to allow Vagrant to automatically size a virtual machine's memory pool when using VirtualBox as the VM provider on Windows. This is a well-known solution on OS X:

`sysctl -n hw.memsize`.to_i / 1024 / 1024 / 4

However, it took a bit of digging to figure out a way that could be reliably reproduced for the folks that are using our Nittany-Vagrant tool on Windows hosts.

Standard with Microsoft Windows is a command-line interface to Windows Management Instrumentation (WMI) called WMIC. WMIC has a component that allows you to read/edit system information. In particular:

wmic os get TotalVisibleMemorySize

will pull the total memory that is available to Windows. However, it also produces a number of headers that we don't want. We can then use grep to pull the line that starts with digits:

grep '^[0-9]'

We then use Ruby's .to_i to cast the result to an integer (just to be sure) and divide it by 1024 (to get MB instead of KB).

We then only want to use 1/4 of the RAM, so we divide it by 4 and slap it into the vm memory size customization:

v.customize ["modifyvm", :id, "--memory", mem]
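As a quick sanity check of the arithmetic, here's a small Ruby sketch. The raw string below is a hypothetical WMIC capture for a 16 GB host (a real Vagrantfile shells out with backticks instead):

```ruby
# Hypothetical output of `wmic os get TotalVisibleMemorySize` on a 16 GB host;
# the first line is the header we need to throw away.
raw = "TotalVisibleMemorySize\n16777216\n\n"

# Keep the line that starts with digits (what the grep filter does),
# cast it to an integer, convert KB to MB, then take a quarter of it.
kb = raw.lines.find { |line| line =~ /^[0-9]/ }.to_i
mem = kb / 1024 / 4

puts mem  # 4096 MB, i.e. a quarter of 16 GB
```

The same `.to_i / 1024 / 4` chain is what ends up inline in the Vagrantfile snippet at the end of this post.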

That's it! That's all... but there were a few other catches that made this ... less intuitive than one would hope.

  1. grep doesn't exist on Windows systems. However, if you install git-scm for Windows, it installs a number of GNU-like tools, including ssh and grep. If you ensure that these (located in "C:\Program Files (x86)\Git\bin") are on your path, you get grep. The easiest way to make sure this happens is to select the third option, "Use GIT and optional Unix tools from the Windows Command Prompt", when installing git-scm. Winner.
  2. determining your host... In a Vagrantfile, RbConfig::CONFIG['host_os'] will report what your host operating system is... kind of... With git-scm installed, it reports as "mingw32". With this, you can specify the command to run to retrieve the memory size based upon what OS is being reported. This is more trial and error than it probably should be; however, it does work. If necessary, use:
    puts RbConfig::CONFIG['host_os']
    to see what your OS is reporting so you can branch appropriately in your Vagrantfile.

This leaves us with:

if host =~ /mingw32/
  mem = `wmic os get TotalVisibleMemorySize | grep '^[0-9]'`.to_i / 1024 / 4
  if mem < 1024
    mem = 1024
  end
  v.customize ["modifyvm", :id, "--memory", mem]
end

Hopefully, that helps some independent Windows-based developer out there... Cheers, +A

Header image - Vagrant - courtesy of mike krzeszak Creative Commons 2.0

Apr 20 2015

Cruising down the Amstel River on a blissfully warm Amsterdam evening – drink in hand – the fabulous FABIAN FRANZ (Senior Performance Engineer & Technical Lead, Tag1 Consulting) opens up about DrupalCons, chance encounters, caching, salsa dancing, love, and what to listen to when you’re programming.

Apr 20 2015

Almost a year ago we started putting together a site that needed to integrate with our main library's search engine.  We used Drupal's RESTful services to expose our content, but ran into a problem with getting aliased paths to link up correctly.  What this meant was that while http://www.bioconnector.virginia.edu/content/introduction-allen-mouse-brain-atlas-online-tutorial-suite worked fine, http://www.bioconnector.virginia.edu/content/introduction-allen-mouse-br... didn't... this became really problematic when we were trying to create linked data, and traversing was just obnoxious... https://www.bioconnector.virginia.edu/node/36.json just doesn't roll off the digital tongue... As a workaround we used Views to do some wonkiness... it worked, but certainly was not "the Drupal way."

Finally, about a week ago, Jaskaran.nagra decided to tackle this issue... after a minor debug session and testing in our development environment, we've put the tool to work on production without incident.  https://www.drupal.org/sandbox/jaskaran.nagra/2470461 is the sandbox for this tool, as jaskaran.nagra is still working on getting project approval :)

https://www.bioconnector.virginia.edu/content/introduction-allen-mouse-b... now renders properly, as does https://www.bioconnector.virginia.edu/content/introduction-allen-mouse-b..., and https://www.bioconnector.virginia.edu/content/introduction-allen-mouse-b...  

In terms of 18F API compliance, this is an additional tool for improving usability on the site. Big thanks to jaskaran.nagra for picking up this project - it's simple, straightforward, and delivers a very handy accessory for restws. Thanks!

Apr 20 2015

In this article, we are going to look at how we can create a Drupal module which will allow your users to like your posts. The implementation will use jQuery to make AJAX calls and save this data asynchronously.


Creating your Drupal like module

Let’s start by creating the new Drupal module. To do that, we should first create a folder called likepost in the sites/all/modules/custom directory of your Drupal installation, as shown below:

Initial folder structure

Inside this folder, you should create a file called likepost.info with the following contents:

name = likepost
description = This module allows the user to like posts in Drupal.
core = 7.x

This file is responsible for providing metadata about your module. This allows Drupal to detect and load its contents.

Next, you should create a file called likepost.module in the same directory. After creating the file, add the following code to it:

/**
 * @file
 * This is the main module file.
 */

/**
 * Implements hook_help().
 */
function likepost_help($path, $arg) {
  if ($path == 'admin/help#likepost') {
    $output = '<h3>' . t('About') . '</h3>';
    $output .= '<p>' . t('This module allows the user to like posts in Drupal.') . '</p>';
    return $output;
  }
}
Once you have completed this you can go to the modules section in your Drupal administration and should be able to see the new module. Do not enable the module yet, as we will do so after adding some more functionality.

Creating the schema

Once you have created the module file, create a likepost.install file inside the module root folder. Inside it, you will define a table schema to store the likes on each post for each user. Add the following code to the file:


/**
 * Implements hook_schema().
 */
function likepost_schema() {
  $schema['likepost_table_for_likes'] = array(
    'description' => t('Add the likes of the user for a post.'),
    'fields' => array(
      'userid' => array(
        'type' => 'int',
        'not null' => TRUE,
        'default' => 0,
        'description' => t('The user id.'),
      ),
      'nodeid' => array(
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => TRUE,
        'default' => 0,
        'description' => t('The id of the node.'),
      ),
    ),
    'primary key' => array('userid', 'nodeid'),
  );
  return $schema;
}

In the above code we are implementing hook_schema() in order to define the schema for our table. The tables defined within this hook are created during the installation of the module and removed during uninstallation.

We defined a table called likepost_table_for_likes with two fields: userid and nodeid. They are both integers and will store one entry per userid – nodeid combination when the user likes a post.

Once you have added this file, you can install the module. If everything has gone correctly, your module should be enabled without any errors and the table likepost_table_for_likes should be created in your database. You should also see the help link enabled in the module list next to your likepost module. If you click on that you should be able to see the help message you defined in the hook_help() implementation.

Help Message

Creating a menu callback to handle likes

Once we have enabled the module, we can add a menu callback which will handle the AJAX request to add or delete the like. To do that, add the following code to your likepost.module file

/**
 * Implements hook_menu().
 */
function likepost_menu() {
  $items['likepost/like/%'] = array(
    'title' => 'Like',
    'page callback' => 'likepost_like',
    'page arguments' => array(2),
    'access arguments' => array('access content'),
    'type' => MENU_SUGGESTED_ITEM,
  );
  return $items;
}

function likepost_like($nodeid) {
  $nodeid = (int) $nodeid;
  global $user;

  $like = likepost_get_like($nodeid, $user->uid);

  if ($like !== 0) {
    db_delete('likepost_table_for_likes')
      ->condition('userid', $user->uid)
      ->condition('nodeid', $nodeid)
      ->execute();
    // Update the like value, which will be sent as response.
    $like = 0;
  }
  else {
    db_insert('likepost_table_for_likes')
      ->fields(array(
        'userid' => $user->uid,
        'nodeid' => $nodeid,
      ))
      ->execute();
    // Update the like value, which will be sent as response.
    $like = 1;
  }

  $total_count = likepost_get_total_like($nodeid);
  drupal_json_output(array(
    'like_status' => $like,
    'total_count' => $total_count,
  ));
}

/**
 * Return the total like count for a node.
 */
function likepost_get_total_like($nid) {
  $total_count = db_query('SELECT count(*) FROM {likepost_table_for_likes} WHERE nodeid = :nodeid',
    array(':nodeid' => $nid))->fetchField();
  return (int) $total_count;
}

/**
 * Return whether the current user has liked the node.
 */
function likepost_get_like($nodeid, $userid) {
  $like = db_query('SELECT count(*) FROM {likepost_table_for_likes} WHERE nodeid = :nodeid AND userid = :userid',
    array(':nodeid' => $nodeid, ':userid' => $userid))->fetchField();
  return (int) $like;
}

In the above code, we are implementing hook_menu() so that whenever the path likepost/like is accessed with the node ID, it will call the function likepost_like().

Inside likepost_like() we get the node ID and the logged-in user's ID and pass them to likepost_get_like(). In likepost_get_like() we check our likepost_table_for_likes table to see whether this user has already liked this post. If they have, we delete that like; otherwise we insert an entry. Once that is done, we call likepost_get_total_like() with the node ID as a parameter, which calculates the total number of likes from all users on this post. These values are then returned as JSON using the drupal_json_output() API function.

This menu callback will be called from our jQuery AJAX call and will update the UI with the JSON it receives.

Displaying the Like button on the node

Once we have created the callback, we need to show the like link on each of the posts. We can do so by implementing hook_node_view() as below:

/**
 * Implements hook_node_view().
 */
function likepost_node_view($node, $view_mode) {
    if ($view_mode == 'full') {
        $node->content['likepost_display'] = array('#markup' => display_like_post_details($node->nid), '#weight' => 100);

        $node->content['#attached']['js'][] = array('data' => drupal_get_path('module', 'likepost') . '/likepost.js');
        $node->content['#attached']['css'][] = array('data' => drupal_get_path('module', 'likepost') . '/likepost.css');
    }
}


/**
 * Displays the Like post details.
 */
function display_like_post_details($nid) {
    global $user;
    $totalLike = likepost_get_total_like($nid);
    $hasCurrentUserLiked = likepost_get_like($nid, $user->uid);

    return theme('like_post', array('nid' => $nid, 'totalLike' => $totalLike, 'hasCurrentUserLiked' => $hasCurrentUserLiked));
}
/**
 * Implements hook_theme().
 */
function likepost_theme() {
    $themes = array(
        'like_post' => array(
            'arguments' => array('nid', 'totalLike', 'hasCurrentUserLiked'),
        ),
    );
    return $themes;
}

function theme_like_post($arguments) {
    $nid = $arguments['nid'];
    $totalLike = $arguments['totalLike'];
    $hasCurrentUserLiked = $arguments['hasCurrentUserLiked'];
    global $base_url;
    $output = '<div class="likepost">';
    $output .= 'Total number of likes on the post are ';
    $output .= '<div class="total_count">'.$totalLike.'</div>';

    if ($hasCurrentUserLiked == 0) {
        $linkText = 'Like';
    } else {
        $linkText = 'Delete Like';
    }

    $output .= l($linkText, $base_url . '/likepost/like/' . $nid, array('attributes' => array('class' => 'like-link')));

    $output .= '</div>';
    return $output;
}

Inside likepost_node_view() we check whether the node is in the full view mode and add the markup returned by display_like_post_details(). We also attach our custom JS and CSS files using the #attached property on the node content. In display_like_post_details() we get the total number of likes for the post and whether or not the current user has liked it. Then we call the theme function, which invokes theme_like_post(); we declared this in our implementation of hook_theme(), so designers can still override it if required. In theme_like_post() we build the HTML output accordingly. The href on the link is the $base_url with the path to our callback appended; the node ID is also attached to the URL and will be passed as a parameter to the callback.

Once this is done, add a file likepost.css to the module root folder with the following contents:

.likepost {
    border-style: dotted;
    border-color: #98bf21;
    padding: 10px;
}

.total_count {
    font-weight: bold;
}

.like-link {
}

.like-link:hover {
    color: red;
}

Now if you go to the complete page of a post you will see the Like post count as shown below.

Adding the jQuery logic

Now that we see the like link displayed, we will just have to create the likepost.js file with the following contents:

jQuery(document).ready(function () {

    jQuery('a.like-link').click(function () {
        jQuery.ajax({
            type: 'POST',
            url: this.href,
            dataType: 'json',
            success: function (data) {
                if (data.like_status == 0) {
                    jQuery('a.like-link').html('Like');
                }
                else {
                    jQuery('a.like-link').html('Delete Like');
                }
                jQuery('.total_count').html(data.total_count);
            },
            data: 'js=1'
        });

        return false;
    });
});

The above code binds the click event to the like link and makes an AJAX request to the URL of our callback menu function. The latter will update the like post count accordingly and then return the new total count and like status, which is used in the success function of the AJAX call to update the UI.

Updated UI with Like count


jQuery and AJAX are powerful tools to create dynamic and responsive websites. You can easily use them in your Drupal modules to add functionality to your Drupal site, since Drupal already leverages jQuery for its interface.

Have feedback? Let us know in the comments!

Apr 20 2015
Apr 20

I want to give a big thank you to all of our new and renewing members who gave funds to continue the work of the Drupal Association in the first quarter of this year. We couldn't do much without your support. Shout outs to all of you!

Membership Makes a Difference

We had several recap blog posts a few weeks ago, but just as a reminder, your membership is incredibly important not only to us, but to the project too! Dues from memberships go to fund initiatives like our community cultivation grants, which help people around the world build their local Drupal communities and improve the project. For more information on how membership makes a difference, check out this infographic or see what changes are coming in 2015.

Thank You, Members!!

There are 845 fantastic members on our list of first quarter donors. You can find the list here. Let's give them a big thank you all together!

Apr 20 2015
Apr 20

The following blog was written by Drupal Association Supporting Partner and DrupalCon Los Angeles Diamond Sponsor Blink Reaction and Propeople.

DrupalCons are a very important time of the year for the Drupal community. It is a time for us to come together and continue our collaboration that we share throughout the year in a virtual space and establish goals and plans to move forward in a way that is in the community’s best interest. It is also a time to take stock of our accomplishments, and who we are as a community. One of our favorite moments in DrupalCons is the group picture: it’s always amazing to see how the community stands together and continues to grow.

This year’s DrupalCon in Los Angeles is especially important to us because this is where we will unveil the name of the new Drupal agency that consists of the companies formerly known as Blink Reaction and Propeople. We have come together to create a new digital agency, the largest one in the world with a focus on Drupal, and we are very excited about what this means.

Our combined agency is part of the Intellecta Group (listed on the NASDAQ OMX), and consists of 400+ employees in 19 offices in 11 countries on 4 continents. This is an amazing reach for an organization that is so passionate about Drupal! We’re excited for this unique opportunity to support the Drupal project and the community in ways that would have been impossible prior to the merger.

For example, we’re eager to begin promoting Drupal as a solution for the biggest enterprises on a global scale. Locally, we can influence awareness and excitement in our 19 local communities, helping the next generation find opportunity and excitement in Drupal.

We now have the ability to effect change in a multitude of cultures, in the many diverse communities where each of our offices is located. Where there aren’t yet camps, we can lead their initiation. Where there are Cons, we can help to inspire the next generation of Drupal leaders. We are committed to building up the next generation of talent via our orchestrated public and private training efforts, and look forward to beginning that work at DrupalCon Los Angeles.

So please, stop by booth 300 to say hello and learn more about the new company, and our future within the community. We look forward to seeing all of our friends in the Drupal community, old and new, and are even more excited to discuss how we’ll work with the community for many years to come.

About us.

Blink Reaction and Propeople are joining forces to create a new digital agency built on technology, driven by data, and focused on user experience. The two companies have delivered state-of-the-art Drupal solutions for a variety of the open-source platform’s largest customers. The agencies’ collective portfolio includes brands such as Pfizer, NBC, Stanford University, the City of Copenhagen and General Electric.

Blink Reaction and Propeople are a part of the Intellecta Group. The companies in the group are Blink Reaction LLC, Bysted AB, Bysted A/S, Hilanders AB, Intellecta Corporate AB, ISBIT GAMES AB, Propeople Group ApS, Rewir AB, River Cresco AB, Unreel AB and Wow Events AB. Intellecta AB is listed on NASDAQ OMX Stockholm and employs around 550 people in Sweden, Denmark, Austria, Germany, the Netherlands, the United Kingdom, Bulgaria, Moldova, Ukraine, Brazil, USA, Vietnam and China.

Apr 20 2015
Apr 20

Building great (Drupal) websites can often be made more difficult than it needs to be when your site builders, developers and themers haven't got the same content as each other.

Creating dummy content is fine up to a point: it lets you test your work to make sure you are going in the right direction. At Annertech we use devel generate all the time and it's a fantastic tool, but as it generates content randomly, results can be very varied: a phone number field could end up with 60 integers, for example (fine for the developer, but a hindrance to the themer).

The best solution is to have the final site content prior to beginning development, but this rarely happens. If final content is not available, it's beneficial to at least have meaningful content that is relevant to the site. If you build this content into your development plan, it can be used right up to and post site deployment - saving you time and your clients money in the long run.

The Glossary

Bean: A Drupal module that makes blocks fieldable; it stands for "block entities aren't nodes"
Devel Generate: A sub-module of the devel module, used for generating dummy content, users, menus, etc.
Field collection: A module that allows you to group collections of fields and make them repeatable on Drupal entities such as nodes
Features: A module that allows you to export your configuration to code files
Drush: A Drupal shell command tool - the Swiss Army Knife of Drupal tools

The Problem

I'm using the excellent bean module to build a slideshow; it's got a field collection for slides, and each slide has a number of text fields and an image. We are using features to store the configuration. Each time we rebuild the site we would need to repopulate our bean with our lovely content, which after 2 rebuilds could expose a problem in the form of RSI (Really Sore I) or something along those lines. Not an ideal workflow!

The Solution

So, here we are going to build a small PHP script that we can call via drush, which will populate a bean with some content programmatically.

An array contains the 'content' for our bean slideshow, and we loop through it to create each slide. For each slide we create an instance of the entity we wish to create, in this case a field collection item, and then attach it to our bean entity.
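As a minimal sketch of this approach (the bundle and field names here are hypothetical: a 'slideshow' bean with a 'field_slides' field collection and a 'field_slide_title' text field, not the original script's names):

```php
<?php
// Sketch only: the 'content' array for our bean slideshow.
$slides = array(
  array('title' => 'Welcome to our site'),
  array('title' => 'Our services'),
);

// Create the bean that will host the slideshow.
$bean = bean_create(array('type' => 'slideshow'));
$bean->label = 'Homepage slideshow';
$bean->title = 'Homepage slideshow';
$bean->save();

foreach ($slides as $slide) {
  // Create an instance of the entity we wish to create: a field collection item.
  $item = entity_create('field_collection_item', array('field_name' => 'field_slides'));
  // Attach it to our bean entity.
  $item->setHostEntity('bean', $bean);
  $item->field_slide_title[LANGUAGE_NONE][0]['value'] = $slide['title'];
  $item->save();
}
```

This assumes the bean and field_collection modules are enabled; the real script would add the image handling and error checking mentioned below.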

There is a bit more logic added to deal with image files and this script could be improved with a little bit of error checking, but it's a really simple approach and can be easily adapted to suit a number of scenarios.

To call the script with drush, simply run `drush scr sites/all/scripts/create_bean_slideshow.php`, or whatever the path to the script is. For bonus points you can call this through a shell script or build tool like Phing.

In terms of a decent continuous integration workflow, if you're adding new features to a website, this method means:

  • Streamlined deployments

  • Verbatim content across environments

  • Contexts, rules and other site elements can use your beans.

  • Less chance of human error

  • A repeatable and reusable process

It also cuts down on needless exposure to Really Sore I, and has improved our workflow for new website builds and migrations.

I'd love to hear your thoughts in the comments below on how you've solved similar problems for site building.

Apr 20 2015
Apr 20

If you have a fieldgroup in a node, you may want to hide it on some conditions. Here's how to do that programmatically. At first, we need to preprocess our node like this:

/**
 * Implements hook_preprocess_HOOK().
 */
function MODULE_NAME_preprocess_node(&$variables) {
}

The tricky part starts here, if you'll google for "hide a fieldgroup" you'll get lots of results referencing a usage of field_group_hide_field_groups() like this snippet: http://dropbucket.org/node/130.

While this function works perfectly on forms, it is useless if you apply it in hook_preprocess_node() (at least I couldn't make it work). The problem is that fieldgroup uses the 'field_group_build_pre_render' function, which gets called at the end of the preprocessing call and populates your $variables['content'] with a field group and its children, so you can't alter this in hook_preprocess_node(). But as always in Drupal, there's a workaround. At first let's define some simple logic in our preprocess_node() to determine if we want to hide a field group:

/**
 * Implements hook_preprocess_HOOK().
 */
function MODULE_NAME_preprocess_node(&$variables) {
  if ($variables['uid'] != 1) {
    // You can call this variable any way you want, just put it into $variables['element'] and set it to TRUE.
    $variables['element']['hide_admin_field_group'] = TRUE;
  }
}

Ok, so if user's id is not 1 we want to hide some fantasy 'admin_field_group'. We define logic here and pass result into elements array that is to be used later. As I previously noted, field group uses 'field_group_build_pre_render' to combine fields into a group, so we just need to alter this call in our module:

/**
 * Implements hook_field_group_build_pre_render_alter().
 *
 * Hide admin field group on a node display.
 */
function MODULE_NAME_field_group_build_pre_render_alter(&$element) {
  if (isset($element['hide_admin_field_group']) && isset($element['admin_field_group'])) {
    $element['admin_field_group']['#access'] = FALSE;
  }
}

We check for our condition and, if it is met, set the field group's #access to FALSE, which means: hide the field group. So now you should have a field group hidden on your node display. Of course, this example is the simplest case; you may add dependencies on node view_mode, content type and other conditions, so the sky is the limit here. You can find and copy this snippet at dropbucket: http://dropbucket.org/node/927 I wonder if you have another way of doing this?

Apr 20 2015
Apr 20

Drupal 8 comes with many performance improvements, one of which being that javascript is no longer indiscriminately loaded on every page. This means that for anonymous users, there are many pages where there is no jQuery or even javascript loaded.

Although this is a good thing, sometimes you do need jQuery (for example to use Ajax, in which case you'd also need other scripts). So how do you load these files? By using hook_page_attachments() to #attach your own library to the page.

And if we look at the documentation page for assets, we'll see how we can add our own library. We need to create a my_module.libraries.yml or my_theme.libraries.yml file. And inside, we can add the following:

my_scripts:
  version: VERSION
  js:
    js/scripts.js: {}
  dependencies:
    - core/jquery
    - core/drupal.ajax
    - core/drupal
    - core/drupalSettings
    - core/jquery.once

Where my_scripts will be the name of the library we will reference when attaching.

As you can see, we are not including any javascript or css of our own, we are just making use of the dependency scripts provided by core.

Above, I mentioned the use of hook_page_attachments() as the way we can attach libraries. However, it's not the only one. You can attach to render arrays or even render elements. But here we want to see how we can make sure that anonymous users get the required core javascript files loaded on pages. So we can implement hook_page_attachments() like so:

function my_module_page_attachments(array &$attachments) {
  // Attach only for anonymous users.
  if (\Drupal::currentUser()->isAnonymous()) {
    $attachments['#attached']['library'][] = 'my_module/my_scripts';
  }
}
In this hook, we check if the current user is anonymous and attach the library we created. We reference this with the construct module_name/library_name.
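As a sketch of the render array alternative mentioned above (the element name here is hypothetical), the same library can be attached wherever a specific piece of markup is rendered:

```php
<?php
// Hypothetical render array: attaching the library only where this element renders.
$build['my_widget'] = [
  '#markup' => '<div id="my-widget"></div>',
  '#attached' => [
    'library' => ['my_module/my_scripts'],
  ],
];
```

Attaching at the render array level is usually preferable when the scripts are only needed for that component rather than on every page.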

Hope this helps.

Apr 20 2015
Apr 20

In part of our joint educational initiative with Acquia, we’re back in 2015 with new training sessions at universities to educate students on the benefits and value of Drupal as a leading content management system. Students who are interested in Drupal and open source technologies, have the chance to learn more about Drupal from Vardot and Acquia, and experience first hand, installing and setting up Drupal.

The first event will take place at Princess Sumaya University for Technology (PSUT) on the 29th of April 2015 at 12:30 PM. You can learn more about the event on Acquia’s Training Event Page

Apr 19 2015
Apr 19

Using LINQ (Language Integrated Queries) in Drupal or how to write queries x5 faster

In this article I will show you how to use an enterprise ready technology (LINQ) to write your Drupal queries x5 faster, with improved readabilty, usability and maintainability.

LINQ is basically queries in code, not queries in strings in code. The advantages of having queries integrated in the language are, among others:

  • Strongly typed database schema. That means no more mistakes while writing plain SQL. If you mistype the name of a table or a field, the query simply won't compile at design time. 
  • No magic strings.
  • Intellisense (AKA Autocomplete). Explore the database tables and fields and get hints while writing the query.
  • Faster development.
  • Reusable abstract expressions (lambdas and functions)

One of the big problems with Drupal is that as your requirements and complexity grow, both the database abstraction layer and the query builder start to show their lack of productivity features (it's not just Drupal but the whole PHP ecosystem). But in Drupal this problem is worse because the data model is extremely decoupled, loose and above all complex. That was the price we had to pay to get revisions, translations and multi-value fields, all over an abstracted layer that can provide compatibility for multiple database engines.

Before we go any further, here is what our sample use case for this article looks like in Drupal vs. LINQ. This is a very simple query, but it already looks like a mess in Drupal (imagine what a really complex query looks like; you are better off writing plain SQL).

// Nodes of the inscription type.
$query = db_select('node','inscription');
// Bring in the congreso field.
$query->join('field_data_field_congreso','field_congreso','field_congreso.entity_id = inscription.nid');
// Bring in the congreso activity.
$query->join('node','actividad','field_congreso.field_congreso_nid = actividad.nid AND actividad.type = :node_type_congreso', array(':node_type_congreso' => 'congreso'));
// Bring in the mail field.
$query->join('field_data_field_email','field_email','field_email.entity_id = inscription.nid');
// Bring in the owner.
$query->join('users', 'inscription_owner', 'inscription_owner.uid = inscription.uid');
// Apply the author and activity criteria.
$or = db_or();
$or->condition('inscription.uid', '', 'IS NULL');
$or->condition('inscription.uid', 1, '=');
$query->condition($or);
// The author must have the given email.
$query->condition('field_email.field_email_email', $mail, 'LIKE');
// Bring in the inscription ID.
$query->addField('inscription', 'nid');
$result = $query->execute()->fetchAll();

And this is what that same query looks like in LINQ (it's a GIF animation to show off the power of autocomplete):

No more typing errors, runtime malformed queries or having to look up every 5 seconds what the over-engineered Drupal database schema looks like.

This is all cross platform compatible:

  • The queries are run on EF (Entity Framework) that can plug on top of SQL Server, MySQL, Oracle and others.
  • This all runs on top of the .Net framework, that is already working on Linux thanks to Mono and will be natively supported soon thanks to Microsoft.
  • You can interoperate with the LINQ queries from PHP either by means of the NetPHP libraries, or with Phalanger (both of those links point to Github repositories).

To support this from Drupal we will assume that any additional .Net code is going to be customer specific, so we are going to place it into the customer's customization module.

We are going to create a new "net" folder with two subfolders "bin" and "src". The bin is going to be the place where we will be deploying the compiled binaries, and the "src" will contain the Visual Studio project's and source code.

Because we want to keep the data model binaries separate from the actual code we will be creating 3 projects:

  • One containing the Entity Framework / Linq-To-Sql data classes and any custom database access related logic
  • One containing all our custom code and queries
  • One containing .Net tests

Your solution should end up looking something like this:

The first thing we want to do is to bring the NetPhp technology into our project. This is a bridge that uses PHP's com_dotnet extension to bring the best of .Net into PHP.

We are using the great Composer Manager module to manage module dependencies on external libraries.

Add a composer.json to your module's directory that brings in the NetPhp php libraries:

{
    "require": {
        "drupalonwindows/netphp": "dev-master"
    }
}

Regenerate the global composer.json file either from the UI or running the drush command "composer-json-rebuild".

Navigate into sites/default/composer (or the location you have configured in composer manager) and run a composer install command.
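The two Composer steps above can be recapped on the command line (the path depends on your Composer Manager settings):

```shell
# Regenerate the consolidated composer.json.
drush composer-json-rebuild

# Install the dependencies where Composer Manager keeps them.
cd sites/default/composer
composer install
```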

Download the NetPhp binary library, and drop it inside your PHP installation directory (where the php.exe and php-fastcgi.exe files are located).

For this demo project we are going to put all our queries into the DrupalCustom.DrupalQueries class library project in C#. We need a way to retrieve an instance of that class, and make it aware of the database connection settings.

So we came up with this manager, which allows us to retrieve the DrupalCustom.DrupalQueries instance whenever we need it, populated with the database connection options.

namespace Drupal\mymodule;

use \NetPhp\Core\NetManager;

class QueryManager {
  // @var NetManager $manager
  private static $manager = NULL;
  private static $DrupalQueries = NULL;

  public static function GetManager() {
    global $databases;
    if (static::$DrupalQueries == NULL) {
      static::$manager = new NetManager();
      $dll = DRUPAL_ROOT . '\\' . str_replace('/', '\\', drupal_get_path('module', 'mymodule') . '\\net\\src\\DrupalCustom\\bin\\Debug\\DrupalCustom.dll');
      static::$manager->RegisterAssembly($dll, 'DrupalCustom');
      static::$DrupalQueries = static::$manager->Create('DrupalCustom', 'DrupalCustom.DrupalQueries')->Instantiate();
      $conn = $databases['default']['default'];
      static::$DrupalQueries->InitDataContext($conn['host'], $conn['database'], $conn['username'], $conn['password']);
    }
    return static::$DrupalQueries;
  }
}

This is just a singleton like pattern that will retrieve an instantiated and database aware COM instance of our DrupalCustom.DrupalQueries class.

Let's see how it works with more detail.

To instantiate a .Net class from PHP you need to use a copy of the NetManager class. NetManager takes care of managing assembly and class namespaces:

static::$manager = new NetManager();

Then we must register the location of our binary file in the NetManager and give it an alias. Because we are on a development environment we will be using the /bin/Debug/DrupalCustom.dll file, but on production you might want to change this to another location. Using this location allows us to simply rebuild the .Net project and have everything updated in real time.

$dll = DRUPAL_ROOT . '\\' . str_replace('/', '\\', drupal_get_path('module', 'mymodule') . '\\net\\src\\DrupalCustom\\bin\\Debug\\DrupalCustom.dll');
// Register the binary with DrupalCustom alias
static::$manager->RegisterAssembly($dll, 'DrupalCustom');

We are now ready to make an instance of our .Net class by specifying the assembly alias we registered before, and the Full Qualified Name of the class:

static::$DrupalQueries = static::$manager->Create('DrupalCustom', 'DrupalCustom.DrupalQueries')->Instantiate();

The next step is to make this query manager aware of which database it is going to run the queries on. We've implemented an InitDataContext method that stores the database information internally in the query management class:

$conn = $databases['default']['default'];
static::$DrupalQueries->InitDataContext($conn['host'], $conn['database'], $conn['username'], $conn['password']);

That's it. From now on whenever you want to call a query that is stored in our .Net DrupalQueries class you simply need to call GetManager():

$result = QueryManager::GetManager()->MyCustomQueryOrMethod(param1, param2, param3);

You might be thinking that the five-fold claim of improved query development performance is an exaggeration. That is far from true.

Let's see what kind of Drupal specific magic we can leverage from LINQ besides having strongly typed queries, type hinting, autocomplete and design time validation of the query.

One of LINQ's most powerful feature is the ability to reuse expressions.

Let's say for example that we have the node entity, with a bundle named "inscripcion" and that this bundle has a text based field called "field_cargo". Writing the query in C# looks like this:

var query = (from inscripton in dcs.CurrentContext.node.AsExpandable()
             join cargo in dcs.CurrentContext.field_data_field_cargo on inscripton.nid equals cargo.entity_id
             select cargo.field_cargo_value);

But we know that the data structure of field cargo and the join needed to bring it into the query is going to be always the same. So we are going to encapsulate this into its own reusable expression:

public static Expression<Func<node, IQueryable<field_data_field_cargo>>> load_cargo = (n) =>
    DCScope.Current.field_data_field_cargo.Where((i) => i.entity_id == n.nid).DefaultIfEmpty();

Then the original query can reuse this expression for improved readability and maintainability:

var query = (from inscripton in dcs.CurrentContext.node.AsExpandable()
             from cargo in NodeFields.load_cargo.Invoke(inscripton)
             select cargo.field_cargo_value);

Even better, you can use this same expression to bring the "field_cargo" into the query as many times as you want, and with as many bundles of the same entity as you like. This was just an example; you can bundle into a single expression as much logic as you wish, such as bringing in a field for an entity reference field or any custom conditions you can come up with.
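For instance, a reusable condition expression could look like this (a sketch following the same LinqKit AsExpandable/Invoke pattern as above; the expression and its usage are hypothetical, though node.status and node.type are real Drupal columns):

```csharp
// Hypothetical reusable condition: the node is published.
public static Expression<Func<node, bool>> is_published = (n) => n.status == 1;

// Reused inside any query over the node table:
var query = (from n in dcs.CurrentContext.node.AsExpandable()
             where NodeFields.is_published.Invoke(n) && n.type == "congreso"
             select n.nid);
```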

There are some approaches in Drupal to facilitate querying data, such as the EntityFieldQuery, but it is so limited that it is barely useful for anything but making your code a little bit more readable.

We are already leveraging deployment and development (design-time) gains, but what are the run-time implications of this change? Is there a performance penalty (or gain) for having to use an additional layer of COM + LINQ instead of Drupal's query builder?

We are going to benchmark a sample query under several different scenarios.

This is the query in Drupal:

// Nodes of the inscription type.
$query = db_select('node','inscription');
// Join with the activities (congresos).
$query->join('field_data_field_congreso','field_congreso','field_congreso.entity_id = inscription.nid');
$query->join('node','actividad','field_congreso.field_congreso_nid = actividad.nid AND actividad.type = :node_type_congreso', array(':node_type_congreso' => 'congreso'));
// Apply the author and activity criteria.
$query->condition('inscription.uid', $uid, '=');
$query->condition('actividad.nid', $nid, '=');
$query->fields('inscription', array('nid'));
$result = $query->execute()->fetchAll();
$result = $query->execute()->fetchAll();

And this is the query in C#:

var result = (from inscription in dcs.CurrentContext.node.AsExpandable()
              join field_congreso in dcs.CurrentContext.field_data_field_congreso on inscription.nid equals field_congreso.entity_id
              join congreso in dcs.CurrentContext.node on field_congreso.field_congreso_nid equals congreso.nid
              where inscription.type == "inscripcion"
              && inscription.uid == uid
              && congreso.nid == nid
              select inscription.nid);

return result.FirstOrDefault();

We have verified that both queries output a similar SQL statement and that they are 100% equivalent.

We tested running the query 200 times in a tight loop in 3 different scenarios with the following results:

                                 Maximum (s)   Average (s)
  Drupal Query Builder           0.61          0.57
  LINQ over Entity Framework 6   1.23          1.12
  LINQ over Linq-To-Sql          0.57          0.499

Nothing really surprising: Entity Framework is a big, bloated ORM, and even with compiled queries (which come by default in EF6) it is 100% slower than Drupal's query builder or Linq-To-Sql. This being Microsoft's flagship ORM at the moment, we don't understand these results; there is a chance that something is misconfigured, but we were not able to find it.

We are also aware that EF6's first compilation is slower (it then statically stores the compiled query in the process), and since PHP is a stateless architecture, looping 200 times over a query is not a real-world scenario. EF6 is already out of this game.

We are going to run an ApacheBench test over a page that bootstraps Drupal and runs a single query 1,000 times without concurrency. This is a more realistic scenario, because the COM object will be instantiated on every request instead of being reused in a loop.

define('DRUPAL_ROOT', getcwd());
require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
// ----------- TEST 1 LINQ
$queries = Drupal\prevencionintegral\QueryManager::GetManager();
$uid1 = $queries->InscripcionDeUsuario(48577, 24778);
$uid1 = Drupal\prevencionintegral\Managers\ManagerCongresoInscriptions::InscripcionDeUsuario(48577, 24778);

The results showed no appreciable difference in throughput, even taking into account that the LINQ version has much more overhead (instantiating a COM object, communicating through COM, etc.) compared to Drupal's query builder.

Our conclusion is that, from a performance standpoint, there is no penalty in moving queries out of Drupal and into LINQ; you actually get a slight performance edge.

Apr 18 2015
Apr 18

“Probatura” (Spanish word): Test, trial, experiment.

If you’re a web developer, chances are that you’ve come across Emmet before. If you haven’t, chances are that you’re wasting time whenever you get to write some html in your favorite code editor or IDE. You should really check it out and see the options on the project page, but let me show you a quick example of how Emmet can help you. Let’s say you’re prototyping a simple page and know the markup you need. It would look like this:

<div>
  <section id="main">
    <article class="blog-post">
      <div class="post-title"></div>
      <div class="post-content"></div>
    </article>
    <article class="blog-post">
      <div class="post-title"></div>
      <div class="post-content"></div>
    </article>
    <article class="blog-post">
      <div class="post-title"></div>
      <div class="post-content"></div>
    </article>
    <article class="blog-post">
      <div class="post-title"></div>
      <div class="post-content"></div>
    </article>
    <article class="blog-post">
      <div class="post-title"></div>
      <div class="post-content"></div>
    </article>
  </section>
</div>

Easy, right? Because you’re a cool front-ender with great and fancy text editors like Sublime or Atom, and fancy features like tag autocompletion, attribute suggestions, etc… surely just a few seconds to write. For you too, phpstorm guy, because nobody beats your all-terrain IDE, capable of even preparing pop-corn for you while you’re churning out the most beautiful PHP lines ever written with, you know… a real programming tool. Not to mention the vim players out there. You really are on another level of the game. HTML ain’t nothing for you, is it? Well, whatever built-in support you have for writing html, Emmet’s going to be better.

div>section#main>article.blog-post*5>div.post-title+div.post-content

With Emmet, you’d type this, hit the tab key on your keyboard, and get exactly the markup mentioned. Awesome, isn’t it?


Well, guess what: Emmet's available as a plugin for pretty much every text editor or IDE out there (in fact it's bundled by default in some of them), so you can start using it right now without changing a single thing in your toolset. Anyway, this post is not about all the goodness in Emmet. The project page does a great job at that.

Dirty coding on a Friday night

Motivated by some of the inconveniences of using Panels & mini panels to arrange elements in Drupal pages, I’ve recently started to think about the idea of having a ctools style plugin that allows me to specify an emmet-like string of markup in the settings form, that is parsed upon submission, and used later on to render the pane contents accordingly. This would be a fairly simple thing to do for normal content panes, since there are always two elements to render (pane title and pane content), so there’s no need to account for multiple combinations of elements. In other words: I don’t need a parser that supports all the features that Emmet has.

Since I had nothing to do, couldn't get to sleep thanks to a beautiful neck pain, and was reasonably inspired, I decided to grab a beer and start fighting with regular expressions to try to come up with a simple proof-of-concept Emmet parser in PHP. Oh, and I suck at regular expressions. I suck way more than I'd happily admit in public without feeling bad about it, so it was a great chance to spend some time getting better at them. The results, after a couple of hours feeling pretty miserable about my regex masonry, are here:

Emmet-parser in PHP:

function wrapper_expand($markup, $content = '') {
  // Split string elements. Allow 1-level grouping.
  $wrapper_elements = preg_split('/>(?!(\w+)\))/', $markup, NULL, PREG_SPLIT_NO_EMPTY);
  // Wrap contents.
  $output = wrapper_process_elements($wrapper_elements, $content);
  return $output;
}

function wrapper_process_elements($elements_array, $content) {
  $output = '';
  $current_element = array_shift($elements_array);
  $processed_element = process_emmet_element($current_element);
  $output .= $processed_element['prefix'];
  if (empty($elements_array)) {
    $output .= $content;
  }
  else {
    $output .= wrapper_process_elements($elements_array, $content);
  }
  $output .= $processed_element['suffix'];
  return $output;
}

function process_emmet_element($element) {
  $processed_element = array();
  preg_match('/\w*/', $element, $matches);
  $element_tag = reset($matches);

  // Check for ids. (Makes no sense to accept multiple, but it's still supported.)
  preg_match_all('/\#[a-z-A-Z0-9]*/', $element, $ids);
  $ids = reset($ids);
  array_walk($ids, '_remove_prefix_symbol');

  // Check for classes.
  preg_match_all('/\.[a-z-A-Z0-9]*/', $element, $res);
  $classes = reset($res);
  array_walk($classes, '_remove_prefix_symbol');

  // Check for attributes.
  preg_match_all('/\[([^\w-\]]|[a-zA-Z0-9])*\]/', $element, $attr_strings);
  $attr_strings = reset($attr_strings);
  array_walk($attr_strings, '_remove_prefix_symbol');
  array_walk($attr_strings, '_remove_suffix_symbol');

  // foreach ($attr_strings as $attr_string) {
  //   $attrs = preg_split('/\s(?!(\w|\!|\?|\@)+\')/', $attr_string);
  // }

  // TODO: Implement support for groupings?
  if (substr($element, 0, 1) == '(') {
    // $processed_element .= wrapper_expand($element);
  }

  $processed_element['prefix'] = '<' . $element_tag;
  if ($ids) {
    $processed_element['prefix'] .= ' id="' . implode(' ', $ids) . '"';
  }
  if ($classes) {
    $processed_element['prefix'] .= ' class="' . implode(' ', $classes) . '"';
  }
  if ($attr_strings) {
    $processed_element['prefix'] .= ' ' . implode(' ', $attr_strings);
  }
  $processed_element['prefix'] .= '>';
  $processed_element['suffix'] = '</' . $element_tag . '>';

  return $processed_element;
}

function _remove_prefix_symbol(&$value, $key) {
  $value = substr($value, 1);
}

function _remove_suffix_symbol(&$value, $key) {
  $value = substr($value, 0, strlen($value) - 1);
}

Beautiful, isn’t it? Before judging the code, remember I was mainly fooling around with regexes and processing the parsed elements as a proof of concept. The code needs cleaning up, refactoring when the client has budget for it (oops, how unprofessional!) and commenting.

Testing it

So, with that code in place, I’d just need to call wrapper_expand() with the emmet-like string and, optionally, some content to wrap within the parsed string. While the content is not needed at all, keep in mind that the ultimate goal is to wrap some elements from the Panels UI. Let’s try it:

Testing the emmet-parser:

$output = wrapper_expand("div>section>div.class>div#id.class>section[title='Hello world!' colspan=3]>final", "Papaoutai rocks");


Emmet-parser results:

<div>
  <section>
    <div class="class">
      <div id="id" class="class">
        <section title="Hello world!" colspan="3">
          <final>Papaoutai rocks!</final>
        </section>
      </div>
    </div>
  </section>
</div>

 Yay, it works!

Awesome! Is it contributed already?

Of course it isn’t. While it works pretty well for the most basic stuff (nesting elements, specifying classes, ids, and arbitrary attributes), right now the code is just a bunch of functions with no real use from the Panels UI. Aside from cleaning that up, I still have to create a ctools style plugin that takes care of showing the right fields on the settings form, processing them, and storing the processed markup in a suitable format (to avoid processing at display time).

Really not a big deal if the style plugin is going to be used just for panel panes. However, my goal is to make it usable for panel regions, which means I need to finish support for the grouping feature and come up with a particular character not used by Emmet ("%", for example) that allows the parser to inject contents into arbitrary sections of the generated markup. With that in place, I think this could be a pretty damn good alternative to the existing ways of defining markup from Drupal’s UI, and it would provide a good foundation to extend this to other areas of the site, like field formatters or views style plugins.

Any thoughts on this from Drupal developers will be greatly appreciated, so feel free to leave your comments below!

Apr 17 2015
Apr 17

In this episode we cover an overview of the Drupal 7 Views module. The Drupal Views module is probably the most popular Drupal module and is installed in almost every Drupal 7 website I build. It’s so popular in fact that it’s included in Drupal 8 by default.

The Views module allows you to easily build and format lists of content on your Drupal 7 site. If you need to build a simple list of Nodes, Views can do that. If you need to build a table listing, Views can do that too. In this video, we will go through an introduction and overview of Views, as well as a few example views. You will learn about the different types of content you can display with Views, different display settings, and other Views options.

This video is built to be an introduction for beginners and newcomers to Drupal to teach the basics of the Drupal 7 Views module. In this video you will create two views. The first will be a page view that displays a list of articles on your Drupal website. The second will be a block view that shows the titles of the last three articles on your Drupal website. Using the skills you learn in this video, you will be ready to tackle more complicated problems that can be solved with views.

Apr 17 2015
Apr 17

The overall aim was to create a site that allowed BHoF to tell compelling narratives about the vast number of artifacts in their collection.

By providing meaningful connections between different objects and stories, they hoped to increase user engagement and encourage a deeper journey through the site -- and hoped to inspire more people to visit the Hall of Fame itself as well.

Digital Asset Management System (DAMS)

The BHoF has a very large archive, and is currently in the process of digitizing it. Part of the brief for this project was to provide a Digital Asset Management System (DAMS) for long-term storage of these digitized assets.

After some investigation, the DAMS BHoF and Cogapp selected was Islandora. This combines Fedora Repository -- a best-of-breed digital object store -- with a management interface that is fully integrated into Drupal. Using Islandora allows the BHoF to administrate digital assets and web content from within the system, and stores those assets according to archival best practices.

Islandora gives the National Baseball Hall of Fame and Museum the opportunity to store and publish over 100 years of history in images.

Deliver content (Paragraphs)

With such a vast amount of content, BHoF needed a less conventional way to capture and present their pages. A typical approach would have involved numerous different content types and fields, with large amounts of body content entered to a WYSIWYG editor. Cogapp instead chose to use Paragraphs, another great contrib module that provides a better way to deliver content.

What this enabled us to do was create pages that can be built up out of a variety of smaller structures (the paragraph types or *bundles*), each of which has a predefined set of fields. For example, a page might be built out of a mixture of paragraphs containing generic items like text and images, as well as more particular items like a Hall of Fame inductee.

Building pages in this way gives them a freeform feel, but the data remains structured for full control over presentation. To this end, each paragraph item can use its own theming template, which is defined as paragraphs-item--PARAGRAPH-ITEM-NAME.tpl.php. More generally, all paragraphs are Drupal entities, so they can be administrated, manipulated, and presented using all the power that this allows. When constructing a page using paragraphs, users are presented with the option to add a paragraph of a particular type:

Paragraph of a particular type
This example shows instances of two paragraph types: a subtitle and a reference to a Hall of Famer's profile.

 a subtitle and a reference to a Hall of Famer's profile

In summary, the Paragraphs module offers us:

  • An extendable content area
  • The ability to add complex (but structured) content
  • Fully rearrangeable layout using drag-and-drop
  • Finer control over output than just using WYSIWYG

Reach content (Entity references)

Drupal has excellent content support, with associations between content defined using the entity reference module. By combining entity reference fields with paragraphs, we created a way to pull in content from inside paragraphs. This allows us to:

  • Reach content regardless of source (locally added images vs DAMS assets)
  • Support user onward journeys (reach other content)
  • Support social media sharing (assets and content)

A major advantage of our approach was the use of individual preprocessors for each content type. We used this to create a seamless content-viewing experience for our users. By individually preprocessing each content type we are able to:

  • Interchangeably use locally hosted images and DAMS assets:
    • As hero images
    • As sidebar images
    • As full size images on content
    • As slideshows
  • Have finer control on content (e.g. automatically generate an inline table of contents)
  • Create different Twitter cards based on content type
  • Integrate other sources of content from the CMS.

Present content (Entity view modes)

View modes provide entities with a mechanism to select which fields to display, and a way to display them, through templates. We use view modes to gain even finer control over the HTML output of the referenced entities, in much the same way we did with the paragraph type templates. Drupal comes with two default view modes per content type, 'full' and 'teaser'. To add additional view modes we used hook_entity_info_alter.
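As a rough sketch (the 'mymodule' name and the 'slideshow' view mode are made-up examples, not names from the project), registering an extra view mode through hook_entity_info_alter() looks something like this:

```php
/**
 * Implements hook_entity_info_alter().
 *
 * 'mymodule' and the 'slideshow' view mode are hypothetical names.
 */
function mymodule_entity_info_alter(&$entity_info) {
  // Register an extra view mode for nodes; it then appears under the
  // content type's 'Manage Display' tab.
  $entity_info['node']['view modes']['slideshow'] = array(
    'label' => 'Slideshow',
    'custom settings' => TRUE,
  );
}
```

Once the cache is cleared, the new view mode can be themed with its own template file like any other.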

View modes are administrated from the 'Manage Display' tab of a content type:

View modes

Each view mode:

  • Has its own display fields
  • Has its own template file
  • Applies its own mode-specific logic (e.g. for slideshows, full images, etc)
  • Allows front-end work with templates rather than the Drupal theming system.

Engage users (Extra fields)

The ability to share pages over social media with Drupal is straightforward, and there are several proven modules for doing this. In this case, however, we had slightly different requirements.

We wanted to engage users and facilitate discussions through social media integration. When a user is visiting a page in BHoF she is not just viewing a single item: she is reading text, seeing images that may be coming from the DAMS, interacting with slideshows, viewing Hall of Famers' profiles, and so on.

We wanted to provide a way to share the page as well as any other asset appearing on that page. Our approach was to use hook_field_extra_fields to add extra (pseudo) fields. Next, hook_node_view is used to create the logic within each extra field. In this scenario, each extra field contains a social media share link (e.g. https://developers.facebook.com/docs/plugins/share-button).
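A minimal sketch of the extra-field declaration, assuming a hypothetical module name, bundle, and field key (none of these names are from the actual project):

```php
/**
 * Implements hook_field_extra_fields().
 *
 * 'mymodule', the 'article' bundle and the 'share_links' key are
 * hypothetical examples.
 */
function mymodule_field_extra_fields() {
  $extra = array();
  // Declare a pseudo field; hook_node_view() would later fill it with the
  // actual share link markup.
  $extra['node']['article']['display']['share_links'] = array(
    'label' => 'Share links',
    'description' => 'Social media share buttons.',
    'weight' => 10,
  );
  return $extra;
}
```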

Extra fields act like any other field in an entity, with their own display settings. This allowed us to re-use our previous approach, making share links available through view mode templates for each individual item. Also, preprocessing each content type made it possible to provide relevant information on different Twitter cards depending on the asset being shared (e.g. a Twitter large-image card or an article card).

Apr 17 2015
Apr 17

Angie Byron is Director of Community Development at Acquia. For this interview, during the final day of DrupalCon Amsterdam, we were able to find an empty auditorium. Alas, filming with my brand-new GoPro camera, we got off to a topsy-turvy start...

RONNIE RAY: I’ve had you upside down.

ANGIE BYRON: Oh hahaha!

I go by Angie Byron or webchick, and more people know me as webchick than Angie Byron.

Today, what I love to do at DrupalCons, on the last day of the sprint days, is just walk around all the tables and see what everyone is working on, cause there’s hundreds of people here and they’re all sort of scratching their own itches on everything from Drupal-dot-org to, like, what is the newest coolest content staging thing gonna be?, to how are we going to get Drupal 8 done?

And everybody working together and collaborating with people they don’t get to see all the time, it’s a lot of fun for me.

I feel like we made a lot of really great decisions about the Drupal 8 release management stuff here that we’ll be able to put into practice, and help try and focus efforts on getting the critical issues resolved, trying to clean up the loose ends that we still have, and getting the release out the door faster.

And the other thing I’m going to work on for the next month is something called Drupal Module Upgrader, which is the script that can help contrib modules port their modules to Drupal 8. It automates a lot of that task.

Now that Beta is here it’s a great time for people to update their modules, so I want to work on tools to help facilitate that.

RR: What are you reading, besides books on Drupal?

AB: Not much. Although I love reading kids books, because I have a daughter who’s 16 months now and she loves to be read to. So my latest books I’ve been reading are Where is the Green Sheep? and Go, Dog, Go! and a bunch of Richard Scarry stuff and things like that because she loves to know what everything’s called. She loves books.

There’s a Dr. Seuss book called Oh, The Places You’ll Go! That book is dark, man, that is like a dark book. It’s entertaining. I remember it from when I was a kid but I don’t remember it like that!

RR: Music?

AB: I listen to a lot of old music cause I’m one of those curmudgeonly people who thinks the best music was already made. So, like I’ve been having like a ‘70s rock, ‘80s pop, ‘90s punk rock, like – that’s sort of what’s in my chain all the time. Hair metal, junk like that. How to relive my kid-age stuff.

I think the community has grown to such an enormous size now that one thing I wonder about (not really worry about, but am curious about) is whether we can still maintain that tight-knit community feel that we had back when I started, when we were 70 people at a DrupalCon, not the 2,500 people we have now.

It’s cool to kind of walk around DrupalCon, especially on a sprint day, especially because I feel we have retained that – and people are finding people to connect with and cool things to work on and stuff like that.

I think it’s something we all need to collectively be intentional about is, you know, it’s not just enough that Drupal is just a great software project, it’s also about the people and trying to maintain that welcome feeling – that got us all in the door – for generations to come.

So that’s something I would leave as a parting note.

Apr 17 2015
Apr 17

This session was presented at Bay Area Drupal Camp, San Diego Drupal Camp, Phoenix Drupal Camp, and Stanford Drupal Camp.

Have you written a few simple modules for Drupal 7, and are a little bit nervous about the changes you'll be facing in Drupal 8?

Latest slides can be seen at: mrf.github.io/drupal8modules

Working code samples are at: github.com/mrf/drupal8examples

The topics covered should be easily understood by anyone who is comfortable writing PHP. If you are still learning the basics of writing modules things might move a bit too quickly, but you should come anyway! I'll be covering the concepts introduced to the Drupal world via Drupal 8 that you will be using most frequently. The focus will be on what you NEED to know to write functional modules. I'll do my best to stay away from the cool possibilities and elegant architecture that we get with Drupal 8, but sometimes the excitement is too much for me to handle. Following a quick review of object-oriented basics to get our bearings, I will dive into an overview of: Configuration Management, Routes, Permissions, Annotations, and Plugins. Let's upgrade our brains so we can feel comfortable writing simple Drupal 8 modules from scratch.

Pull requests welcome!

Apr 17 2015
Apr 17

Recently we had the task of loading data from a content type with 350 fields. Each node is a University’s enrollment data for one year by major, gender, minority, and a number of other categories. CSV exports of this data obviously became problematic. Even before we got to 350 fields, with the overhead of the Views module we would hit PHP timeouts when exporting all the nodes. If you’re not familiar with Drupal's database structure, each field’s data is stored in a table named ‘field_data_FIELDNAME’. Loading an entire node means JOINing the node table by entity_id with each related field table. When a node only has a handful of fields, those JOINs work fine, but at 350 fields the query runs slow.
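To get a feel for why this blows up, here is a small plain-PHP sketch (with hypothetical field names, not the real content type's fields) of how one JOIN per field accumulates when assembling such a query:

```php
// Each Drupal 7 field lives in its own 'field_data_FIELDNAME' table, so
// fully loading a node costs one JOIN per field. The field names below are
// hypothetical examples.
function build_node_load_sql(array $field_names) {
  $sql = 'SELECT n.nid FROM node n';
  foreach ($field_names as $name) {
    $alias = 't_' . $name;
    $sql .= " LEFT JOIN field_data_$name $alias ON $alias.entity_id = n.nid";
  }
  return $sql;
}

// With 350 fields this statement would contain 350 JOINs.
$sql = build_node_load_sql(array('field_major', 'field_gender', 'field_minority'));
```

Three fields produce a harmless query; 350 produce one that the database struggles to plan and execute.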

On this site we’re also plotting some of the data using highcharts.js. We really hit a wall when trying to generate aggregate data to plot alongside a single university's data. This meant loading every node of this content type to calculate the averages, which turned our slow query into a very slow query. We even hit a limit on the number of database JOINs that can be done at one time.

In retrospect this is a perfect case for a custom entity, but we already had thousands of nodes in the existing content type. Migrating them and implementing a custom entity was no longer a good use of time. Instead, we added a custom table that keeps all the single value fields in a serialized string.

The table gets defined with a hook_schema in our module's .install file:

function ncwit_charts_schema() {
  $schema['ncwit_charts_inst_data'] = array(
    'description' => 'Table for serialized institution data.',
    'fields' => array(
      'nid' => array(
        'type' => 'int',
        'default' => 0,
        'not null' => TRUE,
        'description' => 'node id for this row',
      ),
      'tid' => array(
        'type' => 'int',
        'default' => 0,
        'not null' => TRUE,
        'description' => 'institution term id that this data belongs to',
      ),
      'year' => array(
        'type' => 'int',
        'default' => 0,
        'not null' => TRUE,
        'description' => 'school year for this node',
      ),
      'data' => array(
        'type' => 'blob',
        'not null' => FALSE,
        'size' => 'big',
        'serialize' => TRUE,
        'description' => 'A serialized array of name value pairs that store the field data for a survey data node.',
      ),
    ),
    'primary key' => array('nid'),
  );
  return $schema;
}

The most important part of the array is 'data', a 'blob' column with 'size' => 'big' (a LONGBLOB in MySQL), which holds the serialized field values. Not shown is another array that creates a table for our aggregate data.

When a new node is saved hook_node_insert() is invoked, and when an existing node is saved hook_node_update() fires, so we implement both hooks and funnel them into one helper.

/**
 * Implements hook_node_insert().
 *
 * Save serialized field data to the inst_data table for a new node.
 * For a new node, we have to use this hook.
 */
function ncwit_charts_node_insert($node) {
  ncwit_charts_serialize_save($node);
}

/**
 * Implements hook_node_update().
 *
 * Save serialized field data to the inst_data table.
 */
function ncwit_charts_node_update($node) {
  if (isset($node->nid)) {
    ncwit_charts_serialize_save($node);
  }
  else {
    // We're also calling this function from hook_node_insert,
    // because hook_node_update doesn't have the nid if it is a new node.
  }
}

Now we actually process the fields to be serialized and stored. This section will vary greatly depending on your fields.

function ncwit_charts_serialize_save($node) {
  // Save each value as a simple key => value item.
  $data = array();
  foreach ($node as $key => $value) {
    $data[$key] = $value[LANGUAGE_NONE][0]['value'];
  }
  $fields = array();
  $fields['nid'] = $node->nid;
  $fields['tid'] = $node->field_institution_term[LANGUAGE_NONE][0]['tid'];
  $fields['year'] = $node->field_school_year[LANGUAGE_NONE][0]['value'];
  $fields['data'] = serialize($data);
  db_merge('ncwit_charts_inst_data')
    ->key(array(
      'nid' => $node->nid,
    ))
    ->fields($fields)
    ->execute();
}

When a node is deleted we have some clean-up to do.

/**
 * Implements hook_node_delete().
 *
 * Also remove the node's data from inst_data.
 */
function ncwit_charts_node_delete($node) {
  if ($node->type !== 'data_survey') {
    // Only care about data_survey nodes.
    return;
  }
  $query = db_select('ncwit_charts_inst_data', 'i');
  $query->fields('i')->condition('i.nid', $node->nid);
  $result = $query->execute();
  $data = $result->fetchAssoc();
  if ($data) {
    db_delete('ncwit_charts_inst_data')->condition('nid', $node->nid)->execute();
  }
}

When first installed or when fields get changed, we added a batch process that re-saves the serialized strings. Aggregate data is calculated during cron and saved in another table. Rather than loading every node with JOINs, the data comes from a simple query of this custom table.

Pulling the data out of the database and calling unserialize() gives us a simple associative array of the data. To pass this data to highcharts.js we have a callback defined that returns the arrays encoded as JSON. Obviously this gets more complicated when dealing with multiple languages or multi-value fields. But in our case almost everything is a simple integer.
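As a minimal sketch of that round-trip (the field names and values here are invented for illustration, not the site's real fields):

```php
// Hypothetical sketch: the serialized 'data' column is unserialized into a
// flat associative array and handed to highcharts.js as JSON.
function inst_data_to_json($serialized_blob) {
  $data = unserialize($serialized_blob);
  return json_encode($data);
}

// Simulate what would normally come out of the custom table.
$blob = serialize(array('field_total_enrollment' => 120, 'field_cs_majors' => 35));
$json = inst_data_to_json($blob);
```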

This process of caching our nodes as serialized data changed our loading speed from painfully slow to almost instant. If you run into similar challenges, hopefully this approach will help you too.

Apr 17 2015
Apr 17

Why should I mix .Net and PHP?

The PHP ecosystem is half broken: badly coded snippets and a lack of enterprise-level libraries. A big portion of PHP developers are amateurs with no real software background. Yet sometimes users and companies are driven by market forces to use PHP-based software such as Drupal, WordPress, etc. But PHP has come a long way since what it was at the start of the 2000s, and now has almost any feature you would expect from a modern programming language (sort of). There are also productivity-oriented development tools, like Visual Studio integration (PHP Tools), and more tools keep coming into the market that have boosted PHP development productivity. Automated testing, continuous integration, package distribution, and many other practices have been consolidating over the last few years.

Being able to consume .Net code means you don't get stuck with the half-baked parts of the PHP ecosystem. Whenever you need an enterprise-level library (PDF generation, Word and Excel manipulation, etc.) that is only available in .Net, it can be used seamlessly. You can even enjoy faster development by using enterprise-level development tools such as Visual Studio, or build more solid software by moving part of your code to the .Net framework.

Understanding how it works

The NetPhp libraries are split into two main components:

Historically, consuming .Net assemblies from PHP has been done by means of the com_dotnet extension (included in PHP core). The problem with using the extension as-is is that there are a large number of issues and a lot of rigidity when consuming your assemblies. For example, you are limited to assemblies compiled against .Net 3.5 or lower, and any binary you want to consume needs to be decorated as COM Visible (so you need to recompile and interface anything you plan to use).

With NetPhp all those restrictions have been overcome, allowing you to consume any type of compiled .Net library (no matter what framework version) and giving you flexibility over how the type is loaded: you can even specify the exact location of the dynamic link library so it is loaded at runtime.

The NetPhp classes use the com_dotnet php extension to instantiate some classes inside the compiled NetPhp binary that allow us to interact with any .Net library. You can even mix .Net runtime versions in the same process.

How to install the NetPhp binary

The NetPhp binary is a COM Visible dynamic link library (.dll) that you must register (system wide) as a COM object. To do so, use the regasm tool. You can register a COM object with regasm without binding it to a specific file in your file system (though you could).

The regasm tool is bundled with the .Net framework and you will find a copy in the location:


I recommend the following steps to register the binary:

  • Copy the binary file to your PHP installation directory (where the php.exe and php-fastcgi.exe are located) such as c:\php\php5_6\netutilities.dll
  • Run a simple regasm command with no parameters: "regasm 'c:\php\php5_6\netutilities.dll'" (remember to use an elevated command prompt)

To ensure that the type has been properly registered, use OleView (it comes with the Windows SDK included with Visual Studio): just type "oleview" in your Visual Studio Developer Console. You can also download a copy of OleView (x86 and x64) at the end of this article.

Remember that Windows is split between x86 and x64. Because you are probably using the x86 version of PHP, you must register the assembly using the x86 version of regasm, and you will only see it with the x86 version of OleView.

Once in OleView open the .Net category:

And look for the netutilities.* COM objects:

Configure your PHP application to use the NetPHP libraries

The next step is to bring the NetPhp PHP library into your project.

The library is available on Github and there is a composer package for it.

You can download the files directly from Github and drop them into your project, but you will need to manually set up an autoloader.

The recommended way of including it into your project is to use composer.

Add this to your composer.json:

    "require": {
        "drupalonwindows/netphp": "dev-master"

And run the composer install command.

Your application is now ready to consume any .Net binary, be it a custom library of yours or any of the types inside the .Net framework.

Using NetPhp: Importing types and libraries

All the magic in NetPhp revolves around the MagicWrapper class. This class must be made aware of a .Net type (a type can be any kind of .Net type: a regular class, an enum, etc.) in order to be instantiated.

In order to obtain an instance of the MagicWrapper you need to use the NetManager class. The NetManager class is used to register your assemblies and type aliases, and then to retrieve a MagicWrapper instance for any type inside any of the registered assemblies.

The NetManager class has 3 methods:

  • RegisterAssembly($assemblyPath, $alias): Register an assembly
  • RegisterClass($assemblyName, $class, $alias): Register an alias for a type inside an assembly
  • Create($assembly, $class): Retrieve a MagicWrapper that is aware of the specified type in the specified assembly
RegisterAssembly($assemblyPath, $alias)

Use RegisterAssembly to make the manager aware of a specific assembly and assign it an alias for future reference. You can use a strongly typed name or the path to a .dll file.

For example, to register one of the .Net framework assemblies so you can use any of the types defined inside the mscorlib assembly:

$manager = new \NetPhp\Core\NetManager();
$manager->RegisterAssembly('mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089', 'mscorlib');

You can use your custom compiled DLL (you can target any .Net framework version and there is no need to make it COM visible so you can use any of the already compiled libraries that you have):

$manager = new \NetPhp\Core\NetManager();
$manager->RegisterAssembly('d:\\mypath\\mybinary.dll', 'MyLibrary');

You can give aliases to any fully qualified type name in an assembly with the RegisterClass method, so that you can later use short names to access those types:

$manager->RegisterClass('mscorlib', 'System.Collections.ArrayList', 'ArrayList');

Once an assembly is registered, you can retrieve a MagicWrapper with the Create() method:

$m = $manager->Create('mscorlib', 'ArrayList');

Or without using the class alias:

$m = $manager->Create('mscorlib', 'System.Collections.ArrayList');

You do not need to register an alias to create a MagicWrapper for a type, but you must register an assembly.

Using NetPhp: The MagicWrapper/NetProxy

After using the Create() method of a NetManager you will be served an instance of NetProxy. We will go into detail later as to what the MagicWrapper does, but you should know that the NetProxy is simply a convenience wrapper over the MagicWrapper, and that you will usually not operate directly on the internal MagicWrapper instance of the NetProxy.

From now on we will use NetProxy and MagicWrapper interchangeably.

You can retrieve the internal MagicWrapper at any time from a NetProxy instance using the GetWrapper() method.

A MagicWrapper by itself does not represent the instance of an object; rather, it represents awareness of a .Net type. It can hold .Net objects inside it and perform operations on them, such as calling methods, accessing properties, or instantiating classes.

The NetProxy has the following public methods:

  • Get($host): Used to obtain an instance of NetProxy, because it cannot be instantiated directly. $host must always be an instance of MagicWrapper.
  • GetType(): Returns a string representation of the .Net type that is being wrapped.
  • Val(): Gives you the object that is being wrapped by the internal MagicWrapper, if the type can be converted to a PHP native type it will do so, otherwise you will get the COM instance (quite useless).
  • UnPack(): Returns the MagicWrapper COM instance. If your assembly is COM Visible you can directly interact with the returned COM object for better performance.
  • Instantiate(...$args): Create an instance of the type that the MagicWrapper is aware of. The instance is stored internally by the magic wrapper and you should operate on the NetProxy as if it was the .Net instance itself. Because constructors can be overloaded, you cannot pass NULL arguments to a constructor because type inference cannot be done on NULL objects.
  • Enum($value): Similar to Instantiate(), but used for enums. Stores internally a reference to an Enum value that can be used to call methods that receive an Enum parameter.
  • IsNull(): Check if the internal instance is null. This will always return true before you call Instantiate() or Enum().
  • AsIterator(): If the internal .Net type is a collection, this method will return an instance of NetProxyCollection that you can use from PHP as if it was a regular array to perform enumerations.

Let's see all this in action with an example:

$manager = new \NetPhp\Core\NetManager();
$manager->RegisterAssembly('mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089', 'mscorlib');
$manager->RegisterClass('mscorlib', 'System.Collections.ArrayList', 'ArrayList');
$list = $manager->Create('mscorlib', 'ArrayList')->Instantiate();

// Retrieve a NetProxyCollection instance wrapper.
$list = $list->AsIterator();

// Check the .Net type.
$net_type = $list->GetType();
$this->assertEquals('System.Collections.ArrayList', $net_type);

// Populate the ArrayList contents.
for ($x = 0; $x < 200; $x++) {
  $list->Add("Object {$x}");
}

// Make sure that count() works.
$this->assertEquals(200, count($list));

// Iterate over the collection.
foreach ($list as $item) {
  echo $item->Val();
}

We have seen what public methods the NetProxy exposes, but we have not yet mentioned that you can call any method or property of the internal .Net type directly on the NetProxy.

In the previous example we called the Add() method on the NetProxyCollection instance, but this method really belongs to the System.Collections.ArrayList class.

When calling methods and accessing properties, return types that are compatible with PHP native types will be returned in their PHP representations; otherwise you will get an instance of NetProxy wrapped over the .Net instance.

The following .Net types can be converted directly to PHP types:

  • System.String
  • System.Int32
  • System.Double
  • System.Boolean
  • System.Byte
  • System.Decimal
  • System.Char
  • System.Single
  • System.Void

When using the NetProxyCollection as an iterator, the elements will also be retrieved with their native types when available.

Apr 16 2015
Apr 16

Previously, I posted my thoughts on the Acquia Certified Developer - Back End Specialist exam as well as my thoughts on the Certified Developer exam. To round out the trifecta of developer-oriented exams, I took the Front End Specialist exam this morning, and am posting some observations for those interested in taking the exam.

Acquia Certified Developer - Front End Specialist badge

My Theming Background

I started my Drupal journey working on design/theme-related work, and the first few Drupal themes I built were in the Drupal 5 days (I inherited some 4.7 sites, but I only really started learning how Drupal's front end worked in Drupal 5+). Luckily for me, a lot of the basics have remained the same (or at least similar) from 5-7.

For the past couple years, though, I have shied away from front end work, only doing as much as I need to keep building out features on sites like Hosted Apache Solr and Server Check.in, and making all my older Drupal sites responsive (and sometimes, mobile-first) to avoid penalization in Google's search rankings... and to build a more usable web :)

Exam Content

A lot of the questions on the exam had to do with things like properly adding javascript and CSS resources (both internal to your theme and from external sources), setting up theme regions, managing templates, and working with theme hooks, the render API, and preprocessors.

In terms of general styling/design-related content, there were few questions on actual CSS and jQuery coding standards or best practices. I only remember a couple questions that touched on breakpoints, mobile-first design, or responsive/adaptive design principles.

There were also a number of questions on general Drupal configuration and site building related to placing blocks, menus, rearranging content, configuring views etc. (which would all rely on a deep knowledge of Drupal's admin interface and how it interacts with the theme layer).


On this exam, I scored an 86.66%, and (as with the other exams) a nice breakdown of all the component scores was provided in case I want to brush up on a certain area:

  • Fundamental Web Development Concepts : 90%
  • Theming concepts: 80%
  • Sub-theming concepts: 100%
  • Templates: 75%
  • Template functions: 87%
  • Layout Configuration: 90%
  • Performance: 80%
  • Security: 100%

Not too surprising, in that I hate using templates in general, and try to do almost all work inside process and preprocess functions, so my templates just print the markup they need to print :P

I think it's somewhat ironic that the Front End and general Developer exams both gave me pretty good scores for 'Fundamentals', yet the back-end exam (which would target more programming-related situations) gave me my lowest score in that area!


I think, after taking this third of four currently-available exams (the Site Builder exam is the only one remaining—and I'm planning on signing up for that one at DrupalCon LA), I now qualify for being in Acquia's Grand Master Registry, so yay!

If you'd like to take or learn about this or any of the other Acquia Certification exams, please visit the Acquia Certification Program overview.

Apr 16 2015
Apr 16

Apr. 16 2015

How does mobile-friendliness affect Google search rankings?

Google reports:

“Starting April 21, we will be expanding our use of mobile-friendliness as a ranking signal. This change will affect mobile searches in all languages worldwide and will have a significant impact in our search results. Consequently, users will find it easier to get relevant, high quality search results that are optimized for their devices.” (Google Webmaster Central Blog, 2015)

In other words, if you want your website to rank higher on searches for different size devices such as mobile phones or tablets, it is in your best interest to begin implementing a mobile-friendly layout. If you are new to the concept, Google provides the "Google Guide to Mobile-Friendly Websites," which covers the basic information, requirements and steps for creating a mobile-friendly site. This guide contains tips that relate to being able to perform online tasks easily and choosing themes that remain consistent throughout varying devices.

Is my website mobile-friendly?

There are a number of free online tests that will run checks on your website to determine its level of mobile-friendliness. Websites such as http://responsivetest.net/ and http://www.responsinator.com/ are good test sites that allow visitors to see what their site will look like when displayed on different screen sizes (i.e. desktop, tablet or mobile).

Google also has a Mobile-Friendly Test specifically directed towards their new search algorithm. Instead of presenting a site on different devices, Google’s test checks to see if the website passes certain mobile-friendly requirements that are taken into consideration by their search algorithm. Because this is the test that a website must pass in order to meet the new search algorithm’s criteria, it is a must for those who want to rank higher in mobile Google searches.

Should I implement mobile-friendliness on my website?

Yes, if you want to obtain the maximum potential your site has for being searched, you should. In today’s technological setting, over half of all web searches are performed through mobile devices. Cisco claims that “Global mobile data traffic grew 69 percent in 2014. Global mobile data traffic reached 2.5 exabytes per month at the end of 2014, up from 1.5 exabytes per month at the end of 2013” (Cisco Visual Networking Index (VNI), 2015). While mobile traffic rapidly increases, companies realize they need to incorporate new marketing tactics such as creating a clear mobile-friendly agenda for their web properties. eMarketer predicts that “in 2015, mobile search will surpass desktop, and by 2018, mobile search ad spending will make up 76.7% of all digital search ad spending” (eMarketer, 2014).

What are ways I can make my website mobile-friendly?

There are multiple solutions for making your site mobile-friendly. The three main options entail creating a mobile application, creating a mobile version of your website, or implementing responsive web design (RWD) on your website. At Achieve Internet, we strongly believe RWD is the best solution for creating a mobile-friendly website.

Mobile applications are also a popular option. An application can display your site’s content in a simple, user-friendly fashion. Since Google now incorporates app indexing, an organization’s mobile app can also be searched on Google. App indexing presents “deep links to your app [that] appear in Google Search results on Android so users can get to your native mobile experience quickly and easily” (Google Developers). However, applications are very specific to a business, compared to a blog or news app that feeds information. Building a complete business-specific app takes significant time, effort, and planning. It can also become a costly option, since there is a plethora of unique deliverables and technical development requirements involved.

Mobile versions of a website can also be created to provide users with a mobile-friendly experience. Mobile websites utilize a different URL than your main site, and are made to display easily digested versions of its content. This means a mobile website might have bigger buttons and more spread-out content, making it easier for the user to access what they need on the site. However, one big problem with creating a mobile version is that content may get left out: some aspects of the original website make certain layouts or functionality difficult in a mobile version, and they must be removed.

Responsive Web Design (RWD) is a reliable option and a common practice in mobile-friendly web development. It allows a site to adaptively present itself on multiple screen sizes or devices. Below is an example of the Achieve-built Grammy Amplifier website shown on a mobile device as well as a tablet and desktop.

Mobile & Tablet




Because different devices have different screen sizes, developers incorporate Cascading Style Sheets 3 (CSS3) to satisfy device display requirements. This implementation greatly improves the user experience as visitors will be able to see a clear, simple layout of the website no matter what device they are viewing on.

Should I implement RWD?

Yes. If your website presents content in a complex manner, such as an eCommerce or marketing site requiring multiple layers of functionality, we recommend implementing RWD. RWD will carry over all the content from your original site and make it easier to navigate on different screen sizes. Unlike a separately maintained mobile website, RWD keeps the mobile presentation of your site in sync with the original, and it keeps all of the content available for users as well.

What do we have to look forward to?

Forrester states:

“Many developers tell Forrester that they use responsive web design (RWD) as their web design philosophy, but the highest adoption rate by vertical is less than 20%. This suggests that many organizations are currently working on responsive website projects for the first time” (Forrester Research, 2014).

That is not the case here at Achieve: we have gained a lot of experience implementing RWD for the many clients who’ve requested it. Through these projects, Achieve developers have already built up layouts and good mobile-friendly development practices. With more site owners contacting us about making their sites mobile-friendly because of Google’s new search algorithm, we will only sharpen our RWD skills as we proceed.

Don’t wait until your site becomes obsolete in Google rankings and user experience. Achieve can help you make your site mobile-friendly and segue into the future. As technology advances at such a rapid speed, make sure your company is prepared to take advantage of all it has to offer.

For more information about mobile-friendly & responsive web design, refer to any of our previous posts:

  • A Drupal Developers Guide to Responsive Web Design
  • Does Your Site Need Responsive Web Design?
  • Why You Should Design For Mobile First
  • Responsive Web Design - Fact Sheet

Apr 16 2015
Apr 16

One of the major disadvantages of entities in Drupal 7 is the lack of built-in support for comments. This is due to the model of the core comments module, which is heavily tied to nodes (node comments are essentially settings of each content type) and not available for other entity types. In Drupal 8 this has already changed, and comments are considered fields that can be attached to any entity type.

For Drupal 7, in the meantime, there’s been a similar solution for a while, which is the reply module. The reply module leverages the Entity API to define a “reply” entity type, and offers a UI to create “reply” bundles. Then, via the Field API, you can add a “Reply” field to any entity type, choosing the reply bundle that should be used by that particular field. The flexibility that gives is huge, since it means that you’re not restricted to just a comment type for your application.

I’ve been using it in a project for some time, and I have to say the module works pretty well, although there’s no official stable release yet. One of the problems I came across when I started using it was the lack of support for the panels module, so I decided to write a ctools “content_type” plugin to add that support myself. Here is the process I followed.

First, let ctools know where the ctools plugins live:

Ctools Plugin Directory (PHP):

/**
 * Implements hook_ctools_plugin_directory().
 */
function reply_ctools_plugin_directory($module, $plugin) {
  if ($module == 'ctools') {
    return 'plugins/' . $plugin;
  }
}

Then, create the plugin in the relevant folder. Let’s name it reply_add_form. Following the code above, in this case it would be under “{module_folder}/plugins/content_types/reply_add_form”. Let’s have a look at the $plugin declaration array:

Plugin array (PHP):

/**
 * Plugins are described by creating a $plugin array which will be used
 * by the system that includes this file.
 */
$plugin = array(
  'single' => TRUE,
  'title' => t('Reply - Add form'),
  'description' => t('Form to add a new reply to an Entity'),
  'category' => t('Reply'),
  'defaults' => array(),
  'render callback' => 'reply_add_form_content_type_render',
  'edit form' => array(
    'reply_add_form_select_entity_type' => t('Reply form - Entity Type'),
    'reply_add_form_select_reply_field' => t('Reply form - Reply Field'),
  ),
  'required context' => array(
    new ctools_context_required(t('Entity being viewed'), 'any'),
  ),
);

No magic in there if you’re familiar with ctools (if you aren’t, you can install the advanced_help module, which has plenty of documentation on how to create your own ctools plugins). One point to highlight is that the plugin has two edit forms instead of one: in the first one, we choose the entity type for which we’re adding the form, and in the second one, we choose the reply field to use from those available within the selected entity type (there could be more than one).

Also, note that the required context bit accepts “any” context available within the panels page. I had to do it that way because, unlike with nodes, it’s impossible to know in advance all the entity types that will be available in the system, or how the user will name the arguments and contexts in panels, and so offer just the relevant options. Instead, all contexts are accepted, and the user (who will usually be a developer or a site-builder anyway) is responsible for choosing the right one in the settings form.

Let’s have a look at the settings forms:

Settings forms (PHP):

/**
 * Returns an edit form for custom type settings.
 */
function reply_add_form_select_entity_type($form, &$form_state) {
  $entities = entity_get_info();
  $options = array();

  // Get all existing entities.
  foreach ($entities as $entity_type => $entity) {
    $options[$entity_type] = $entity['label'];
  }

  $form['config']['reply_entity_type'] = array(
    '#type' => 'select',
    '#title' => t('Entity Type'),
    '#options' => $options,
    '#default_value' => isset($form_state['conf']['reply_entity_type']) ? $form_state['conf']['reply_entity_type'] : NULL,
  );
  return $form;
}

/**
 * Returns an edit form for custom type settings.
 */
function reply_add_form_select_reply_field($form, &$form_state) {
  $options = array();
  // Get entity type chosen in previous step.
  $entity_type = $form_state['conf']['reply_entity_type'];

  // Get all the field instances for the given entity type, and add the 'reply'
  // ones as options.
  $field_map = field_info_field_map();
  $reply_fields = array_filter($field_map, '_reply_add_form_filter_reply_fields');

  foreach ($reply_fields as $field_name => $fields_info) {
    if (!empty($fields_info['bundles'][$entity_type])) {
      $options[$field_name] = $field_name;
    }
  }

  $form['config']['reply_field'] = array(
    '#type' => 'select',
    '#title' => t('Reply field'),
    '#options' => $options,
    '#default_value' => isset($form_state['conf']['reply_field']) ? $form_state['conf']['reply_field'] : NULL,
  );
  return $form;
}

/**
 * Submit handler for the custom type settings form.
 */
function reply_add_form_select_entity_type_submit($form, &$form_state) {
  $form_state['conf'] = array_merge($form_state['conf'], array_filter($form_state['values']));
}

/**
 * Submit handler for the custom type settings form.
 */
function reply_add_form_select_reply_field_submit($form, &$form_state) {
  reply_add_form_select_entity_type_submit($form, $form_state);
}

Pretty simple stuff. As mentioned, the first one allows you to select the entity type for which the reply form will be added. The second one gets a simple map containing all the fields in the system, filters it to keep only reply fields, and then filters again to show only the ones available for the entity type selected in the first settings form. The forms look like this:

(Screenshots: reply_add_form_settings_1, reply_add_form_settings_2)

Finally, the render function, which simply takes care of loading the entity passed in the plugin config, and calls the appropriate function of the reply module, to let it create the reply form.

Full 'reply_add_form' plugin (PHP):

/**
 * Output function for the 'reply_add_form' content type.
 */
function reply_add_form_content_type_render($subtype, $conf, $panel_args, $context) {
  if (!$context[0]->data) {
    return;
  }
  $entity = $context[0]->data;
  $reply_field_instance = field_info_instance($conf['reply_entity_type'], $conf['reply_field'], $conf['reply_entity_type']);

  $block = new stdClass();
  $block->title = '';

  $form_options = array(
    'entity_id' => $entity->identifier(),
    'instance_id' => $reply_field_instance['id'],
  );
  $output = drupal_get_form('reply_form', (object) $form_options);

  $block->content = array(
    '#markup' => render($output),
  );
  return $block;
}

I’ve uploaded the plugin file as a gist to github, so you can download it from here. Also, it’s worth noting that there’s an issue in the issue queue of the reply module to get the panels support implemented, with this plugin supplied as a patch, so hopefully you won’t need to keep it as a separate plugin, as it should make it into the next alpha release soon, and in the dev branch in the next few days.

Apr 16 2015
Apr 16

Drupal is more fun - meet Karen Grey

I sat down with Karen Grey at Drupal Camp Brighton 2015 to find out more about who she is and what she does with Drupal. I apologize for taking her out of the code sprints for that time! Since we spoke, Karen has taken on a position as Senior Drupal Developer at i-KOS in their Brighton office.

During our conversation, we touch on the difference between early Drupal 6 and working with Drupal 7 now, how public contribution helps companies build their reputation and build passion and excitement in employees, Drupal's tough-love coding standards, how Drupal's steep learning curve pays off ("I've used other CMS's, they're not fun to use.") in enjoyment and community, the expense of the project discovery phase when using proprietary software compared to open source, personalization on the web as the next big thing (shoutout Acquia Lift and Context DB!), and more!

Guest author dossier

  • Name: Karen Grey
  • Twitter: @greenybeans84
  • Drupal.org profile: karengreen
  • Work affiliation: Senior Drupal Developer, i-KOS Brighton office
  • LinkedIn profile: Karen Grey
  • A recent Drupal contribution: Commerce Smartpay module
  • Which Drupal version you started with: Drupal 6. "I built my own CMS for my final year project at university, so I understood what a CMS is and does and when I was introduced to Drupal, I thought it was pretty cool."

jam's favorite photo of Karen

Karen Grey's magnificent moustache!

Interview video

[embedded content]

Karen Grey, Senior Drupal Developer
Apr 16 2015
Apr 16

Drupal 7 does not have built-in support for representational state transfer (REST) functionality. However, the RESTful Web Services module is arguably the most efficient way to provide resource representations for all the entity types, by leveraging Drupal's powerful Entity API. Unmodified, the module makes it possible to output the instances of the core entity types – node, file, and user – in JSON or XML format. Further entity type resources and formats are possible utilizing hooks in added code.

As with any REST solution, the RESTful Web Services module supports all four of the fundamental operations of data manipulation: create, read, update, and delete (CRUD). The corresponding RESTful API HTTP methods are POST, GET, PUT, and DELETE, respectively.

Anyone hoping to learn and make use of this module – especially for the first time – will likely be frustrated by the current project documentation, which is incomplete, uneven, and lacking clear examples. This article – a brief overview – is intended to introduce what is possible with this module, and help anyone getting started with it.

We begin with a clean Drupal 7 installation (using the Standard profile) running on a virtual host with the domain name "drupal_7_test". After installing and enabling the module, we find that it does not have the configuration user interface one might expect. In the demonstration code below, we focus on the node entity type.

Nabbing a Node

The simplest operation – reading an entity instance – is performed using a simple GET request containing the machine name of the entity type and the entity's ID.

To allow an anonymous user to read the node using REST, it is insufficient to grant that role the "View published content" permission. Moreover, the "Bypass content access control" permission has no effect. Rather, the module establishes an "Access the resource" permission for each entity type, which also must be enabled for anonymous users. (When testing anonymous access from a web page, be certain you’re not logged into the website in the same browser, because then you are already authenticated.)

In this example, we read the fields in the first page in the Drupal database (node/1), which has a title of "Page Title" and a body field of only "Body text". To display the information in JSON format, in a web browser, the URL would be: http://drupal_7_test/node/1.json

The results are, as expected, in JSON (truncated here to the relevant fields):

{
  "body": {
    "value": "\u003Cp\u003EBody text.\u003C\/p\u003E\n",
    ...
  },
  "title": "Page Title",
  ...
}

If you get an HTTP 403 error ("Forbidden"), verify the two required permission settings for anonymous users accessing nodes.
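Notice that the HTML markup in the body field comes back with JSON-escaped angle brackets (\u003C and \u003E); any standard JSON parser restores them. As a quick illustration, here is a decoding sketch using Python's standard library (the string below is a hand-assembled sample shaped like the output above, not a live response):

```python
import json

# A hand-assembled fragment in the shape of the response above;
# \u003C and \u003E are the JSON escapes for < and >.
raw = '{"body": {"value": "\\u003Cp\\u003EBody text.\\u003C\\/p\\u003E\\n"}, "title": "Page Title"}'

node = json.loads(raw)
print(node["title"])          # the node title
print(node["body"]["value"])  # the markup with < and > restored
```

Any JSON library in any language performs the same unescaping automatically, so the escapes are only visible when inspecting the raw response.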

To display the same information as XML, we need only alter the path extension: http://drupal_7_test/node/1.xml

The information is the same, but in a substantially different format:

      <p>Body text</p>
  <title>Page Title</title>

To get all of the nodes – or at least the first 100, by default – remove the entity ID: http://drupal_7_test/node.json

If we again want to access that first node only, but not use a URL, then we can employ cURL on the commandline: curl -X GET http://drupal_7_test/node/1.json

More descriptive options, such as --request GET, can be chosen. To see details of operations that may be helpful for debugging, add the -v option (a.k.a. --verbose).

If you do not want anonymous REST requests to have access to your website's content, use HTTP authorization. Apparently, the simplest way to do so is to enable the Basic Authentication Login submodule (restws_basic_auth). Connect by utilizing a username that has the appropriate "Access the resource" permission and begins with the (mandatory lowercase) string "restws". Assuming that the user "restws_user" has a password of "password", then we can display the first node with this command: curl -X GET http://drupal_7_test/node/1.json -u restws_user:password
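Under the hood, curl's -u flag simply adds an HTTP Basic Authorization header: the username and password joined by a colon and Base64-encoded. A small sketch of that encoding, using the throwaway credentials from the example:

```python
import base64

username, password = "restws_user", "password"

# HTTP Basic auth: "user:pass", Base64-encoded, sent in an Authorization header.
token = base64.b64encode(f"{username}:{password}".encode()).decode()
auth_header = f"Authorization: Basic {token}"
print(auth_header)

# The encoding is trivially reversible, which is why Basic auth should only
# ever be used over HTTPS on a real site.
assert base64.b64decode(token) == b"restws_user:password"
```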

The aforesaid username pattern could be represented as the regular expression /^restws.*/. That pattern can be customized in the website's settings.php file. For instance, to grant authorization only to those usernames beginning with the string "WS_", one adds the following (case-sensitive) regex to the settings.php file: $conf[ 'restws_basic_auth_user_regex' ] = '/^WS_.*/';
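PHP's preg functions wrap patterns in delimiters (the surrounding slashes); the pattern itself is what sits between them. To illustrate which usernames each pattern accepts, here is a sketch in Python's re module (used purely for illustration; the anchors behave the same way as in PHP):

```python
import re

default_pattern = r"^restws.*"  # from /^restws.*/, delimiters stripped
custom_pattern = r"^WS_.*"      # from /^WS_.*/

# The default pattern authorizes names like "restws_user",
# but not names where "restws" appears later in the string.
assert re.match(default_pattern, "restws_user")
assert not re.match(default_pattern, "user_restws")

# The customized pattern is case-sensitive: "WS_bob" passes, "ws_bob" does not.
assert re.match(custom_pattern, "WS_bob")
assert not re.match(custom_pattern, "ws_bob")
```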

Creating or Changing a Node

Rather than using the conventional content management UI for adding an entity resource, we can do it with REST – specifically, a POST request. There are several ways to accomplish this: using the core function drupal_http_request(), or cURL PHP code, or cURL on the commandline, which we will use here. Returning to the XML format, we can use a file based upon the one output earlier, containing only those elements needed to create a new node with some content in the body field:

  <title>New Page Title</title>
      <p>New body text</p>

Assuming that the file is named create.xml, we can create a new node with this command:

curl -X POST -H "Content-Type: application/xml" -d @create.xml \ 
    http://drupal_7_test/node -u restws_user:password

The output in the command window indicates that a second node, node/2, has been created:

<?xml version="1.0" encoding="utf-8"?>
<uri resource="node" id="2">http://drupal_7_test/node/2</uri>

If the XML file were missing an essential element, such as the node type, we would receive an error message such as: 406 Not Acceptable: Missing bundle: type
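The same POST can of course be assembled from code; the article mentions drupal_http_request() and PHP cURL as options. As a neutral sketch, here is how the equivalent request object looks when built with Python's standard library – nothing is actually sent, so the throwaway test domain does not need to resolve:

```python
from urllib.request import Request

# Stand-in payload for illustration; as noted above, a real request
# must also include the bundle (the node type) or a 406 error results.
xml_body = b"<node><title>New Page Title</title></node>"

req = Request(
    "http://drupal_7_test/node",
    data=xml_body,
    headers={"Content-Type": "application/xml"},
    method="POST",
)

print(req.get_method())                # the HTTP method for a create
print(req.get_header("Content-type"))  # the payload format declaration
```

Swapping the method to PUT and appending the entity ID to the URL models the update case in the same way.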

Modifying an existing node is similar, but instead of using the POST method, use PUT and specify a node ID. The XML file needs even fewer elements:

  <title>Modified Page Title</title>
      <p>Modified body text</p>

The cURL command is straightforward:

curl -X PUT -H "Content-Type: application/xml" -d @modify.xml \ 
   http://drupal_7_test/node/2 -u restws_user:password

The output will not include any <uri> element containing the node ID, since no new one was created.

Nuking a Node

To remove any entity resource – in this case, the first node in the database – use the DELETE method:

curl -X DELETE http://drupal_7_test/node/1 -u restws_user:password

The baffling, yet correct, output is merely:


Using this module, one can also: filter the results when querying for multiple resources (by specifying the user ID or taxonomy ID); sort by one of the properties (either in ascending or descending direction); change the default limit of 100 resource references. For debugging purposes, you can enable the logging of the details of all web service requests to a specified file, with a simple addition to the Drupal settings.php file, for example:

$conf['restws_debug_log'] = DRUPAL_ROOT . '/restws_debug.log';
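The filtering, sorting, and paging options mentioned above are expressed as plain query-string parameters on the resource URL. A hedged sketch of composing such URLs follows; the parameter names (sort, direction, limit, plus property names like uid as filters) are my reading of the module's conventions, so check them against your restws version before relying on them:

```python
from urllib.parse import urlencode

base = "http://drupal_7_test/node.json"

# Assumed parameter names: sort/direction/limit for ordering and paging,
# entity property names (e.g. uid) as filters.
listing = base + "?" + urlencode({"sort": "nid", "direction": "DESC", "limit": 10})
by_author = base + "?" + urlencode({"uid": 1})

print(listing)
print(by_author)
```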

If in your own explorations, you discover additional techniques and pitfalls, please consider publishing them, as there appears to be both interest and confusion as to how to leverage this module fully.

Image: "String Telephone" by psd is licensed under CC BY 2.0

Apr 16 2015
Apr 16

This week is National Volunteer Week, a week to recognize that volunteerism is a building block to a strong and thriving community.  The Drupal Community is no different: as an open-source project our volunteers are vital to the health and growth of our project.  There are so many roles and levels of contribution within our Drupal ecosystem that we at the Drupal Association wanted to highlight how much your contribution means to us and our work.  I took some time and asked around; here’s some of the glowing praise our staff has to say about our phenomenal volunteers.

“I am continually impressed with the volunteers that I get to work with.  Not only do they rock at their jobs, but they are so dedicated to the work that they do for Drupal and the Cons specifically!  Anyone who has volunteered for a Con knows that it is a large undertaking, and a responsibility that isn't taken lightly. These volunteers come back each week with positive attitudes, valuable ideas and great results.  Although I have only been at the Association for a little over six months, I can truly say that these volunteers are what gives our Cons the 'special sauce' and I am lucky to get to work with volunteers from around the globe on a daily basis.”

- Amanda Gonser, DrupalCon Coordinator

“Most of my day is spent with Drupal Association staff, who have the luxury of getting paid to think about Drupal for 8 hours a day. A good chunk of my job is working with volunteers, though: the Board of Directors, Drupal.org Working Groups, Community Organizers, DrupalCon session speakers. So many of you give so much of your time and your smarts back to the project and the community, and it's my privilege and duty to learn from you all.”

- Holly Ross, Executive Director

"I look forward to working with community volunteers to help build and improve Drupal.org. The site would not be where it is today without everyone's work."

- Neil Drumm, Drupal.org Lead Architect

“I want to thank Cathy and Jared for being my sprint mentors at DrupalCon Latin America. I made my first comment on the issue queue. It felt so good to finally cross into that world, even if it was just a baby toe crossing over.”

- Megan Sanicki, COO

“It feels like I’m hearing news every day about the amazing programs our community members put together all over the world — from Los Angeles to Uganda and beyond. Without help from amazing community volunteers who donate time working on social media, in the issue queues, or even volunteers who take a brief moment to drop a note in my inbox (“have you seen this?”), these stories would never be shared with our wider community.”

- Leigh Carver, Content Writer

Today, we invite you to take a few minutes to recognize your fellow Drupal contributors by tweeting or sending a message via IRC to appreciate each other.  After all, without our volunteers, our Drupal Community would not be as lively, bright, and welcoming.  Want to lend a hand?  Our get involved page has plenty of ways to volunteer with the project.

Apr 16 2015
Apr 16

Drupal has a solid Ajax interface; we can hook into the Ajax events at various places. I will explain five important methods:

1) beforeSerialize - called before data is packed and runs before the beforeSend & beforeSubmit

2) beforeSubmit - called before the ajax request

3) beforeSend - called just before the ajax request

4) success - called after ajax event returns data

5) complete - called after the request ends

Let's say you want to capture some Ajax event (built in or provided by another module) to do something like a Views refresh. We can use a very simple logic to do that.

(function($) {
  Drupal.behaviors.timetracker = {
    attach: function(context, settings) {
      /* Task submit ajax */
      for (var ajax_el in settings.ajax) {
        if (Drupal.ajax[ajax_el].element.form) {
          if (Drupal.ajax[ajax_el].element.form.id === 'TYPE-node-form') {

            // Called before data is packed; runs before beforeSend & beforeSubmit.
            Drupal.ajax[ajax_el].beforeSerialize = function(element_settings, options) {
              // Do something.
            };

            // Called before the ajax request.
            Drupal.ajax[ajax_el].beforeSubmit = function(form_values, element_settings, options) {
              // Do something.
            };

            // Called just before the ajax request is sent.
            Drupal.ajax[ajax_el].beforeSend = function(xmlhttprequest, options) {
              // Do something.
            };

            // Called after the ajax event returns data.
            Drupal.ajax[ajax_el].success = function(response, status) {
              // Trigger a Views refresh of the activity view here.
              // Pass back to the original method.
              Drupal.ajax.prototype.success.call(this, response, status);
            };

            // Called after the request ends.
            Drupal.ajax[ajax_el].complete = function(response, status) {
              // Do something.
            };
          }
        }
      }
    }
  };
})(jQuery);
In my case, I get all the forms that are set up for Ajax handling and compare them with my node form. Once the form submission happens and the Ajax data is returned, the "success" method is called. As new data has been added, I then refresh the respective view using the built-in Drupal procedure for refreshing a View. Note that I use Drupal.ajax.prototype.success.call to make sure the core function runs at the end; otherwise you may notice inconsistencies.

I hope this sheds some light, let me know if you need some inputs from me.

Apr 16 2015
Apr 16

This is the story of how I built a first prototype for LinkForward, a web application for power networkers, and how I built it in record time using Drupal (12 hours), without a single line of custom code.

Today, MVP (Minimum Viable Product) is a part of every startup founder’s vocabulary. But almost 10 years after Steve Blank first published his ideas on customer development in "The Four Steps to the Epiphany”, I’ve seen myself and countless other startup enthusiasts over-invest in the development of their first prototype for web applications. When I talk with founders about their startup, they will often say they are building an MVP and then immediately rattle off a long list of complex, expensive features they believe their product will not be viable without (instant messaging, video chat, you name it). The worst is, I’ve done the same several times now. At the beginning of the year, when I got the idea for LinkForward, I decided to do things differently.

The goal

LinkForward is a web application that makes it easier to build and maintain a high quality professional network. Its goal is to make it easy for 2nd degree connections - that don’t know each other but that have someone in common - to introduce each other to their own network. I started building it as a side project because I felt that there is a serious need for a tool that makes it easier to be a power networker. Since mid 2014 I have been working much more consciously on my network. Even though I had returned to Belgium 5 years ago, I realized that I wasn’t really connected into the IT scene at home. Along the way I started feeling that there is an important gap in my networking toolset.

I’ve now got over 2k connections, and many of them I no longer remember. I would love to have a tool that helps me make stronger, more meaningful connections. That helps me build a vibrant, active network with those people that will help me make a bigger impact on the world. Just before New Year, I had joined a friend and her co-founder to advise them about their startup. To implement their idea they wanted to build an AI that would be able to make recommendations to people. In that discussion I suggested they consider using a network of people as an alternative to an expensive and complex AI, and this is how I got the idea for LinkForward.

What is wrong with Linkedin

I really love Linkedin, I was never really good at organising business cards, and Linkedin gave me a much better alternative to keep track of my professional network. While Linkedin is great for retaining business connections, it’s inadequate if you want to actively foster your relationships. I see 3 big problems:

  1. In the name of greater engagement Linkedin is actively seducing their users to waste time on empty/shallow interactions: Linkedin’s news feed is saturated with listicles and other sensational but forgettable content. I don’t think that a single click “like XYZ’s new profile picture” helps you build a meaningful relationship in a professional context.
  2. All the incentives are geared towards indiscriminately accepting connections. Recruiters and sales people, the premium users on Linkedin, need to build as many connections as possible to increase the number of people they can reach. As a result, on a weekly basis I get connection requests from accounts that are obviously fake but that have already accumulated over a hundred connections.
  3. Behaviour reinforcement loops in the system were primarily built for new users (there are some positive improvements on this area recently, but the mass market nature of the system means it is not primarily geared towards power users).


A little while back I read Pretotype It, a free book by Alberto Savoia. While his ideas are very similar to the Minimal Viable Product that Steve Blank popularised, this book helped me to finally get it. You see, when Steve Blank introduced the MVP he left it really open what exactly "viable" meant. And there are a lot of interpretations:

  • Landing page: A page that explains a product before it is built and that lets people sign up to receive notifications about the product.
  • Crowdfunding campaign: Using a platform like Kickstarter or Indiegogo to raise funds from your future customers.
  • Mockup prototype: A prototype built in a mockup tool, be it Balsamiq, Powerpoint, or something similar.
  • Concierge MVP: a manual service, where the founder offers a service that mimics the product as you envision it working in the future.
  • Mechanical turk MVPs (or also “Wizard of Oz MVP”): On the outside it looks as if everything is running automatically, people interact through a web interface with your product, but on the inside you are doing the most expensive interactions by hand.
  • Minimum Loveable Product: a counter action against MVPs that were too minimal. Often focussing on design, to make a product more “loveable”.

The problem is that most startup founders go from demand validation (e.g. talking with potential customers, landing pages, mockups or crowdfunding campaigns) straight to their Minimum Loveable Product. In the process a lot of founders invest loads of time and money in demand that was only validated as an idea. And this is where a lot of people fail badly: they get initial traction and start investing in an idea that people think is great, but there is a really big difference between something that sounds great and something that will actually work.

That is why I’ve learned the hard way that you should build a pretotype before you build your version one. And your pretotype should be modelling the engine of your startup, the key interaction that you believe will be bringing value to your customers.

If the engine doesn’t work you can add all the whistles and bells you want, they will not make your startup a success. On the other hand if you can get that basic interaction going, even if it’s only for a small group of people, and not very effectively, you can start experimenting how you can get from there to a product that people will not just like but actually love.

Pretotyping LinkForward with Drupal

So how do you build your startup’s engine? For LinkForward the initial idea was that introduction requests would be the engine of the community platform: When a user signs up, they can ask an introduction request from the community. When another user in the network knows someone who would be a good introduction, they can flag that person and receive an email they can forward to the person in question.

First iteration (8h) - from idea to pretotype

I built the first iteration for LinkForward on two consecutive evenings, in about 8 hours of work. To do so I used Drupal Gardens, a hosted Drupal solution built by Acquia. With it, it’s really easy to get a Drupal site started without any developer or system administrator knowledge.

It had been a while since I last built a Drupal site, and my Drush knowledge had become a bit rusty (read: I forgot everything). I could have asked my colleagues to set up a site for me, or have brushed up my command line knowledge, but I wanted to show that it is possible to make a prototype in Drupal without any real coding skills. The only thing I asked help with was the setup of the nameserver so that I could have a real domain and not just a subdomain.

Setting up the introduction request was really easy: I just added the fields for it to the user profile. Most of my time I spent tweaking the View that was going to display one user’s ask at a time, and the 2 flags I created to let a user interact with the system. Since I didn’t want to involve a themer, I had to do some trial and error positioning the fields using an HTML table (ok, maybe this is a bit more hardcore Drupal skill, but I personally don’t consider HTML to be code, since you are manipulating the output directly).

Go live (4h) - inviting the first users

After these 2 evenings I spent another 4 hours doing some extra polishing. I’m not sure anymore when I wrote the content for the home page and the about page, but after approximately 12 hours of work, I went onto Skype and pinged about 70 friends, mostly from the Drupal community, who I thought the product might be relevant for. That is how I got the first 7 users.

Several friends suggested I check out BNI (Business Network International), "the world's largest referral organization" founded in 1985 by Dr. Ivan Misner. I found a chapter in Ghent that was having their weekly meetup on Wednesday at 7am; I called them up and was able to attend the next morning. Of the approximately 30 people there that morning, 12 signed up for an account.

The 3rd group I recruited from was the attendees at BootCircle, the monthly meetup I organize for startup founders in Ghent and Leuven. I also handed out flyers (black&white, hand-torn, copy machine quality) at a few other meetup events I attended; that got me another 12 participants. Including a few colleagues, that brought the total to about 40 participants. Meetup.com, a site that facilitates the organisation of local "user group” style events, is a really powerful tool for setting up ad hoc meetings where you can get feedback quickly from people in your target audience.

Engagement experiments

With this initial group of people, I could already see that the engine was not yet working: most of the people that signed up didn’t return, even after they had been prompted. The engagement rate was also rather low. That is why I took the site off of Drupal Gardens so that I could do a few experiments with gamification modules.

I used the user points module in Drupal to create introduction points: every time you flagged that you wanted to introduce somebody, a point was awarded to you and subtracted from the person you indicated you could make an introduction for, a typical gamification scheme. Initially I used flag actions for this from the flag module. But I bumped into some of the limitations of the flag module, and in the end I decided to power up.

This is why I installed the rules module. Rules is Drupal’s blackbelt toolbox, it allows you to model complex business logic without a single line of code. Sending emails, adding or removing points, but also publishing and unpublishing nodes, there are few things you can’t do out of the box with Rules.

I wanted to encourage my users to invite their friends to the platform. Setting this up is trivial with the invite module if you are fine with a simple email invitation system. It’s also possible to extend the module with other types of invitations, but those don’t come out of the box. The invite module works together nicely with the user point module, so that you can award points when a member invites a new user to the platform.

At some point I thought I might be able to improve engagement if I would offer offline introduction groups a virtual place to facilitate the introductions they made. That is why I installed the organic groups module that allows you to set up sections of your site where users get different permissions depending on their group membership.

To make it easier to create a new account and to login, I added Hybridauth, a module that lets you integrate with a host of social login services. With it, you can create a new account on the site with a single click of a button, copying your information from the social network service.

Francis, a founder friend who I am masterminding with about LinkForward, suggested that we might increase engagement if a customer could add more than one introduction request. I moved the request fields into their own content type and refactored the views and flagging system to enable it.

All of these changes happened on the live site in configuration only. So not in a code driven way, using features, the way we normally build sites for our customers. I would strongly recommend against this kind of live fine tuning for established business platforms where customers have a fitness and continuity expectation. For a startup however, this kind of agility and live tweaking makes Drupal an extremely powerful platform that allows you to learn at minimal cost in really short time frames. With proper warning, I believe this could even be used in an enterprise environment for a beta pilot.

Summary: Why Drupal is a great tool for modeling startups

  • It is clickable, you don’t need to write code
  • Easy to build a complex content model and to visualize it in rich data display that you can build with views
  • It has the building blocks to create viral loops with modules like the Invite module
  • You can use it to create gamification processes with the User points module
  • Social login is easy to set up with Hybrid Auth, and allows you to easily recognise spam accounts
  • You can model complex business logic with the Rules module
  • User interaction processes can be built with the Flag module

Conclusion - More refactoring ahead

Linkforward is not yet working the way I want it to - its engine is still sputtering - but through the experiments I’ve done in the past months I’ve learned a lot of valuable lessons. Crucially those lessons were really affordable: I estimate I’ve spent about 80-100 hours of my own time, my assistant has spent about 10 hours sending emails as part of one of the experiments, I had about 10 hours of technical support from our developers to set up the site and to install the modules.

Next for Linkforward I’ve got a big overhaul in mind and I’m still considering if I want to throw away the current site, copy over the content and rebuild, or if I want to continue working with the existing site base. There is a minor image upload bug that I don’t want to waste developer time on, so I might as well just restart for the new prototype. I can copy over a lot of the configuration work, so that wouldn’t be such a big deal.

I’m really excited about this project for another reason: It demonstrated that it is possible to provide a lot of value for very little developer time. That is why with Pronovix we’ve decided to launch a service based on this principle: dripdojo.com. It is a service specifically designed for intrapreneurs or entrepreneurs who would like to take a hands on role in the design of their innovation project. The goal is to help them validate the engine of their idea before larger investments are made.

We’ve also been actively using our experiences when advising our startup customers, nobody can afford investing in features that people won’t use, especially not when you are trying to launch your startup...

Apr 15 2015
Apr 15

Almost everyone who does any form of Drupal development uses Drush - it's the Swiss Army Knife of the Drupal world. Drush is the Drupal Shell, and it lets you do a whole lot of amazing things with Drupal sites without actually going to the site, logging in, and clicking buttons. It's a command-line tool (and since I'm an old UNIX hand, it's just right for me).

Despite the fact that we all use Drush, it's pretty clear that some of us use it better than others. I'm often really impressed with seeing someone else use it effectively, with powerful aliases, and doing workflow things with drush that I could hardly imagine. And let's face it, we can't always have a mentor at hand.

This book, Drush for Developers Second Edition, can be your mentor. It's a pretty quick read (a little over 150 pages) as technical books go. It covers a lot of territory along the way, though. As all these books seem to do, it starts out with installing Drush on your server, and then moves forward into using it. Though I've been using Drush for some time now, I never have quite been able to grasp the more advanced uses, like using Drush to move code and features from development to test to production, for example. This book gives a really good idea of how to do some of those functions.

It's also good at giving simple examples of doing things like developing code to interact with Drush, and getting the most out of Drush in a developer environment.

The chapters in the book cover a lot of ground:

  1. Keeping your code and database synced up in different environments.
  2. Running and monitoring tasks in Drupal.
  3. Doing debugging and error handling in Drush - I found this chapter particularly enlightening (and, as it turns out, I really wish I had read it last week when I desperately needed it).
  4. Managing local and remote environments - running remote environments in Drush, and writing concise aliases.
  5. Setting up a development workflow using Drush - this gave me some great ideas for streamlining things I now do by hand; I'm really looking forward to trying this out.

You can buy a copy of this book at the Packt web site, and also at most technical book stores and online sellers such as Amazon.

Now if only I understood why this is the Second Edition when the first was apparently never published?

Apr 15 2015
Apr 15

The first initiative on the Drupal.org 2015 roadmap is ‘Better account creation and login’. One of the listed goals for that initiative is “Build a user engagement path which will guide users from fresh empty accounts to active contributors, identifying and preventing spammers from moving further.” This is something the Drupal Association team has been focusing on in the last few weeks.

The first change we rolled out a few days ago was a ‘new’ indicator on comments from users whose Drupal.org accounts are less than 90 days old. The indicator is displayed on their profile page as well. We hope this will help make conversations in the issue queues and forum comments more welcoming, as people will be able to easily see that someone is new and probably doesn’t yet know a lot about the way the community works.

Today we are taking another step towards making Drupal.org a more welcoming environment for new users. But first, a bit of background.

New users and spam

It is no surprise to anyone that a large number of the user accounts registered on the site are spam accounts. To fight that and prevent spam content from appearing on Drupal.org, we have a number of different tools in place. Of course, we don’t want these tools to affect all active, honest users of the site and make their daily experience more difficult. To separate users we are sure about from those we aren’t sure about yet, we have a special ‘confirmed’ user role.

All new users start without this role. Their content submissions are checked by Honeypot and Mollom, their profiles are not visible to anonymous visitors of the site, and the types of content they may create are limited. Once users receive the ‘confirmed’ role, their submissions are no longer checked by the spam-fighting tools, their profile pages are visible to everyone, and they are able to create more types of content on the site.

This system works pretty well, and our main goal is to ensure that valid new users get the ‘confirmed’ role as quickly as possible, to improve their experience and enable them to fully participate on the site.

The best way to identify someone as not a spammer is to have another human look at the content they post and confirm it. Previously, we had a very limited number of people who could do that (about 50). Because of that, it usually took quite some time for a new user to get the role. This was especially noticeable during sprints.

Today we’d like to open up the process of granting the ‘confirmed’ role to thousands of active users on the site.

‘Community’ user role

Today, we are introducing a new ‘Community’ role on the site. It will be granted automatically to users who have been around for some time and have reached a certain level of participation on Drupal.org. Users who have this role will be able to ‘confirm’ new users on the site: they will see a small button on the comments and user profile of any user who has not yet been confirmed. If you are one of the users with the ‘Community’ role, look out for this new Confirm button, and when you see one next to a user, take another look at what the person posted. If their content looks valid, just click ‘confirm’. By doing so, you will empower new users to fully participate on Drupal.org and improve their daily experience on the site.

We expect to have at least 10,000 active users with the ‘Community’ role. With so many people able to grant the ‘confirmed’ role, new users should be confirmed faster than ever before.

If you aren’t sure whether you have the ‘Community’ role, don’t worry. We will send an email notification to every user whose account receives the new role. The email will have all the information about the role and how to use it.

Thanks for helping us make Drupal.org a better place!

Apr 15 2015
Apr 15

On a sunny Amsterdam morning, we catch Morten (Tag1 Consulting, Geek Röyale) as he speeds through DrupalCon’s RAI Convention Center on an urgent Viking mission. We waylay him long enough for this brief, Danish-accented, rapid-fire chit-chat.

MORTEN: I am Morten, also known as MortenDK.

RONNIE RAY: You gave a talk this morning?

MORTEN: Yes, I gave a talk about the Drupal 8 Twig project, which is the new theming layer for Drupal. And I gave a demo on all the new and exciting things we have in it.

So it was really good to show off, from a front-ender’s perspective, everything that was done over the last couple of years and how the new system is going to work in Drupal 8. It was a real gas to finally show it off. People could see we’re not just lying; it’s actually real.

RR: So, can I ask you, what are you reading now?


RR: Any books, any magazines?

MORTEN: Ah – oh – uh – (bleep) – what’s the name of it? I actually have it on an audio file, it’s a fantasy story about... uh.. a lot of blood, a lot of personal vendettas. Good clean fun. But actually I haven’t had the time to read for a long time because I’ve been doing so much work on the Drupal project and I’ve been moving. Also, I took up my old hobby of painting miniatures again, just to geek out.

I’m a metal-head so pretty much anything... been into a lot of Opeth, a Swedish metal band – kind of a grown man’s metal. (Indecipherable.)

RR: Do you follow anyone on Twitter or FaceBook?

MORTEN: A couple, but normally not. Interviews with musicians are not always the most interesting thing, it’s the same thing as interviews with sports people, “So how did it go today?” “We played really hard!” “On the new album, we’re going to really show off.” So that’s kind of like... a couple of people... there’s a Swedish band called Arch Enemy I’ve been following closely.

RR: What’s the most exciting thing about Drupal 8 for you?

MORTEN: It is the front-end, the Twig system and the templates, and the way we have shifted focus in the front-end completely around, from being an afterthought to a whole new system that is built for front-enders instead of built by back-enders to front-enders. It’s kind of, we’ve taken control over our own destiny, and that I think is going to be the killer app for Drupal 8.

Apr 15 2015
Apr 15

It’s national library week and we could all probably learn good habits from those fastidious and oh-so-organized people in our lives: librarians.

Ahh, libraries. The smell of books, the sounds of pages turning, the incessant Karaoke singalongs of the library workers. OK, maybe the last one is a bit far-fetched, but we all know it’s founded in some truth. The fact remains that libraries are a hallowed ground of information consumption and organization.

That organization doesn’t happen by dint of chance. No, there’s a lot of hard work that goes into maintaining a collection, and these steps taken by your local library workers might inspire us to approach our websites with the same set of diligent hands. Get sorting, people!

Photo of Leeds Library courtesy of Michael D Beckwith.


Picture this: You’re scanning the stacks for that volume of poetry by your favorite Tang Dynasty era scribe, and whaddaya know — it’s not in its right place! Typical, amiright? Actually, it’s very atypical.

Library workers actively “read” the codes on the spines of books to ensure that they are arranged in correct order. Think of it as an analog version of web-crawling. Someone literally walks through the stacks and pulls misaligned books from the shelf in order to get them back to their rightful place. It’s continuous, tedious and probably the bane of a librarian’s existence. But it must be done.

Your site must organize content in a conscientious and intuitive manner for users to find what they need. Similar to identifying a book by its code, a good admin will identify the different content types and then start putting the pieces in place for a user-friendly website.


"Next to emptying the outdoor bookdrop on cold and snowy days, weeding is the most undesirable job in the library. It is also one of the most important. Collections that go unweeded tend to be cluttered, unattractive, and unreliable informational resources."

- Will Manley, "The Manley Arts," Booklist, March 1, 1996, p. 1108.


That quote really does say it all. And then it cites its source in clinical fashion. #LibraryStyle

Ever had a garden that was choked by invasive, persistent weeds? The only remedy is to pull those opportunistic plants by their roots and turn the soil for good measure. Libraries have to do something similar in order to remove old, non-circulating and damaged books that are clogging the shelf space. Sayonara, unwanted copies of Steve Wozniak’s Joke Book!

The post-audit process for a Drupal site should follow similar principles of economy and usefulness: was this custom module really needed for the UI? If it’s taking up space and not serving the users, then it’s gotta go! 
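On a Drupal 7 site, Drush can help with this kind of weeding. A minimal sketch, assuming Drush is installed and using a made-up module name (old_module); DRUSH defaults to `echo drush` here so the script prints the commands instead of executing them:

```shell
#!/bin/sh
# Sketch of "weeding" a Drupal 7 site with Drush. The module name
# old_module is hypothetical; DRUSH defaults to `echo drush` so the
# script prints the commands instead of executing them.
DRUSH="${DRUSH:-echo drush}"

$DRUSH pm-list --status=disabled --pipe   # list modules that are disabled but still on disk
$DRUSH pm-uninstall old_module -y         # uninstall one you no longer need
```

Once a module is uninstalled, its code can be removed from the tree entirely, which keeps the "shelves" clean.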

Cleaning up a site the Promet Way

One thing that helps get your site in good working order is a thorough audit.  Check out our presentation from DrupalCorn 2014 to get the skinny on how we drill down to a site’s components with the Drupal Site Audit.

Have a website that needs organizing? Drop your contact info in the form below and someone from Promet’s team will be in touch!