Nov 08 2015

Nothing gives me more professional satisfaction than seeing my work used for practical applications. In 2014, I was commissioned to write and maintain a few Views-related modules that add interesting functionality to that ecosystem.

Recently, the original sponsor of this work notified me that he had finally launched Vizala, his site that utilizes these modules. Vizala "aims to be the internet's most useful database for country, demographic, social, and economic information. Instead of just providing answers, [its] robust analytics allow for in-depth analysis and provide a complete picture of your topic of interest. Vizala only uses data from trusted sources and includes links to the original source for maximum transparency."

Below is a screenshot of an economic report provided by the site. Under the hood, this report is a view that uses the Flipped Table style to show data entries as columns, and Field Tooltip on header cells to provide more context for the information presented. The "Share" menu item invokes the Views Share functionality.

Vizala in action

The site uses many more Views modules, including Views Save to save filter settings. Congratulations to Vizala for the launch and thanks for sponsoring useful modules that the whole Drupal community can reuse!

Nov 30 2014

During development, it's useful to quickly find information about form elements. Quickly is the key word here: adding a dpm() call in a hook_form_alter() and then removing it becomes tedious.

Here's a very simple module to add debug information to the title attribute of all form elements:
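The idea, sketched below in JavaScript with hypothetical names (the actual module would do this in PHP from hook_form_alter()), is to walk the nested form array and copy each element's machine name and #type into its title attribute:

```javascript
// Hypothetical sketch: annotate every element of a Drupal-style form
// array with a debug title showing its machine name and #type.
// Keys starting with '#' are properties; other keys are child elements.
function annotateForm(form, parents = []) {
  for (const key of Object.keys(form)) {
    if (key.startsWith('#')) continue;
    const element = form[key];
    if (element === null || typeof element !== 'object') continue;
    const path = [...parents, key];
    element['#attributes'] = element['#attributes'] || {};
    element['#attributes'].title =
      path.join('][') + ' (' + (element['#type'] || 'container') + ')';
    annotateForm(element, path); // recurse into children
  }
  return form;
}

// Example form fragment.
const form = {
  name: { '#type': 'textfield' },
  actions: { '#type': 'actions', submit: { '#type': 'submit' } },
};
annotateForm(form);
```

Hovering any form element then shows its nested machine name, which is usually all you need to target it in a real hook_form_alter().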

Jun 26 2014

One of my long-standing gripes with Views is the inability to alter the behaviour of existing Views handlers (fields, filters, etc.) without subclassing the desired handlers to add new functionality. While the subclassing approach is fine when the functionality targets a new field type, it is not ideal when the required change should affect existing fields, across different types of handlers.

I was recently commissioned to create a module that displays tooltips on field headers, regardless of field type. This is an example of the latter case above, and my solution, Views Label Tooltip, exemplifies my technique to achieve field alterations that are orthogonal to handler types. Following is an explanation of how I did it.

We want to extend Views field settings with additional options and modify their rendering and/or behaviour based on these options. In our case, each field has an additional "Tooltip" setting that gets rendered on the field's label.

This is the key to the technique. We need to store the custom field settings such that they behave just like the native ones:

  • Get imported/exported with standard view import/export
  • Get overridden when a field is overridden

For this, we use a Views display extender. The official documentation for this plugin type says that "Display extender plugins allow scaling of views options horizontally. This means that you can add options and do stuff on all views displays. One theoretical example is metatags for views." The key word is horizontal: it applies to all display plugins. What we're trying to achieve here is similar, but for field handlers. So until the Views maintainers decide to generalize the concept of extenders to other Views objects, we can use (some would say abuse) display extenders to hold the additional settings for us.

Here's the implementation of our display extender:

class views_label_tooltip_plugin_display_extender extends views_plugin_display_extender {
  function options_definition_alter(&$options) {
    $options['tooltips'] = array('default' => array(), 'unpack_translatable' => 'unpack_tooltips');
  }

  function unpack_tooltips(&$translatable, $storage, $option, $definition, $parents, $keys = array()) {
    $tooltips = $storage[$option];
    if (!empty($tooltips)) foreach ($tooltips as $field => $tooltip) {
      $translation_keys = array_merge($keys, array($field));
      $translatable[] = array(
        'value' => $tooltip,
        'keys' => $translation_keys,
        'format' => NULL,
      );
    }
  }
}
Here, we define the new tooltips option, and since tooltips hold translatable text, we instruct Views on how to export this data structure. (Note: unpack_translatables don't work correctly for display extenders in the current version of Views, but I submitted a patch to fix that.)

Now this option gets imported/exported along with all the other Views settings, which fulfills our first storage requirement. But since there's only one copy of it in each display, we use this option as an array of tooltips, one entry per field. The tooltips option is manipulated from each field's admin UI, as shown below. There is no need for the display extender to have its own admin UI.
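Sketched in JavaScript (hypothetical names; the module itself is PHP), the single option acts as a map keyed by field name, so all per-field settings travel together as one unit:

```javascript
// Sketch: one display-level option holds settings for many fields,
// keyed by field name, so it exports/overrides as a single unit.
function setTooltip(displayOptions, field, text) {
  const tooltips = { ...(displayOptions.tooltips || {}) };
  if (text) {
    tooltips[field] = text;
  } else {
    delete tooltips[field]; // an empty tooltip clears the entry
  }
  return { ...displayOptions, tooltips };
}

let options = {};
options = setTooltip(options, 'gdp', 'Gross domestic product');
options = setTooltip(options, 'pop', 'Population');
options = setTooltip(options, 'pop', ''); // clear it again
```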

We need to alter the Views UI field configuration form (views_ui_config_item_form) to inject our new options. Here's the code from Views Label Tooltip:

/**
 * Implements hook_form_FORM_ID_alter() for `views_ui_config_item_form`.
 */
function views_label_tooltip_form_views_ui_config_item_form_alter(&$form, &$form_state) {
  if ($form_state['type'] != 'field') return;

  $form_state['tooltips'] = views_label_tooltip_get_option($form_state['view']);
  $form['options']['element_label_tooltip'] = array(
    '#type' => 'textarea',
    '#title' => t('Tooltip'),
    '#description' => t('Place your tooltip text here. HTML allowed.'),
    '#default_value' => @$form_state['tooltips'][$form_state['id']],
    '#attributes' => array(
      'class' => array('dependent-options'),
    ),
    '#dependency' => $form['options']['element_label_colon']['#dependency'],
    '#weight' => $form['options']['element_label_colon']['#weight'] + 1,
  );
  $form['buttons']['submit']['#submit'][] = 'views_label_tooltip_form_views_ui_config_item_form_submit';
}

Note how we need to explicitly add the CSS class dependent-options to our element. Here's how the form looks after alteration: Tooltip setting in field UI

To place the new form element below an existing one, we set the #weight attribute to follow the latter. Note that we're using a custom function views_label_tooltip_get_option() to get the option's value; we'll see why below. Here's the implementation of the form submit handler:

/**
 * Submit function for `views_ui_config_item_form`.
 */
function views_label_tooltip_form_views_ui_config_item_form_submit($form, &$form_state) {
  // Set the tooltip in our display extender.
  $display_id = $form_state['values']['override']['dropdown'];
  $tooltips = $form_state['tooltips'];
  if ($form_state['values']['options']['element_label_tooltip']) {
    $tooltips[$form_state['id']] = $form_state['values']['options']['element_label_tooltip'];
  }
  else {
    unset($tooltips[$form_state['id']]);
  }
  $form_state['view']->display_handler->set_option('tooltips', $tooltips);

  // Write to cache.
  views_ui_cache_set($form_state['view']);
}
In this submit handler, we detect whether the user is overriding the field on the current display or altering the default fields. Based on this, we save the option to the current display or to the default (master) display, respectively. That's why we need a custom function to read back the options: we need to emulate the standard Views overriding logic by choosing the correct display to read the options from:

/**
 * Helper function to get tooltips setting.
 */
function views_label_tooltip_get_option($view) {
  if (isset($view->display_handler->display->display_options['fields'])) {
    // Fields are overridden: use this display's tooltips.
    $tooltips = @$view->display_handler->display->display_options['tooltips'];
  }
  else {
    // Fields are default: use default display's tooltips.
    $tooltips = @$view->display['default']->display_options['tooltips'];
  }
  return $tooltips;
}

We have now fulfilled the second storage requirement. On to rendering.

To render the tooltip on top of field labels that are generated by the Views theming system, we use JavaScript. We inject our JS code during hook_views_pre_render(). Our hook implementation calls a theme function to generate each tooltip - theme functions can be overridden, which allows theme developers to customize the tooltip markup. The hook implementation also marks each target field label with a special class that our JavaScript can recognize:

/**
 * Implements hook_views_pre_render().
 */
function views_label_tooltip_views_pre_render(&$view) {
  $tooltips = views_label_tooltip_get_option($view);
  if (empty($tooltips)) return;

  // Theme tooltip and add our label class before rendering.
  $themed = array();
  foreach ($tooltips as $field => $tooltip) {
    if (!empty($view->field[$field]) && empty($view->field[$field]->options['exclude'])) {
      $field_css = drupal_clean_css_identifier($field);
      $themed[$field_css] = theme('views_label_tooltip', array(
        'view' => $view,
        'field' => $field,
        'tooltip' => t($tooltip),
      ));

      $label_class =& $view->field[$field]->options['element_label_class'];
      if ($label_class) {
        $label_class .= ' ';
      }
      $label_class .= 'views-label-tooltip-field-' . $field_css;
    }
  }

  // Bail early if nothing to do.
  if (empty($themed)) return;

  // Add our JS files.
  drupal_add_js(drupal_get_path('module', 'views_label_tooltip') . '/js/views_label_tooltip.js');
  drupal_add_js(array(
    'viewsLabelTooltip' => array(
      $view->name => array(
        $view->current_display => array(
          'tooltips' => $themed,
        ),
      ),
    ),
  ), 'setting');
}

/**
 * Theme function for `views_label_tooltip`.
 */
function theme_views_label_tooltip(&$variables) {
  return theme('image', array(
    'path' => drupal_get_path('module', 'views_label_tooltip') . '/images/help.png',
    'attributes' => array(
      'title' => $variables['tooltip'],
      'class' => array('views-label-tooltip'),
    ),
  ));
}

Finally, the JavaScript code is responsible for attaching each themed tooltip to the appropriate field label:

(function ($) {

  Drupal.behaviors.viewsLabelTooltip = {
    attach: function(context) {
      $.each(Drupal.settings.viewsLabelTooltip, function(view, displays) {
        $.each(displays, function(display, settings) {
          $.each(settings.tooltips, function(field, tooltip) {
            $('.view-id-' + view + '.view-display-id-' + display + ' .views-label-tooltip-field-' + field + '.views-field-' + field)
              .append(' ' + tooltip);
          });
        });
      });
    }
  };

})(jQuery);

And here's the result (additionally rendered with the qTip jQuery plugin): Field labels with tooltips

Feb 28 2014

In part 1 of this series, I introduced the workflow we use for managing i18n-friendly configuration using Features. One of the cornerstones of the solution is the Features Translations module (hereafter called FT for brevity), which stores string translations in the feature.

One former limitation of FT was that it exported all translated strings in a given language/text group set, even if those strings had never been changed manually after importing them from the Drupal Translations site. For the Default text group, the resulting feature file was huge. With a deployment workflow that involves drush fr-all -y each time, deployment became unbearably slow.

Fortunately, we found some time to fix this problem by optimizing the translation sets down to only the changed strings. The general idea of the solution is extremely simple: we only want to export those string translations that differ from their original translation, or that don't exist in core or contrib. Our reference is the .po files served by Drupal Translations.
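The idea can be sketched in a few lines (JavaScript for illustration; names and data are hypothetical, FT itself is PHP): given the site's translations and the reference translations from the .po files, keep only the strings that differ from, or are absent from, the reference:

```javascript
// Sketch: diff site translations against the reference .po translations,
// keeping only strings that were changed locally or don't exist upstream.
function changedTranslations(site, reference) {
  const changed = {};
  for (const [source, translation] of Object.entries(site)) {
    if (reference[source] !== translation) {
      changed[source] = translation;
    }
  }
  return changed;
}

const site = { Home: 'Accueil', Search: 'Recherche avancée', Custom: 'Spécial' };
const reference = { Home: 'Accueil', Search: 'Recherche' };
changedTranslations(site, reference);
// Only 'Search' (locally modified) and 'Custom' (not upstream) are exported.
```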

You can try it right now:

  • Update FT to 7.x-1.0-beta3 or later.
  • Enable the Localization update module.
  • Set the local .po directory path at admin/config/regional/language/update.
  • Update your contrib module string translations using drush l10n-update, and ensure the .po files are correctly saved where you pointed.
  • Update your feature that contains translations using drush fu my-translation-feature -y.

That's it! You should notice a significant decrease in the exported string translations. Many thanks to fellow Meedani Mohammed El-Sawy for the neat implementation.

Jan 23 2014

oEmbed is a simple and popular protocol for embedding external media in web pages. It supports many types of content, including images, videos, and even "rich" content (i.e., HTML snippets). Unfortunately, the current Facebook embed mechanism does not work so well when the embed code is loaded dynamically on a page (e.g. via AJAX) instead of statically.

To see this in action, I've created a test page that shows the difference between static and dynamic embedding. The static embed automatically renders the post correctly, whereas loading the very same embed code using AJAX (here, via the Load button) does not. It is necessary to call FB.XFBML.parse() manually (here, via the Refresh button) to nudge the embed code into rendering the post.

In my Drupal app, I needed to show the Facebook embed on a CTools modal dialog. How to trigger the call to FB.XFBML.parse() when the modal opens? Reading the code for ctools_modal_render(), I found hook_ajax_render_alter(), which allows me to send arbitrary AJAX commands to be executed on the browser side upon reception of the modal's content. Here's how I used it:

// @file my_module.module

/**
 * Implements hook_ajax_render_alter().
 *
 * Add a `refreshFacebook` command to make sure FB embeds are shown correctly.
 */
function my_module_ajax_render_alter(&$commands) {
  foreach ($commands as $command) {
    if ($command['command'] == 'modal_display') {
      $commands[] = array('command' => 'refreshFacebook');
      break;
    }
  }
}

// @file my_module.js

/**
 * Facebook initialization callback.
 */
window.fbAsyncInit = function() {
  // Wait until FB object is loaded and initialized to refresh the embeds.
  FB.XFBML.parse();
};

/**
 * Command to refresh Facebook embeds.
 */
Drupal.ajax.prototype.commands.refreshFacebook = function(ajax, response, status) {
  if (typeof(FB) != 'undefined') {
    FB.XFBML.parse();
  }
};

The front-end logic goes as follows: the first time the modal dialog is opened, the embed code causes the FB script to be loaded, which in turn calls window.fbAsyncInit(), where I call FB.XFBML.parse() as needed. But on subsequent openings of the modal dialog, window.fbAsyncInit() is no longer called, so my custom AJAX command refreshFacebook() takes over in that case to do the same.

The proverbial astute reader would ask why I don't rely on refreshFacebook() to do all the work, since it gets called every time, including the first time. The problem with the first time is that this command callback gets called before the FB script has finished loading, so the FB object does not yet exist at this time. Yes, tricky.
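A generic way to express this dance (a sketch only, not the module's actual code; names are hypothetical) is a ready-queue: work is deferred until the SDK signals readiness, and runs immediately thereafter:

```javascript
// Sketch: queue callbacks until the SDK is ready; afterwards run them
// immediately. Mirrors the fbAsyncInit / AJAX-command split described above.
function makeReadyQueue() {
  let ready = false;
  const pending = [];
  return {
    run(fn) { ready ? fn() : pending.push(fn); },
    signalReady() {
      ready = true;
      while (pending.length) pending.shift()();
    },
  };
}

const queue = makeReadyQueue();
let parsed = 0;
queue.run(() => parsed++); // before init: queued (like the first modal)
queue.signalReady();       // fbAsyncInit fires: queued work runs
queue.run(() => parsed++); // later modals: runs immediately
```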

This is where things get really complicated. Trying the code above, the FB embed did show on the CTools modal, only to disappear 45 seconds later with a console message saying fb:post failed to resize in 45s! A bit of googling revealed that this is a known behaviour - but it seems no one had yet analyzed the problem down to its root causes. So I pulled up the debug version of Facebook's JavaScript SDK and proceeded to decipher the code there. It turns out that the FB embed code creates an IFRAME with a default width and height of 1000px, and it expects the rendering process of the actual post to resize its dimensions according to the post's display area. The IFRAME handler kicks off a 45-second timeout, at the end of which it hides the post and logs the message above. Only when a resize event is received does the IFRAME handler cancel the timeout.

The only way I was able to trigger this resize event was by re-initializing the FB script each time the CTools modal is loaded. So my JavaScript code became:

// @file my_module.js

/**
 * Facebook initialization callback.
 */
window.fbAsyncInit = function() {
  // Wait until FB object is loaded and initialized to refresh the embeds.
  FB.XFBML.parse();
  FB.init({ xfbml: true }); // added this to avoid "fb:post failed to resize in 45s" message
};

This will cause some warnings in the console, like FB.init has already been called - this could indicate a problem. So far, I haven't found a problem with this approach, but I welcome your suggestions.

I've spent the better part of 3 days debugging this unexpected behaviour, so I hope this helps someone! I do hope Facebook makes their embed code more robust though - for example, the Twitter embed behaves much better.

Nov 13 2013

One of the technologies that made a lasting impression on me, as a young programmer, was Microsoft OLE. To give my own applications the ability to embed documents created in other applications, and vice-versa, was mind-blowing! But the even bigger thrill came when I understood how the API had been architected to achieve this.

Then came the Web and we had to rebuild everything from scratch - not a bad thing really, since we're still very new at this coding thing :-)

Document sharing and embedding is one of the cornerstones of the modern Web. That's why I was happy to accept a commission to write a new module that allows views to be shared - thanks herd45! The result is Views Share, and I am proud to release version 1.0 today. Here are some interesting bits from the code:

Views Share dialog in action

The module was originally inspired by the architecture of Share This Thing, which accomplishes a similar function for nodes. The functionality is packaged as a Views area handler, which allows adding a link to the view's header or footer - the link that opens up the sharing dialog. The dialog shows the view's original URL for sharing, as well as an embed code for the view. The embed code is an IFRAME tag pointing to a special URL that the module catches in order to render the embedded view.

The embedded view needs to be rendered in an undecorated page: no sidebars, header, footer, or any other Drupal component, except for the view itself. To do this, a new theme function is introduced. This theme renders a full page, complete with HEAD and BODY, and only incorporates the Drupal parts that are absolutely needed. Here's the code for the theme preprocessor and its template file:

function template_preprocess_views_share(&$variables) {
  global $base_url, $language;

  $view = $variables['view'];
  $variables['content'] = /* render the view here */ NULL;
  $variables['title'] = $view->get_title();
  $variables['base_url'] = $base_url;
  $variables['language'] = $language;
  $variables['language_dir'] = $language->direction == LANGUAGE_RTL ? 'rtl' : 'ltr';
  $variables['head']     = drupal_get_html_head();
  $variables['styles']   = drupal_get_css();
  $variables['scripts']  = drupal_get_js();
}

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">
<html lang="<?php print $language->language; ?>" dir="<?php print $language_dir; ?>">
<head>
  <?php print $head; ?>
  <title><?php print $title; ?></title>
  <base href="<?php print $base_url; ?>">
  <?php print $styles; ?>
  <?php print $scripts; ?>
</head>
<body class="views-share">

  <?php print $content; ?>

</body>
</html>


To prevent Drupal from rendering its own fully decorated pages, I call drupal_exit() right after printing the theme above.

One of the requirements of the module was to support embed previewing, based on the way Google Maps does the same: Previewing the view embedding

This was fun to implement, and there are a couple of interesting JavaScript tricks:

Opening a link in a new window

To emulate the Google Maps previewing model (and to accommodate an arbitrary IFRAME size), I needed to open the preview dialog in a different browser window. By listening to a link's click event, I was able to make it open a new window that I specified:

$('a.views-share-preview').live('click', function(e) {
    e.preventDefault();

    // Open popup in a new window center screen and listen to its messages.
    var width = 1024,
        height = 768,
        left  = ($(window).width()-width)/2,
        top   = ($(window).height()-height)/2,
        popup = window.open(this.href, 'views-share-preview',
          'width=' + width + ',height=' + height + ',left=' + left + ',top=' + top);
});

Updating share dialog embed code with values from the preview window

Now when the user adjusts the embedding values in the preview window, I need to update the embed code in the underlying share dialog. window.postMessage is the API that allows browser windows to communicate. In my case, the preview window sends an update message to its opener:

      var a = $('<a/>', { href: location.href })[0];
      $('#edit-embed-width, #edit-embed-height').change(function() {
        var embedCode = Drupal.settings.viewsSharePreview.embedCode
          .replace('%width', $('#edit-embed-width').val())
          .replace('%height', $('#edit-embed-height').val());
        $('#edit-embed-share').val(embedCode); // update embed code
        $('iframe').replaceWith(embedCode); // update preview
        window.opener.postMessage({embedCode: embedCode}, a.baseURI); // update parent window
      });

The opener window (which contains the share dialog) receives and processes the new embed code:

    var eventMethod = window.addEventListener ? "addEventListener" : "attachEvent";
    var eventWindow = window[eventMethod];
    var eventMessage = eventMethod == "attachEvent" ? "onmessage" : "message";
    eventWindow(eventMessage, function(event) {
      $('#edit-embed-share').val(event.data.embedCode); // update embed code with posted value
    }, false);

Another fun bit was adding support for oEmbed. This was done in two parts: first, add the oEmbed discovery tags on the regular view page, and then respond to oEmbed calls on our endpoint.


The oEmbed standard specifies that a page can make its oEmbed content discoverable, by adding LINK tags to its HEAD. This is what our Views area handler does in case the oEmbed option is chosen. The endpoints specified in the LINK tags are also handled by our module, just like in the regular embed case.
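The discovery tags are plain LINK elements; here's a sketch of what gets emitted, in JavaScript for illustration (the module builds equivalent LINK tags in PHP; the endpoint URL below is hypothetical):

```javascript
// Sketch: build oEmbed discovery LINK tags for a page, following the
// oEmbed spec's discovery types (application/json+oembed, text/xml+oembed).
function oembedDiscoveryTags(pageUrl, endpoint) {
  const target = encodeURIComponent(pageUrl);
  const types = { json: 'application/json+oembed', xml: 'text/xml+oembed' };
  return Object.entries(types).map(([format, type]) =>
    `<link rel="alternate" type="${type}" ` +
    `href="${endpoint}?url=${target}&format=${format}" />`
  );
}

oembedDiscoveryTags('http://example.com/my-view',
                    'http://example.com/views-share/oembed');
```

A consumer that fetches the page finds these tags in HEAD and then calls the endpoint with the target URL.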


An oEmbed structure is created by the module in response to an oEmbed request. Its most important payload is the embed code that can be used by the oEmbed consumer. The consumer can ask for the structure in JSON or XML format. To render JSON, Drupal provides the drupal_json_output() function; to convert an object to XML, I found a simple class online that does the job:

// @see
if (!class_exists('ObjectToXML')) {
  class ObjectToXML {
    private $dom;

    public function __construct($obj) {
      $this->dom = new DOMDocument("1.0", "UTF8");
      $this->dom->xmlStandalone = true;
      $root = $this->dom->createElement(get_class($obj));
      foreach ($obj as $key => $value) {
        $node = $this->createNode($key, $value);
        if ($node != NULL) $root->appendChild($node);
      }
      $this->dom->appendChild($root);
    }

    private function createNode($key, $value) {
      $node = NULL;
      if (is_string($value) || is_numeric($value) || is_bool($value) || $value == NULL) {
        if ($value == NULL) $node = $this->dom->createElement($key);
        else $node = $this->dom->createElement($key, (string) $value);
      }
      else {
        $node = $this->dom->createElement($key);
        if ($value != NULL) {
          foreach ($value as $key => $value) {
            $sub = $this->createNode($key, $value);
            if ($sub != NULL) $node->appendChild($sub);
          }
        }
      }
      return $node;
    }

    public function __toString() {
      return $this->dom->saveXML();
    }
  }
}

You can see a demo of Views Share in action on my Feeds+Views demo site. I had lots of fun building that module, now go on and use it to make your views shareable!

Oct 23 2013
I've often wished document editing were more user-friendly in Drupal. So I came up with this very half-baked idea to embed the viewing and editing of a Google Doc in a node field. Following is a proof of concept. The code is embedded in this very field to selectively render the viewing or editing mode of the document (Note the Edit Google Doc link below):
<?php if (arg(0) == 'node' && arg(2) == 'gdoc') { // Editing ?>
  <!-- IFRAME with the document's editing URL goes here -->
  Cancel Editing
<?php } else { // Viewing ?>
  <!-- IFRAME with the document's published viewing URL goes here -->
  Edit Google Doc
<?php } ?>
To get the proper editing link, I used the one given to me by the document when I clicked "Share". To get the viewing link, I had to publish the document to the Web and used the link in that dialog.

This proof of concept could be turned into a module that packages this functionality as a new Field API type, responding to viewing or editing modes by embedding the corresponding URL. For tighter integration with Drupal, several ideas come to mind:

  • Create a new document when creating the node.
  • Use the Google API to retrieve document text for indexing.
  • Inject a custom CSS into the Google Doc iframe to style it with the Drupal theme.
That's it! Enjoy.
Aug 25 2013

I've been maintaining a growing number of modules over the years. I am starting to document my observations, habits, and challenges, in the hope they'd be useful to other Drupal coders.

I devote time and energy to the development of a module. Since I consider myself a software craftsman, each module is a creation of mine - an artifact that I try to imbue with functional and aesthetic values. I also see each module as having a life of some sort, since it will be utilized on (hopefully) many sites, go through evolutions, and have a lifetime until it is no longer needed and used.

All this just to say that I develop an emotional relationship toward each module, and the moods of these relationships are reflected in my habits.

Handling issues is the daily activity of every maintainer. Before we get more abstract, let's see some simple practices I follow here:

Documenting the commit

I make it a point to link specific commits to the issue that motivated them. So for issue #2047473 "text shows incredibly small", I committed a fix whose comment says:

Remove non-portable CSS #2047473

Drupal.org cleverly links the issue number to the issue page above. However, there's no mention of this commit on the issue page. That's why I add a comment - to the issue, this time - typically saying

Committed a fix on some branch.

and I link this sentence to the actual git commit page. It would be great if drupal.org included an automatic list of referencing commits on the issue page, just like Trac (and I'm sure other SCM sites) does.

Documenting the fix

When marking an issue as fixed, I make sure to explain how the issue was fixed, to set the expectation of the reporter and other interested users. They can then test the fix more easily, and the probability of getting useful feedback is higher.

For example:

I basically removed the padding declaration for the input elements.

In the case of responding to feature requests, I include in the fix comment a short description of the new functionality and UI that were introduced, and describe a short recipe to test them. Often, when dealing with 3rd party modules, it's not clear where in the Drupal UI the functionality is made accessible. That's what I am trying to avoid here with the description / recipe - I am sure that screenshots would also be useful.

I've also seen maintainers attach their own committed patches to the issue - I find this clarifying to quickly assess the fix (from the perspective of an interested user). Maybe allowing the Drupal git server to embed the commit (like GitHub does) would solve this more cleanly.

In future posts, I plan to cover other aspects of module maintainership:

  • Designing the module page
  • Making releases
  • Working with co-maintainers
  • Integrating with Drupal and other modules
  • Submitting patches to other modules
  • Designing module UI
  • Handling rising complexity
  • Providing professional services around your modules
  • ... and whatever else may come!
Jun 17 2013

Now that Twitter 1.1 and Feeds are buddies, time to move to other data sources. Next up: Facebook. Using trusty Feeds and friends, I was able to ingest my own Facebook home feed. Here's how to replicate this:

For the impatient, attached is a feature that should get you set up quickly. You'll need the following modules:

  • Feeds latest HEAD from 7.x-2.x branch.
  • Feeds JSONPath Parser version 7.x-1.0-beta2 - make sure to install the needed JSONPath library as per the instructions on the module page.
  • Feeds OAuth latest HEAD from 7.x-1.x branch.
  • php-proauth library that you install in sites/all/libraries as such:

git clone

The idea behind the setup is to create a Feeds pipeline that:

  • Fetches the given resource URL (a Facebook Graph API URL) using the Feeds OAuth 2.0 Fetcher. This fetcher checks for an OAuth 2.0 access token for the current user and performs authorization if one is not found. It alerts the user to missing access tokens during feed creation.
  • Parses the result using Feeds JSONPath Parser, since Facebook Graph API uses JSON.
  • Maps the result to nodes using the standard Node Processor.
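To make the parsing step concrete, here's a toy JSONPath-style extractor (JavaScript for illustration only; the real setup uses the JSONPath PHP library via Feeds JSONPath Parser) pulling values out of a Graph-API-shaped payload with dotted paths:

```javascript
// Toy sketch of JSONPath-style extraction over a Facebook-Graph-shaped
// payload: 'data.*.message' collects the message of every item in data[].
function extract(payload, path) {
  let current = [payload];
  for (const part of path.split('.')) {
    current = current.flatMap(value => {
      if (value === null || typeof value !== 'object') return [];
      if (part === '*') return Object.values(value);
      return part in value ? [value[part]] : [];
    });
  }
  return current;
}

const payload = {
  data: [
    { id: '1', from: { name: 'Alice' }, message: 'Hello' },
    { id: '2', from: { name: 'Bob' }, message: 'World' },
  ],
};
extract(payload, 'data.*.message');   // ['Hello', 'World']
extract(payload, 'data.*.from.name'); // ['Alice', 'Bob']
```

Each extracted path is then mapped to a node field by the processor.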

Create a new Facebook application. You need to add two specific settings to it:

  • Basic > Website with Facebook Login > Site URL: enter the callback URL that is reported in the Feed importer's Fetcher > HTTPS OAuth 2.0 Fetcher Settings > Site identifier description.
  • Permissions > Extended Permissions: add the read_stream permission.
  • You will need to copy the App ID and App Secret strings of the Facebook app to the Fetcher > HTTPS OAuth 2.0 Fetcher Settings > Consumer key and Consumer secret settings, respectively.
  • Set the fetcher's Method to GET.
  • Then create a new node of type Facebook feed with the Graph API URL (e.g. your home feed). Make a note of this node's nid.
  • Finally, edit the facebook view included in this feature, such that the filter Feeds item: Owner feed nid refers to the nid noted above.

That's it. This should cure your feed indigestions!

Jun 13 2013

Update: This post now contains a feature that you can import in D7 to see the Twitter feed in action.

The new Twitter 1.1 API kicked in recently, which meant a new cycle of maintenance for anyone consuming their data programmatically. My own Feeds + Views demo site streams #drupal, using Feeds and complementary modules. I had to make a few changes to the importer to adapt to the new API:

  • Authorization using OAuth
  • Parsing JSON instead of XML

The new Twitter 1.1 API requires OAuth authentication/authorization for every request. Fortunately, I had already written Feeds OAuth to handle feeds that require OAuth, so I just had to plug this in. Well, not "just", because it took choosing among the several authorization options that Twitter provides, and fixing a couple of bugs in the module itself.

Twitter provides several options for authorization, depending on the needs of the consumer (not listed on this page is Application-only authentication). I ended up choosing the method that required the least work on the Feeds OAuth module, namely obtaining OAuth tokens from Twitter as a pre-processing step. To do this, I manually added an entry to the feeds_oauth_access_tokens table, with the tokens that were handed to me by Twitter on my application page. This way, Feeds OAuth would not have to ask me to log in to Twitter in order to make the API call. Obviously, this is a temporary hack and I will work on enhancing the module's support for different authentication options.

The Twitter 1.1 API only returns JSON results. To parse JSON instead of RSS/Atom, I used Feeds JSONPath Parser. It does the job as advertised; the only challenge here was to retrieve the tweet URL for each result. The Twitter search API itself does not return tweet URLs, for some unfathomable reason. My setup needs the tweet URL to pass it to oEmbed, which renders the tweet on the view. Tweet URLs are of the form https://twitter.com/<user_screen_name>/status/<tweet_id>.

To get the URL, I had to resort to coding. Here's how I did it:

  • First convince Feeds JSONPath Parser to retrieve the user's screen name. To do this, I had to map it to some field - I chose the node body, although a better solution would be to use a NULL target field, just to fool the parser into returning the value.

  • In a custom module, I created a new programmatic source field called "Tweet URL" that synthesizes the URL:

/**
 * Implements hook_feeds_parser_sources_alter().
 */
function demo_feeds_parser_sources_alter(&$sources, $content_type) {
  $sources['tweet_url'] = array(
    'name' => t('Tweet URL'),
    'description' => t('The URL of a tweet.'),
    'callback' => 'demo_feeds_tweet_url',
  );
}

/**
 * Populates the "tweet_url" field for each result.
 */
function demo_feeds_tweet_url(FeedsSource $source, FeedsParserResult $result, $key) {
  $item = $result->currentItem();
  // jsonpath_parser:2 corresponds to user screen name in my importer.
  // jsonpath_parser:0 corresponds to tweet ID in my importer.
  return 'https://twitter.com/' . $item['jsonpath_parser:2'] . '/status/' . $item['jsonpath_parser:0'];
}
  • Finally, I mapped this new source field to my target field, the URL of a Link that renders using oEmbed.

This little exercise took a good couple of hours - and that's just for a demo. API changes are always painful, but at least the Feeds OAuth module got some love and fixes in the process.

To reproduce this demo:

  • Enable the module twitter_feed_custom.
  • Copy the ping.php_.txt file to your Drupal root folder and rename it to ping.php. Also edit the file to point the DRUPAL_ROOT definition to your actual Drupal root folder.
  • Copy the Consumer key and Consumer secret strings of the Twitter app to the Fetcher > HTTPS OAuth Fetcher Settings > Consumer key and Consumer secret settings, respectively.
  • Create a new node of type Twitter feed with your query URL (e.g. #drupal). Make a note of this node's nid.
  • Edit the (badly-named) hashdrupal view included in this feature, such that the filter Feeds item: Owner feed nid refers to the nid noted above.
Jun 12 2013

I created a bare-bones content filter to add musical notation to Drupal content, using the VexFlow / VexTab music engraving library. Here's a little sample, also showing my fork of the original library to handle basic Arabic musical notation (quarter tones and special scales):

tabstave notation=true tablature=false clef=treble key=Rast
notes C/4 D/4 E%@/4 F/4 | G/4 A/4 B%@/4 C/5

Feel free to fiddle with the music snippet above. You can find a tutorial for the syntax here. I hope to work more on this library to allow MIDI playback and other goodies. I love JavaScript!

Jun 02 2013

Last week, I described a technique to query and display nodes in all available translations. This worked well enough, but a performance-minded reader pointed out that the query generated by Views (that includes N self-joins for N enabled languages) would not scale to a large number of nodes.

My usual approach when implementing new ideas is to ensure the logic works first, and only handle optimization when needed. It's a strategy that has worked well for me in the past. So I set out to test this hypothesis, and to optimize the query if it was needed. Here's what happened:

The first obstacle was to generate a large set of nodes and their translations. Devel Generate, the Devel sub-module that generates Drupal objects for development purposes, does not support content translation at the time of writing. I submitted a D7 patch to the 2-year-old feature request to achieve this. I tested it with 10K nodes, and it seems to work well. Your review is appreciated!

Having generated 10K nodes and their translations to Arabic and French (30K nodes in total), I cloned the Proverbs view from last time to query and display this content. The result was unequivocal: the view page never finished loading! Clearly, the Views-generated query was not scaling, and for good reason: three SQL JOINs of 30,000 records each are a performance black hole. Optimization was needed.

My goal for optimizing the query was to retain all the advantages that Views offers in terms of theming query results, integrating into Drupal pages, etc. - these are indispensable features when creating real-world applications. In short, I wanted to transparently override the Views-generated query. To do so, I needed to:

  • Remove the performance-killing JOINs from the query
  • Perform an optimized query to find node translations
  • Re-insert the results from the optimized query into the Views results, to allow it to proceed with the display

I will explain the important parts of the code below.

Remove the performance-killing JOINs from the query

The function demo_i18n_views_query_alter() removes from the Views query object all references to the SQL JOINs, which are called "relationships" in Views parlance. Views core invokes this hook just before converting the query object into an SQL statement. The resulting query that Views will execute looks like this:

SELECT node.nid AS nid, node.created AS node_created
FROM {node} node
WHERE (( (node.status = '1') AND (node.type IN  ('multilingual_node')) AND (node.tnid = node.nid OR node.tnid = 0) ))
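A minimal sketch of the alter hook that strips these relationships could look like the following. This is illustrative only: the view name and the exact internals of the Views query object (table_queue) are assumptions, and a real implementation would also need to clean up fields that reference the dropped aliases.

```php
/**
 * Implements hook_views_query_alter().
 *
 * Sketch only: removes the per-language self-joins ("relationships")
 * from the query object before Views compiles it to SQL.
 */
function demo_i18n_views_query_alter(&$view, &$query) {
  if ($view->name != 'proverbs') {
    return;
  }
  foreach ($query->table_queue as $alias => $table_info) {
    // Keep the base {node} table; drop the translation self-joins.
    if ($alias != $view->base_table) {
      unset($query->table_queue[$alias]);
    }
  }
}
```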

Perform an optimized query to find node translations

The query as modified above will only return nodes that are translation sources. It's now up to me to query the node translations, by waiting for Views to execute the modified query, and then gathering the nids to find their translations (as stored in {node}.tnid). This is a simple query using the SQL IN operator. I call this hand-made query in the demo_i18n_views_post_execute() function, which is invoked by Views after it executes its own query.

Re-insert the results from the optimized query into the Views results

The challenge with the new query is that it returns one node translation per row, as opposed to the original query which returned all translations on the same row. In addition, the results need to be copied into the view::result object, with the right key names that Views expects. In order to find the right key names, I first displayed the results from the unmodified Views query and noted the result keys. With this information, I then proceeded to loop over the optimized query results, and find the corresponding entry in the Views result array that would receive them. This loop is also implemented in the demo_i18n_views_post_execute() function.
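Putting the two steps together, the post-execute hook might be sketched as follows. The alias-to-language mapping mirrors the field aliases seen in the original Views-generated SQL; the exact key names and fields copied are assumptions that depend on the actual view.

```php
/**
 * Implements hook_views_post_execute().
 *
 * Sketch only: fetches all translations of the returned source nodes
 * in a single indexed query, then folds them back into the Views
 * result rows under the key names Views expects.
 */
function demo_i18n_views_post_execute(&$view) {
  // Gather the nids of the translation source nodes.
  $nids = array();
  $index = array();
  foreach ($view->result as $i => $row) {
    $nids[] = $row->nid;
    $index[$row->nid] = $i;
  }
  if (empty($nids)) {
    return;
  }
  // One indexed query instead of N self-joins.
  $translations = db_query(
    'SELECT nid, tnid, language, title FROM {node} WHERE tnid IN (:nids)',
    array(':nids' => $nids)
  );
  // Assumed mapping from language to the field aliases of the
  // unmodified Views query.
  $aliases = array('ar' => 'node_node', 'en' => 'node_node_1', 'fr' => 'node_node_2');
  foreach ($translations as $t) {
    $row = &$view->result[$index[$t->tnid]];
    $alias = $aliases[$t->language];
    $row->{$alias . '_nid'} = $t->nid;
    $row->{$alias . '_title'} = $t->title;
    $row->{$alias . '_language'} = $t->language;
  }
  unset($row);
}
```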

The results were impressive! The view page loaded in very acceptable time (ApacheBench reports a mean time of ~1350ms, against ~650ms in the case of a view with just 4 nodes), and Views happily themed the translated nodes as if it had queried them itself. You can see this code in action on my i18n demo site.

The approach of hand-crafting Views queries has been on my mind for a long time, and I'm glad I took the first step. So far, I am not sure that a generic module can be created out of this, mainly due to the necessity to transform the result set after the optimized query is run. In any case, I'll be applying this technique in my projects!

May 25 2013

Here's a little puzzle: display a table of nodes, each row containing the same content in all available translations.

A couple of days ago, someone asked me if I had solved it. I hadn't thought about the puzzle in a while, but I would have felt bad answering no. So, with 3 years of i18n work under my belt, I decided to give it another go. I did find a solution this time, but it's not optimal, and it required coding. You can find a demo of the solution online.

Demo of solution

The basic idea is to select the nodes in their source language, then relate each node to all its translations. To do this, the view is built by filtering on Content translation: Source translation, then adding one Content translation: Translations relationship per language.

Nodes and translations

Now this view works pretty well, except for nodes that are not translated: although they are picked up by the SQL statement, the related nodes in each language are empty, since the tnid is not set for untranslated nodes. That's where I had to write a new join handler that not only joins the source language node to its translation, but also joins it to itself in case there are no translations. The following code silently replaces the standard join handler for Content translation: Translations with this new one:
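A minimal sketch of such a join handler follows. The class name is illustrative, the handler would still need to be registered via hook_views_data_alter(), and the handling of extra conditions is simplified; treat this as an outline rather than the actual implementation.

```php
/**
 * Sketch of a Views join handler that links a source node to its
 * translations, falling back to a self-join for untranslated nodes
 * (tnid = 0). Class name and registration details are illustrative.
 */
class demo_i18n_join_node_translation extends views_join {
  function build_join($select_query, $table, $view_query) {
    $left = $view_query->get_table_info($this->left_table);
    $alias = $table['alias'];
    // Match the translation set, or the node itself when tnid = 0.
    $condition = "({$left['alias']}.nid = {$alias}.tnid"
      . " OR ({$alias}.tnid = 0 AND {$left['alias']}.nid = {$alias}.nid))";
    // Carry over extra conditions, e.g. the per-language restriction.
    foreach ((array) $this->extra as $info) {
      $condition .= " AND {$alias}.{$info['field']} = '{$info['value']}'";
    }
    $select_query->addJoin($this->type, $this->table, $alias, $condition);
  }
}
```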

The resulting query will look like the following - note the JOIN clauses:

SELECT node_node_1.title AS node_node_1_title, 
            node_node_1.nid AS node_node_1_nid, 
            node_node_1.language AS node_node_1_language, 
            node_node_2.title AS node_node_2_title, 
            node_node_2.nid AS node_node_2_nid, 
            node_node_2.language AS node_node_2_language, 
            node_node.title AS node_node_title, 
            node_node.nid AS node_node_nid, 
            node_node.language AS node_node_language, 
            node.created AS node_created
FROM {node} node
LEFT JOIN {node} node_node 
           ON (node.nid = node_node.tnid OR (node_node.tnid = 0 AND node.nid = node_node.nid)) AND node_node.language = 'ar'
LEFT JOIN {node} node_node_1 
           ON (node.nid = node_node_1.tnid OR (node_node_1.tnid = 0 AND node.nid = node_node_1.nid)) AND node_node_1.language = 'en'
LEFT JOIN {node} node_node_2 
           ON (node.nid = node_node_2.tnid OR (node_node_2.tnid = 0 AND node.nid = node_node_2.nid)) AND node_node_2.language = 'fr'
WHERE (( (node.status = '1') AND (node.type IN  ('proverb')) AND (node.tnid = node.nid OR node.tnid = 0) ))

The careful reader will have noticed that there's one extra database JOIN in my solution: the one that joins the source language node to itself. If you have a suggestion to remove it, please let me know!

May 06 2013

I needed to build a regular expression filter for a view I'm working on, and I'm sharing the code here because it might be helpful to other people as well. My specific case is a blocks administration VBO: I'd like to let the administrator filter on block body content, entering a regular expression as the filter value.

I first declare the relevant field in the Views schema for the block table:

/**
 * @file
 * Views integration for custom blocks.
 */

/**
 * Implements hook_views_data().
 */
function views_block_views_data() {
  // Body.
  $data['block_custom']['body'] = array(
    'title' => t('Body'),
    'help' => t('The block body.'),
    'field' => array(
      'handler' => 'views_handler_field',
    ),
    'filter' => array(
      'handler' => 'views_handler_filter_regex',
    ),
  );
  return $data;
}

Now adding the regex handler is a matter of implementing the views_handler_filter_regex class. I want my handler to support MySQL, PostgreSQL, and any other database system that supports regular expressions. Here's some minimal code to achieve this:

/**
 * @file
 * Regex filter handler for Views.
 */
class views_handler_filter_regex extends views_handler_filter {
  var $always_multiple = TRUE;

  function operator_options() {
    // Return placeholders that will be expanded at query creation time.
    return array(
      'match' => t('Matches regex'),
      'nomatch' => t('Does not match regex'),
    );
  }

  function admin_summary() {
    if (!empty($this->options['exposed'])) {
      return t('exposed');
    }
    return parent::admin_summary();
  }

  function value_form(&$form, &$form_state) {
    $form['value'] = array(
      '#type' => 'textfield',
      '#title' => t('Value'),
      '#size' => 30,
      '#default_value' => $this->value,
    );
  }

  function query() {
    // Find the actual regex operators depending on database type.
    $db_type = Database::getConnection()->databaseType();
    switch ($db_type) {
      case 'mysql':
        $match = 'REGEXP';
        $nomatch = 'NOT REGEXP';
        break;

      case 'pgsql':
        $match = '~*'; // Case-insensitive match.
        $nomatch = '!~*';
        break;

      default:
        // Allow other modules to define these operators.
        $operators = &drupal_static(__METHOD__);
        if (empty($operators)) {
          // hook_views_regex_operators($db_type)
          // @param $db_type - the type of database engine being used ('mysql' and 'pgsql' will not be called).
          // @return array('match' => operator for matching, 'not match' => operator for negative matching);
          $operators = module_invoke_all('views_regex_operators', $db_type);
        }
        if (empty($operators)) {
          watchdog('views_regex', 'No regex operators found for database type %type. Using operator LIKE instead.', array('%type' => $db_type));
          $match = 'LIKE';
          $nomatch = 'NOT LIKE';
        }
        else {
          $match = $operators['match'];
          $nomatch = $operators['not match'];
        }
        break;
    }
    // Replace the placeholder with the actual operator, then let the
    // base class build the WHERE clause.
    $this->operator = $this->operator === 'match' ? $match : $nomatch;
    parent::query();
  }
}
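Other database drivers can then supply their operators by implementing the hook invoked in the default branch. For example, a hypothetical implementation for SQLite (which only exposes a REGEXP operator once a user-defined regexp() function has been registered on the connection) might look like this; the module name is illustrative:

```php
/**
 * Implements hook_views_regex_operators().
 *
 * Illustrative only: supplies regex operators for a SQLite connection
 * that has a regexp() function registered.
 */
function mymodule_views_regex_operators($db_type) {
  if ($db_type == 'sqlite') {
    return array(
      'match' => 'REGEXP',
      'not match' => 'NOT REGEXP',
    );
  }
  return array();
}
```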

That's it! Short and sweet.

Apr 08 2013

Is there a way to use Drupal Queue API to sequence the execution of tasks, like in a pipeline?

My use case is pretty simple: I have a number of tasks executing in the background; their order doesn't matter because they are self-contained. However, I'd like a single task (of a different type) to execute after all the others are done.

How would this be done?

Mar 19 2013

This week, I'll describe a particularly challenging component I had to deal with: inoffensive-sounding menu items. Should be easy, right? Well, it wasn't.

I won't get into the basics of creating multilingual menu items here. There is good documentation on how to get that set up using the i18n submodule i18n_menu. To provide a context for this post, I'll just mention that we are manually creating menu items, through the admin UI. We have two types of menu items:

  • Ones that should appear in both languages, localized, and pointing to the same place. Those are created with the language set to "Language neutral".
  • Ones that should only appear in a specific language. We set that language explicitly on the menu item's form.

Internally, i18n_menu modifies the core menu_links and menu_custom tables by adding, among other attributes, a language column to save the above info. In addition, the module creates a new text group called menu to save menu translations. In this group, each translation is identified by a menu name and mlid that refer back to the original menu item.

The first problem occurred even before we introduced Features into the mix. The decision to use the mlid as part of the translation identifier means that, across site stages and instances, menu items must have the same database primary key in order to be correctly translated. This design decision introduced a lot of instability into our configuration, for example preventing us from creating new menu items on the fly for testing or customization purposes. In essence, we would have needed to stick to an install profile approach to manage the site configuration - a desirable goal in itself, but one we didn't pursue. In the meantime, our menu item string translations were only reliably showing on the machine where translation occurred, but not elsewhere. This clearly had to be fixed.

We found the module Entity menu links to solve one half of this problem. This module creates a uuid for each menu item, ensuring that uuid remains unchanged throughout the lifetime of the menu item. This module also modifies the core menu_links table by adding to it a uuid attribute.

Still, the string translations were being saved with the mlid. How to convince i18n_menu to use the uuid instead? That's where we had to write some code. The i18n architecture uses the concept of i18n objects that correspond to Drupal site components. We used the following code to override the i18n object info for menu items to use the uuid as a key:

/**
 * Implements hook_i18n_object_info_alter().
 */
function checkdesk_core_i18n_object_info_alter(&$info) {
  // Use UUID field to identify menu_link.
  $info['menu_link']['key'] = 'uuid';
  $info['menu_link']['load callback'] = 'checkdesk_core_i18n_menu_link_uuid_load';
}

/**
 * Callback to load a menu_link by UUID.
 */
function checkdesk_core_i18n_menu_link_uuid_load($uuid) {
  if (!empty($uuid)) {
    $query = db_select('menu_links', 'ml');
    $query->leftJoin('menu_router', 'm', 'm.path = ml.router_path');
    $query->fields('ml');
    $query->fields('m');
    // Weight should be taken from {menu_links}, not {menu_router}.
    $query->addField('ml', 'weight', 'link_weight');
    $query->condition('ml.uuid', $uuid);
    if ($item = $query->execute()->fetchAssoc()) {
      $item['weight'] = $item['link_weight'];
      return $item;
    }
  }
  return FALSE;
}

Unfortunately, the strings were still showing up with the mlid on the translation interface. After many a WTF incantation, we found that i18n_menu was not honouring the i18n object key attribute while creating the string identifiers. We submitted a patch to fix this.

At this point, we had successfully modified the indexing mechanism of i18n_menu to use uuids, as shown below. Our testing with manual .po files revealed that regardless of mlid differences, menu item translations were being successfully moved from one instance to another.

i18n_menu with UUID identifiers

Now menu item translations were made reliable across instances, but they still weren't being saved in a feature. The core Features module does support menu items, but it does not export the additional attributes we introduced above, namely language and uuid. We also need to export customized because i18n will not translate menu items that are not marked as customized.

We submitted a simple patch to Features that allows a hook_query_TAG_alter to extend the relevant query and return extra fields to be exported. Our implementation of this hook looks like this:

/**
 * Implements hook_query_TAG_alter() for `features_menu_link`.
 */
function checkdesk_core_query_features_menu_link_alter($query) {
  // Add missing attributes for translation.
  $query->fields('menu_links', array('uuid', 'language', 'customized'));
}

After this change, the exported menu links look like this (note the last 3 attributes on each entry):

/**
 * @file
 * Exported menu links.
 */

/**
 * Implements hook_menu_default_menu_links().
 */
function checkdesk_core_feature_menu_default_menu_links() {
  $menu_links = array();

  // Exported menu link: main-menu:node/add/discussion
  $menu_links['main-menu:node/add/discussion'] = array(
    'menu_name' => 'main-menu',
    'link_path' => 'node/add/discussion',
    'router_path' => 'node/add/discussion',
    'link_title' => 'Create story',
    'options' => array(
      'attributes' => array(
        'title' => '',
      ),
      'alter' => TRUE,
    ),
    'module' => 'menu',
    'hidden' => '0',
    'external' => '0',
    'has_children' => '0',
    'expanded' => '0',
    'weight' => '-47',
    'uuid' => 'edc54df9-4aa8-bf84-dd89-ca0a351af23b',
    'language' => 'und',
    'customized' => '1',
  );
  // Exported menu link: main-menu:node/add/media
  $menu_links['main-menu:node/add/media'] = array(
    'menu_name' => 'main-menu',
    'link_path' => 'node/add/media',
    'router_path' => 'node/add/media',
    'link_title' => 'Submit report',
    'options' => array(
      'attributes' => array(
        'title' => '',
      ),
      'alter' => TRUE,
    ),
    'module' => 'menu',
    'hidden' => '0',
    'external' => '0',
    'has_children' => '0',
    'expanded' => '0',
    'weight' => '-49',
    'uuid' => '0bc3af5d-28a8-c864-bd93-f17d8bea2366',
    'language' => 'und',
    'customized' => '1',
  );

  return $menu_links;
}

With these patches, we were able to reliably persist multilingual menu links using Features. The menu item translations are saved in the translations component of the feature, as described in part 1 of this series. They look like this:

/**
 * @file
 * Exported menu item translations.
 */

/**
 * Implements hook_translations_defaults().
 */
function checkdesk_core_feature_translations_defaults() {
  $translations = array();
  $translations['ar:menu']['a6b48d33d248c146aa8193cb6f618651'] = array(
    'source' => 'Create story',
    'context' => 'item:edc54df9-4aa8-bf84-dd89-ca0a351af23b:title',
    'location' => 'menu:item:edc54df9-4aa8-bf84-dd89-ca0a351af23b:title',
    'translation' => 'أنشئ خبر',
    'plid' => '0',
    'plural' => '0',
  );
  $translations['ar:menu']['8578b45ff528c4333ef4034b3ca1fe07'] = array(
    'source' => 'Submit report',
    'context' => 'item:0bc3af5d-28a8-c864-bd93-f17d8bea2366:title',
    'location' => 'menu:item:0bc3af5d-28a8-c864-bd93-f17d8bea2366:title',
    'translation' => 'أضف تقرير',
    'plid' => '0',
    'plural' => '0',
  );
  return $translations;
}

Now I need your help to review and support (and possibly enhance) the patches submitted to i18n and Features.

As you know, the more people show interest in a patch, the more likely it will go in quickly. Your help is appreciated!

Next time, I'll describe other components of the multilingual puzzle: taxonomy terms, static pages, etc.

Mar 05 2013

In my role as development team leader, I am responsible for the application architecture that allows other team members to focus on building functionality with minimum friction and rework. As such, one of my biggest tasks is to ensure that new features and configurations can be reliably deployed to the various stages: development, testing and production.

My current project is an Arabic/English application built on Drupal 7, that is deployed in multisite fashion to several partners. I use Features as a base configuration management system, and a number of extension modules to help me manage specific site components. The need to manage the configuration of multilingual components makes the task more complex, and in this series of posts I hope to describe a full recipe that's allowing our distributed team to commit code without overwriting existing settings.

Here are some of the main architectural components on the site:

  • Menus and menu links
  • Taxonomies
  • Static pages
  • Content types
  • Views
  • Rules
  • Heartbeat messages

All these components need to be shown in multiple languages, with the help of the Internationalization module and friends.

Although our application can be delivered in multiple languages, we do all our development on the English UI. When we switched the default language to Arabic, we found that all our menus, taxonomies, and field translations were no longer showing. And for good reason: Drupal does not store the source language of strings in its database, so i18n has to guess the source language - and by default, it considers the site's default language to be the source. Fortunately, you can explicitly specify the source language via the variable i18n_string_source_language. We decided to hard-code the value of this variable in settings.php like so:

// settings.php

// Hardcode i18n_string_source_language to prevent nasty surprises.
$conf['i18n_string_source_language'] = 'en';

// Load per-site configurations.
require 'settings.local.php';

The last line is just a directive that allows us to version-control settings.php for global configurations and loads settings.local.php for instance-specific settings such as database connection.

Because we're deploying across 3 stages, with several instances at each stage, we cannot afford to manually import .po files each time we create or modify a translation. In order to keep the UI translation workflow sane, we designated a specific stage as the recipient of all translation work, allowing the configuration manager (yours truly) to solve the problem of automating the deployment of these translations to other stages and instances. To this end, I wrote a Features plugin called Features Translations that persists the selected translations within a feature, as shown in the screenshot below:

Translations in the Features UI

A file gets exported to the feature, looking like this:

/**
 * @file
 * Exported translations.
 */

/**
 * Implements hook_translations_defaults().
 */
function checkdesk_core_feature_translations_defaults() {
  $translations = array();
  $translations['ar:default'][] = array(
    'source' => 'The optional description of the taxonomy vocabulary.',
    'context' => '',
    'location' => '',
    'translation' => 'الوصف الاختياري لمعجم الوسوم.',
    'plid' => '0',
    'plural' => '0',
  );
  $translations['ar:default'][] = array(
    'source' => 'You are not authorized to access this page.',
    'context' => '',
    'location' => '',
    'translation' => 'غير مسموح لك بالوصول إلى هذه الصفحة.',
    'plid' => '0',
    'plural' => '0',
  );
  $translations['ar:default'][] = array(
    'source' => 'Function',
    'context' => '',
    'location' => '',
    'translation' => 'الوظيفة',
    'plid' => '0',
    'plural' => '0',
  );
  return $translations;
}

In the next part, I'll discuss our handling of multilingual menu items, which took a considerable amount of effort and patches to Features and i18n!

May 05 2012

Over the years, I've accumulated a large collection of e-books and digital music albums, not to mention family pictures. Information overload is not a philosophical point of view, it's a real problem that forces me to devote time, effort and money to maintain that collection.

That's probably why so many media organizers exist. Because I believe that all applications should be delivered from the Web, and because no ready-made Web media organizer struck me as fulfilling my needs, I started to write my own using Drupal 6, dubbed Mediatheque. Here are the most important design goals I had in mind:

  • The main job of the organizer is to "ingest" media files by processing them to extract metadata, which is made available for searching and browsing.
  • I should be able to point the system to different "volumes" containing my media. These volumes are essentially folders that are present somewhere on the network.
  • No extra file space (beyond database growth) should be required to ingest media files.
  • The system should be able to store arbitrary metadata about the media.
  • I should be able to add new information handlers for media at any time - the system would silently re-process the files.
  • New version of media handlers should also be supported by re-processing the files.
  • The system should be smart about recognizing files across name changes and metadata updates.
  • The system should be robust in the face of plugin errors.
  • Media display should be dependent on its type.

Based on these goals, here are the significant implementation details of the current system:

  • Drupal Queue is used to process the files in the background. In fact, I am using 3 queues as a processing pipeline:

Starting with a volume root, the folders queue finds and enqueues the files, the files queue creates Drupal documents (nodes) and enqueues them for plugin processing, and the plugins queue applies the registered plugins to extract metadata.
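The pipeline wiring could be sketched as follows. Queue and worker function names are illustrative, and this assumes the D6 Drupal Queue module's cron-queue API:

```php
/**
 * Implements hook_cron_queue_info().
 *
 * Sketch only: wires the three pipeline stages to their workers.
 * Queue and callback names are illustrative.
 */
function mediatheque_cron_queue_info() {
  return array(
    'mediatheque_folders' => array('worker callback' => 'mediatheque_worker_folder'),
    'mediatheque_files'   => array('worker callback' => 'mediatheque_worker_file'),
    'mediatheque_plugins' => array('worker callback' => 'mediatheque_worker_plugins'),
  );
}

/**
 * Folder worker: finds files and feeds them to the files queue.
 */
function mediatheque_worker_folder($path) {
  foreach (file_scan_directory($path, '.*') as $file) {
    drupal_queue_get('mediatheque_files')->createItem($file->filename);
  }
}
```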

  • Each processing step produces a log entry, which allows tracking of the errors produced during media processing. The log is a Drupal table that is integrated with Views via hook_views_data. Each log entry contains information about the processed document, the file hash, the plugin name and version, and the status code of the processing, making it possible to detect the cases where re-processing should occur.

Mediatheque log

  • Plugins are metadata extractors that are associated with MIME type patterns. For example, Mediatheque currently comes with an ID3 extractor that uses the getID3 library and an e-book metadata extractor that uses the Google Books API via Zend Gdata.

Mediatheque plugins

  • Metadata extracted by the plugins is stored in the document node using my CCK Metadata module. This is a simple name/value CCK field and the document node includes this field with unlimited cardinality. The metadata pairs returned by each plugin are prefixed with a unique plugin prefix to be able to handle re-processing. CCK Metadata also allows custom formatting of specific metadata entries via hook_cck_metadata_fields:

/**
 * Implementation of hook_cck_metadata_fields().
 */
function mediatheque_cck_metadata_fields() {
  return array(
    'isbn:thumbnail' => array(
      'formatter' => 'mediatheque_formatter_thumbnail',
    ),
  );
}

Eventually, this metadata system should reuse RDF instead of using a custom design.

Mediatheque plugins

  • Finally, the main mediatheque view is created as a regular view with filterable metadata. The metadata name exposed filter is converted to a drop-down via my Views Hacks' Views Selective Exposed Filters module, whose job is to restrict the available filter values to those found in the view results.

Mediatheque plugins

Mediatheque is still very much a work in progress. However, many conceptual challenges have already been solved, and I would love to hear your feedback!

Apr 16 2012

About a month ago, I started porting Sheetnode to D7. The natives were getting restless on the issue queue, so I thought I would pacify them with some serious porting effort. I am glad to announce that the port was completed a few days ago: Sheetnode 7.x-1.0-beta1 is now available, a fully-functional port of the latest D6 version.

The porting process was surprisingly smooth. I'd been avoiding porting my modules to D7 because it felt like rewriting the same code all over again - and I hate rework. But during this month, I got to learn many aspects of the D7 API, and it's not all bad :-) Here are some snippets of my experience:

  • The Field API is actually cool. I liked that it carried forward the ideas of CCK pretty much verbatim, so porting all the Sheetnode CCK field code was relatively painless.
  • The Token API is a huge improvement over D6. Specifically, it is now possible to define dynamic tokens, i.e. tokens that depend on the object instance at run-time. Sheetnode needs this to expose tokens for single spreadsheet cells, given a cell coordinate (e.g. B2). In D6, the Token module required that all tokens be defined ahead of time, so I had to use a hack to implement this feature. But in D7, all I needed to do was declare the cell token to be of type dynamic:

/**
 * Implements hook_token_info().
 */
function sheetnode_token_info() {
  $info['tokens']['sheet']['cell'] = array(
    'name' => t('Cell reference'),
    'description' => t('A cell reference such as B52, C79, etc.'),
    'dynamic' => TRUE,
  );
  // more stuff here...
  return $info;
}

/**
 * Implements hook_tokens().
 */
function sheetnode_tokens($type, $tokens, array $data = array(), array $options = array()) {
  $replacements = array();
  if ($type == 'sheet' && !empty($data['sheet'])) {
    foreach ($tokens as $name => $original) {
      list($token, $coord) = explode(':', $name, 2);
      if ($token == 'cell' && $cell = _sheetnode_get_cell($data['sheet'], $coord)) {
        $replacements[$original] = $cell->value;
      }
    }
  }
  return $replacements;
}
  • The Form API is also an improvement. AJAX support is finally sane :-) Attaching JS and CSS files to elements is a great idea.

  • The Database API is a disappointment. It added complexity but gave nothing in return: simple queries are more difficult to write, and in one instance I had to split my query into two because the new API does not support the UPDATE ... SELECT idiom. I am sure that the new API caused a lot of teeth grinding from module maintainers.

  • Because Sheetnode is JavaScript-heavy, I had to ensure that the SocialCalc engine and its Drupal integration script work correctly with the new Drupal themes. I spent a significant amount of time reworking the core SocialCalc engine to enhance its element positioning code. The outcome is a very flexible spreadsheet engine that can be embedded within absolute, relative or fixed containers and still behave correctly. Hint: element.getBoundingClientRect is magical. See for example the screenshot below of the template used in the Spreadsheet Views style plugin.

Spreadsheet template in Views admin UI

  • One of the more interesting challenges was to make SocialCalc compatible with the jQuery UI dialog element, which is used in the Views admin interface. The problem was that the dialog element was capturing key events and not passing them to the spreadsheet engine. Furthermore, it was handling the ESC key to close the dialog, whereas in SocialCalc ESC is used to cancel editing a single cell. The solution was to read the code of the jQuery UI library and undo some of its settings during initialization of the spreadsheet engine:
  // If we're in a jQuery UI dialog, disable closeOnEscape and unbind the keypress event that interferes with our keyboard handling.
  if ($('.ui-dialog-content').length) {
    $('.ui-dialog-content').dialog('option', 'closeOnEscape', false);
    $('.ui-dialog-content').unbind('keypress');
  }
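As an aside on the Database API point above, the missing UPDATE ... SELECT idiom can be worked around by splitting the statement into a read followed by a write. A sketch, with purely illustrative table and column names:

```php
// Workaround for the missing UPDATE ... SELECT idiom in the D7
// Database API: read the value first, then write it in a second
// query. Table and column names are purely illustrative.
$max = db_query('SELECT MAX(weight) FROM {example_items}')->fetchField();
db_update('example_settings')
  ->fields(array('default_weight' => $max))
  ->condition('name', 'items')
  ->execute();
```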

All in all, I had fun porting Sheetnode to D7. But then again, maybe it's because my expectation was so low to start with :-)

Jan 17 2012

Web applications are changing. Whereas most of the processing used to happen on the server, the current generation of browsers is capable of performing non-trivial tasks via the increasingly powerful HTML5 + JavaScript environment. Drupal, however, is still a server-heavy platform where JavaScript is used mostly as user interface candy. This needs to change if Drupal is to retain its status of an all-encompassing Web platform.

My own work with the client-side spreadsheet Sheetnode, started 3 years ago now, made me aware of JavaScript's power, and of the limitations of server-heavy applications. I also started an experiment in rich-client video consumption called Feedeo, which moves the logic of media navigation and playback to the browser.

However, I believe it is time to deepen Drupal's Lego-like approach to the client-side, by creating JavaScript components that are configurable and assemblable just like server-side components - via modules - are today. I want to present here a concrete use case and proposal for such an initiative.

Consider Views. The uber-query generator and renderer has proved to be a cornerstone of Drupal development, and yet it still lacks many features related to user interface flexibility. For example, I was trying to reuse the results of a node view to create a smaller, sidebar view of the node authors that is differently grouped and themed. In traditional Views programming, this would probably require creating a second view in a block that performs the same query as the main view, but with a different theme. On a busy site, that means doubling the amount of SQL queries right there.

My proposed solution involves a different approach to Views programming that moves to the client the theming and interaction tasks, leaving only query generation and execution on the server. Here's how this would happen in a very simplified form:

  • Query is generated and executed on the server as usual
  • Results are JSON-formatted and returned to the client
  • View styling for fields, exposed filters, etc is generated on the client
  • Filtering can happen on the client or on the server depending on whether we are drilling-down on results or changing the initial result set
  • Changing the view style happens easily on the client without re-querying the server (think of your typical file manager view of files: list, icons, details)
  • Several views can share the same result set
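As a minimal sketch of that last point (plain JavaScript, with hypothetical field names and no actual Views integration), two differently-themed renderings can share one JSON result set without a second query to the server:

```javascript
// Hypothetical JSON result set, as returned once by the server-side view.
var results = [
  { title: 'First post',  author: 'anna' },
  { title: 'Second post', author: 'bob'  },
  { title: 'Third post',  author: 'anna' }
];

// Main view: a flat list of node titles.
function renderList(rows) {
  return rows.map(function (row) {
    return '<li>' + row.title + '</li>';
  }).join('');
}

// Sidebar view: the same rows, grouped by author and themed differently.
function renderAuthors(rows) {
  var seen = {};
  rows.forEach(function (row) { seen[row.author] = true; });
  return Object.keys(seen).map(function (name) {
    return '<span class="author">' + name + '</span>';
  }).join('');
}

var mainHtml = renderList(results);       // themed for the main node view
var sidebarHtml = renderAuthors(results); // reuses the same result set
```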

For this to happen, many Views components that are currently implemented as server-side code would need to be re-implemented for the browser. For example, render() methods would need to move to JavaScript. I hope that Views 3 could be used, with no or minimal patching, to provide the infrastructure upon which new JavaScript-oriented plugins and handlers would be developed. I am also thinking that using a robust JavaScript framework such as Backbone.js would simplify the client-side code and make development more fun.

I hope to free up some time to make a demo of this approach soon. In the mean time, I'd love to hear what you think!

Nov 12 2011

I wrote last time about the latest developments to my Views Auto-Refresh module, which periodically refreshes a Views page, either by reloading the whole view, or by incrementally inserting new items only. It's a useful tool for activity streams and other Twitter-like, real-time lists.

Still, I had a nagging feeling that my code was endangering the server. Consider this: every 15 seconds, each connected browser invokes a full Drupal bootstrap plus a full View render, just to ask the server if there are new items. This doesn't sound right.

So I came up with a solution: before invoking the secondary View refresh, the client code can use a "ping URL" that more efficiently determines whether there are new items or not. If yes, then full View render is invoked. The idea of the ping URL is that it can be implemented with any technology, not necessarily Drupal - and can be arbitrarily optimized.

An additional theme('views_autorefresh') parameter called ping_base_path is used to inform Views Auto-Refresh that some URL should be hit before the secondary view is invoked:

print theme('views_autorefresh', 5000, array(
  'ping_base_path' => drupal_get_path('module', 'liveblog') . '/liveblog.php?type=report', // ping URL (optional)
), array(
  'view_base_path' => 'liveblog/autorefresh', // the path of the update view
  'view_display_id' => 'page_2', // the display of the update view
  'view_name' => 'liveblog', // the name of the update view
  'sourceSelector' => '.view-content', // selector for items container in update view
  'targetSelector' => '.view-content', // selector for items container in on-page view
  'firstClass' => 'first', // class name for first element (optional)
  'lastClass' => 'last', // class name for last element (optional)
  'oddClass' => 'odd', // class name for odd elements (optional)
  'evenClass' => 'even', // class name for even elements (optional)
));

The script at this ping URL should accept a GET parameter called timestamp and return a JSON response of the following form:

{ "pong": count }

where count is 0 if there are no new items since timestamp, and greater than 0 otherwise.
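On the client side, the auto-refresh loop can consult this response before invoking the heavier view refresh. A rough sketch of that decision follows (plain JavaScript; `ping` and `refreshView` are hypothetical caller-supplied callbacks standing in for the actual HTTP requests, not the module's API):

```javascript
// Decide whether a full view refresh is warranted, given the parsed ping
// response of the form { pong: <count> }.
function shouldRefresh(response) {
  return response.pong > 0;
}

// One iteration of the auto-refresh loop: ping first, refresh only if the
// ping reports new items since `timestamp`.
function autorefreshTick(timestamp, ping, refreshView) {
  var response = ping(timestamp);
  if (shouldRefresh(response)) {
    refreshView(timestamp);
    return true;
  }
  return false;
}
```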

For example, here's a complete ping script:


global $db_url;
$p = parse_url($db_url);
$mysql = mysql_pconnect($p['host'], $p['user'], @$p['pass']);
if (!$mysql) die('Could not connect to database server.');
mysql_set_charset('utf8', $mysql);
if (!mysql_select_db(str_replace('/', '', $p['path']), $mysql)) die('Could not select database.');

$timestamp = mysql_real_escape_string($_REQUEST['timestamp']);
$type = mysql_real_escape_string($_REQUEST['type']);
$sql = "SELECT COUNT(nid) FROM node WHERE type='" . $type . "' AND created > " . $timestamp;
if (!empty($_REQUEST['uid'])) {
  $uid = mysql_real_escape_string($_REQUEST['uid']);
  $sql .= " AND uid = " . $uid;
}
$count = mysql_result(mysql_query($sql, $mysql), 0);

print json_encode(array(
  'pong' => $count,
));
This script illustrates parsing the Drupal MySQL connection string, receiving a number of parameters including timestamp, and returning the expected JSON response.

Two considerations are important in this script:

  1. Query parameters should be sanitized before they're used in the SQL query, to avoid SQL injections and other nasty surprises.
  2. The ping SQL query does not have to emulate the Views query exactly. If you make the ping query broader, it will create false positives, whereby the View render is invoked but no actual new items are found. If you make the ping query more restricted, it will create false negatives: no new items will be returned whereas the View would have returned new items. You probably want to avoid this last condition.

That's it! The results on my test server were encouraging: The auto-refresh round-trip time was reduced by an order of magnitude. I haven't measured CPU savings though - I leave it as the proverbial "exercise to the reader" :-)

Nov 07 2011

Imagine you are creating an activity stream for your site. You'd like to use Views because it gives you all the power you need to query items and style them on the page - all in time for your 11am nap. However, the resulting page is static and users have to keep refreshing it manually to see updates. In 2011, that's just uncool.

That's why I created Views Auto-Refresh, a Views Hacks sub-module that implements an auto-refreshing mechanism that integrates right into Views 2 or 3. Here's how it works:

  • Define your view normally, and turn on Ajax.
  • In the view header, enter the following PHP code:
$interval = 15000; // 15000 msec = 15 sec
print theme('views_autorefresh', $interval);
  • Turn on the "Display even if view has no result" option for the header.

Now, automagically, your view will be refreshed every 15 seconds.

Advanced Usage

That's pretty cool, but your server ends up re-querying the same items every 15 seconds, for each user on that page, which is a bit scary. To alleviate the problem a bit, Views Auto-Refresh includes a more advanced mode for "incremental" refresh. The main idea is to create a secondary view, hopefully less heavy than the first one, that only returns new items since the last refresh. The module is responsible for merging those new items into the existing items on the page. Here's how that works in its simplest form:

  • Define a secondary view that is a clone of the original (e.g. another page display of the same view).
  • Give it a unique path, say my_view/autorefresh.
  • Add to this secondary view an argument of type Node: Post date (with operator) - or any other timestamp that makes sense in your case. These arguments are supplied by Views Auto-Refresh.
  • Change the header script above to:
$interval = 15000; // 15000 msec = 15 sec
print theme('views_autorefresh', $interval, array(
  'view_base_path' => 'my_view/autorefresh',
  'view_display_id' => 'page_2',
  'view_name' => 'my_view',
  'sourceSelector' => '.view-content',
  'targetSelector' => '.view-content',
  'firstClass' => 'views-row-first',
  'lastClass' => 'views-row-last',
  'oddClass' => 'views-row-odd',
  'evenClass' => 'views-row-even',
));

The additional settings tell Views Auto-Refresh which secondary view to hit (given its name, display and path), as well as how to merge the items from the source (the secondary view) into the target (the primary, on-screen view). The code fetches all items inside the sourceSelector container and prepends them to the targetSelector. Then, it fixes up the odd/even, first/last and row number classes based on the settings.
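That class fix-up step can be sketched in isolation (plain JavaScript over arrays of class names, ignoring the actual DOM insertion; `fixupRowClasses` is an illustrative name, not the module's API):

```javascript
// Recompute first/last and odd/even classes after new rows are prepended.
// `rows` is an array of class-name arrays, newest row first; `settings` names
// the classes to (re)assign, as in the theme() call above.
function fixupRowClasses(rows, settings) {
  var positional = [settings.firstClass, settings.lastClass,
                    settings.oddClass, settings.evenClass];
  return rows.map(function (classes, index) {
    // Strip any stale positional classes left over from the previous render.
    var cleaned = classes.filter(function (c) {
      return positional.indexOf(c) === -1;
    });
    if (index === 0) cleaned.push(settings.firstClass);
    if (index === rows.length - 1) cleaned.push(settings.lastClass);
    // Row 1 (index 0) is odd, row 2 is even, and so on.
    cleaned.push(index % 2 === 0 ? settings.oddClass : settings.evenClass);
    return cleaned;
  });
}
```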

Integration with other Views features

So far, this module has been successfully tested with exposed filters. Clicking on an exposed filter disables the auto-refresh to allow the user to complete the interaction without disruption.

Future work

Aug 17 2011

One of the most enjoyable aspects of maintaining Sheetnode is getting to practice more JavaScript. Douglas Crockford called it the World's Most Misunderstood Programming Language, and that's certainly how it was for me until I decided to put a concerted effort into finding out more about it.

Sheetnode's underlying spreadsheet engine, SocialCalc, is entirely client-side, JavaScript-based. One of the most challenging parts of the engine is the recalculation mechanism: when a cell value changes, we need to recalculate all the cells that depend on its value. This recalculation happens client-side, but it should not hang the UI. To solve this, the code uses the only "asynchronous" mechanism available, namely DOM's window.setTimeout and window.setInterval. John Resig has a great in-depth article on what these functions do (and how they differ).

What if you wanted to execute a sequence of JavaScript operations, like Batch API but on the client-side? You would have to write your functions such that each step calls setTimeout to schedule the next one. This would make for ugly, hard-to-follow code - not "literate programming" by any stretch! That's where this simple JavaScript class I wrote comes in: WorkerQueue lets you push functions to be executed in sequence. Because you can instantiate several queues, you can have several concurrent tasks running on your browser, without hogging your UI.

/**
 * WorkerQueue.js
 * Class to handle long-running, asynchronous tasks.
 */
WorkerQueue = function(frequency) {
  this.queue = [];
  this.timeout = 0;
  this.current = null;
  this.frequency = frequency;

  this.pop = function() {
    if (!this.current) {
      if (!this.queue.length) {
        // Nothing left to do: stop the interval timer.
        window.clearInterval(this.timeout);
        this.timeout = 0;
        return;
      }
      this.current = this.queue.shift();
    }
    // A task returns true when it's done, false to run again on the next tick.
    if (this.current()) {
      this.current = null;
    }
  };

  this.push = function(task) {
    this.queue.push(task);
    var self = this;
    if (!this.timeout) {
      this.timeout = window.setInterval(function(){ self.pop(); }, this.frequency);
    }
  };
};

To use this, you need to instantiate your worker queues and fill them with functions that return true when they're done or false if they need to run again. Check github for a demo.
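To illustrate that contract (return true when finished, false to be scheduled again), here is a simplified, timer-free variant of the queue where ticks are driven manually rather than by window.setInterval; it is an illustration of the pattern, not the class above:

```javascript
// Simplified, manually-ticked variant of the worker queue.
function ManualQueue() {
  this.queue = [];
  this.current = null;
}
ManualQueue.prototype.push = function (task) {
  this.queue.push(task);
};
// One tick: run the current task (or start the next one). Returns false
// when there is nothing left to do.
ManualQueue.prototype.tick = function () {
  if (!this.current) {
    if (!this.queue.length) return false;
    this.current = this.queue.shift();
  }
  if (this.current()) this.current = null; // task finished
  return true;
};

// A long-running task split into three increments, followed by a one-shot task.
var steps = 0;
var q = new ManualQueue();
q.push(function () { steps++; return steps >= 3; });
q.push(function () { steps += 10; return true; });

while (q.tick()) {}
// steps is now 13: three increments from the first task, one from the second.
```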

Jun 30 2011

I've been a lazy adopter of Drupal 7. I admit it, mea culpa, but I'll leave this discussion to another post. What happened in the mean time is that VBO for D7 languished for a year or so in a zombie state until it was rescued in early May by Bojan Živanović, bojanz on d.o. He started out by announcing a sandbox project containing his code, and a few days later, I was convinced that this was the way to go for VBO on D7. He agreed to become co-maintainer for VBO in charge of D7 - maybe unaware of what awaited him in terms of issue queue support!

Bojan made not one, but two radical changes to VBO, solving a long-standing issue and enhancing Views itself with a cool new feature. Let me explain:

  • VBO was initially designed by yours truly as a style plugin. It obviously worked, but the side effect was that a VBO could only be displayed as a table (implemented by the module itself), not as a grid, carousel, or any other crazy Views style. In hindsight, this was not the ideal choice since what was needed was only to add a checkbox to each row, which is really a field thing. That's what Bojan did: package the VBO functionality in a Views field handler instead of a style plugin.

  • To achieve this, Views core needed to be retrofitted with the ability to display forms during its own rendering. Again, Bojan built a proof of concept that was later submitted as a patch to Views and committed to Views 3 on both D6 and D7. Let's look at what this means for a Views developer:

There's an example Views 3 plugin for D6 that illustrates the idea of embedding form elements in a view. This example module provides a new field that exposes an HTML text input. The view automatically creates a multistep form whose first page contains all the text inputs (one for each row) and adds a "Save" button. The module also adds validation and submission functions. Here's the code:

/**
 * @file
 * Implementation of hook_views_data_alter().
 */
function views_form_example_views_data_alter(&$data) {
  $data['node']['views_form_example'] = array(
    'title' => t('Views Form Example field'),
    'help' => t('Demonstrates a form'),
    'field' => array(
      'handler' => 'views_form_example_handler_field',
    ),
  );
}

/**
 * Implementation of hook_views_handlers().
 */
function views_form_example_views_handlers() {
  return array(
    'info' => array(
      'path' => drupal_get_path('module', 'views_form_example'),
    ),
    'handlers' => array(
      'views_form_example_handler_field' => array(
        'parent' => 'views_handler_field',
      ),
    ),
  );
}

Nothing new here, just a declaration of our field handler.

/**
 * @file views_form_example_handler_field.inc
 */
class views_form_example_handler_field extends views_handler_field {
  function query() {}

  function render($values) {
    return '<!--form-item-' . $this->options['id'] . '--' . $this->view->row_index . '-->';
  }

  function views_form(&$form, &$form_state) {
    // The view is empty, abort.
    if (empty($this->view->result)) {
      return;
    }
    $field_name = $this->options['id'];
    $form[$field_name] = array(
      '#tree' => TRUE,
    );
    // At this point, the query has already been run, so we can access the results.
    foreach ($this->view->result as $row_id => $row) {
      $form[$field_name][$row_id] = array(
        '#type' => 'textfield',
        '#title' => t('Your name'),
        '#default_value' => '',
      );
    }
  }

  function views_form_validate($form, &$form_state) {
    $field_name = $this->options['id'];
    foreach ($form_state['values'][$field_name] as $row_id => $value) {
      if ($value == 'Bojan') {
        form_set_error($field_name . '][' . $row_id, "You can't be named Bojan. That's my name.");
      }
    }
  }
}

Things get a bit interesting here: the render() method returns a special token that instructs Views to expect a form element with the given identifier. Views then calls the views_form() method to allow the handler to create the form elements, making sure that the resulting identifiers match the ones returned by render(). In this example, the expected identifier is obtained by concatenating the field identifier with the row index, because the form element's structure will generate these identifiers via Form API. The handler also declares a form validation function.

Now for the module file itself:

/**
 * @file views_form_example.module
 */

/**
 * Implementation of hook_views_api().
 */
function views_form_example_views_api() {
  return array(
    'api' => 3,
  );
}

/**
 * Gets our field if it exists on the passed-in view.
 *
 * @return
 *   The field object if found. Otherwise, FALSE.
 */
function views_form_example_get_field($view) {
  foreach ($view->field as $field_name => $field) {
    if (is_a($field, 'views_form_example_handler_field')) {
      // Add in the view object for convenience.
      $field->view = $view;
      return $field;
    }
  }
  return FALSE;
}

/**
 * Confirmation step of the views multistep form.
 */
function views_form_example_confirmation_form($form, $form_state, $view, $output) {
  $form = confirm_form($form,
    t('Are you sure you want to give your name to total strangers?'),
    array('path' => $_GET['q'], 'query' => $view->get_exposed_input())
  );
  return $form;
}

/**
 * Implementation of hook_views_form_submit().
 */
function views_form_example_views_form_submit($form, &$form_state) {
  $field = views_form_example_get_field($form['#parameters'][2]);
  if (!$field) {
    return;
  }
  // Advance to the confirmation form.
  if ($form_state['storage']['step'] == 'views_form_views_form') {
    $form_state['storage']['step'] = 'views_form_example_confirmation_form';
  }
  elseif ($form_state['storage']['step'] == 'views_form_example_confirmation_form') {
    drupal_set_message('We have a winner!');
  }
}

The interesting function here is views_form_example_views_form_submit() which gets called, not surprisingly, upon form submission. It finds the example field (the view object is passed to the $form structure) and then decides on the next step, like any multistep form handler. The difference is that the step name is actually the form function for the given step, which in our case is the confirmation form views_form_example_confirmation_form.

These changes made to Views are a generalization of what VBO did in D6. The fact that they were included in Views means that other modules will now be able to create specialized views forms and act upon them. In addition to VBO for D7, Drupal Commerce is also using those innovations.

Kudos to Bojan for his great work on VBO and Views. With his contribution, I am sure that VBO will rock even more on D7!

Apr 08 2011

I wanted to create a node view containing both the original node and its translation, sort of like this neat page. I decided to build this page as a Panels node view template, but to reach the translation(s) I had to write a new CTools relationship plugin that I am sharing here:

/**
 * @file mymodule.module
 * Implementation of hook_ctools_plugin_directory().
 *
 * It simply tells panels where to find the .inc files that define various
 * args, contexts, content_types.
 */
function mymodule_ctools_plugin_directory($module, $plugin) {
  if ($module == 'ctools' && !empty($plugin)) {
    return 'plugins/' . $plugin;
  }
}

Place the following file in a plugins/relationships subfolder of your module:

/**
 * @file
 * Plugins are described by creating a $plugin array which will be used
 * by the system that includes this file.
 */
$plugin = array(
  'title' => t('Node translation'),
  'keyword' => 'translation',
  'description' => t('Creates the translation of a node as a node context.'),
  'required context' => new ctools_context_required(t('Node'), 'node'),
  'context' => 'ctools_translation_from_node_context',
  'settings form' => 'ctools_translation_from_node_settings_form',
  'defaults' => array('language' => 'en', 'fallback' => FALSE),
);

/**
 * Return a new context based on an existing context.
 */
function ctools_translation_from_node_context($context, $conf) {
  // If unset it wants a generic, unfilled context, which is just NULL.
  if (empty($context->data)) {
    return ctools_context_create_empty('node', NULL);
  }
  if (isset($context->data->nid)) {
    $original = node_load($context->data->nid);
    $tnids = translation_node_get_translations($original->tnid);
    if (empty($tnids[$conf['language']])) {
      return $conf['fallback'] ?
        ctools_context_create('node', $original) :
        ctools_context_create_empty('node', NULL);
    }
    $translation = node_load($tnids[$conf['language']]->nid);
    // Send it to ctools.
    return ctools_context_create('node', $translation);
  }
}

/**
 * Settings form for the relationship.
 */
function ctools_translation_from_node_settings_form($conf) {
  $form['language'] = array(
    '#type' => 'select',
    '#title' => t('Language'),
    '#options' => locale_language_list(),
    '#default_value' => $conf['language'],
  );
  $form['fallback'] = array(
    '#type' => 'checkbox',
    '#title' => t('Fallback to original node'),
    '#description' => t('Use original node if desired translation is not found.'),
    '#default_value' => $conf['fallback'],
  );
  return $form;
}

To use this plugin, create a new variant of the Panels node view template. In the Contexts page, add a Node translation relationship for every language that you want to support. You can specify the desired language in the relationship settings. In your Content layout, you will then be able to use those relationships to access the node translations.

To go back to the original example above, we want the English version on the left and the Arabic on the right. To achieve this, I created two node translation relationships, one for English and the other for Arabic. I placed a node content panel in each of my two columns, and set the left panel to refer to the English translation and the right panel to the Arabic translation. Note that I didn't use the original "Node being viewed" content because there would be no way of telling whether it's Arabic or English, given that this node template could be accessed via either version.

What's missing is to replicate the bilingual comments view, and I'll share my solution once I implement it :-)

Apr 06 2011

As a Web developer, I spend most of my time rendering lists of objects to HTML, formatted in every imaginable way. Fortunately for us Drupal heads, the Views module exists just to make this task easier - this is in fact my standard introduction to my "Views for hackers" talk.

One common scenario that I've encountered, and about which I was asked several times, is how to reuse Views to render a pre-defined list of objects - i.e., a list that already exists without the need for an SQL query. Why would anyone want to use Views in that case? Well, Views is not only a query generator but also a list renderer, and that's what interests us here. Furthermore, although we have a predefined list of object IDs (e.g., nids, uids), the actual rendering will surely need other fields and Views can give us those additional fields with very little effort on our part. Sounds good!

In my scenario, I had a content type that refers to users through various multi-valued CCK user reference fields. I needed to render all these users in a block on the node page, as a grid of user pictures that are preprocessed by ImageCache. The thought of hand-coding this theme (and later fixing it every time the client would change his mind about a detail) would have been enough to send me down an existential rabbit-hole, so I reacted quickly by reaching for Views and thinking hard about how it could be used. Fortunately, the answer was simple: default arguments to the rescue!

The main idea is to create a User view with an argument of type User: Uid to which we provide a default value based on PHP code (Provide default argument > PHP Code). The PHP code looks something like this:

$node = node_load(arg(1));
$uids = array();
// Replace this with your actual fields.
foreach ($node->field_users as $item) {
  if (!empty($item['uid'])) {
    $uids[] = $item['uid'];
  }
}
return implode('+', $uids);

Also make sure to check the "Allow multiple terms per argument" option for this argument. The argument we created supplies to Views a list of uids against which to filter, so the resulting SQL statement would look like:

SELECT ...
FROM {users}
WHERE uid IN (<the $uids list above>)

You can then configure the rest of your view exactly as you would otherwise. Now I can spend more time minding my precious media collection!

Mar 09 2011

I gave my "Views for hackers" talk at DrupalCon Chicago yesterday. This was definitely the biggest attendance I've ever had in my short experience as a speaker! Thanks everyone for attending and for the great questions. Please rate it at the DrupalCon site, this will help me improve it. Thanks!

Here are the slides as uploaded to SlideShare:

Feb 27 2011

One of my tenacious ideas is to create a TV-like experience using Web video. In spite of great players such as Boxee, I still feel there's more that can be done.

For one, my ideal player should be Web-based, not a desktop application. That's why I liked YouTube XL and like YouTube Leanback even more.

Secondly, the player should present a simple UI that allows the viewer to surf the enormous amount of existing Web video with minimal interaction. The UI should favour discovery over explicit search. This is not something I've seen often - or at all.

This is why I spent a few nights hacking away at Feedeo (thanks Francis Pilon for the clever name!), an experiment in online video consumption. The main idea is to create channels of continuously running videos, and to provide branch points at each video that the user can explore, then get back to the previous thread he was following. This viewing paradigm is akin to a stack.

This is heavily experimental stuff, so don't expect a beautiful UI - I'm just trying out the interaction model at this stage. Please try Feedeo and I'd love to receive your feedback!

Recent screenshot of Feedeo

Jan 02 2011

Sheetnode is one of the most exciting modules that I maintain. The underlying SocialCalc Javascript spreadsheet engine, which I forked from Dan Bricklin's original code, is much fun to work with. Plus, organizations of many sizes are starting to use Sheetnode to collaborate on their numeric data right in their Drupal intranets, instead of relying on Excel files in attachments.

I've just released version 1.5 of Sheetnode for Drupal 6. This version brings many improvements over the previous stable version, which was 11 months old. The most tangible improvement, which will make life easier for all users, is the removal of Java dependencies for import/export in favor of the excellent PHPExcel library, which now handles the same function - only much better. No more wrestling with PHP/Java Bridge, Apache Tomcat, Java class libraries, and other annoyances. Turning on import/export for Sheetnode is now as easy as downloading the latest version of PHPExcel, extracting it, and pointing the Sheetnode PHPExcel settings to it. I can already hear sighs of relief :-)

A Drupal 7 release is also imminent. Stay tuned...

PS. This release and the upcoming D7 release are sponsored by CTO Advisors. Thanks for your support!

Recent screenshot of the demo site

