Aug 30 2009

So here we are. The feature set and API of Drupal 7 are about to freeze. It contains plenty of awesome and noteworthy improvements and new features. But those will be covered elsewhere, and I don't want to talk about them here.

Instead, we should talk about how we can ensure top quality and deliver a new Drupal release that actually works and is usable, both for users and developers. Of course, I have no influence on the decisions that will be taken, so this is simply a recommendation from someone who develops both Drupal core and contributed modules, and who wants to help people build great Drupal sites.

The problem

The release of Drupal 6 failed primarily for a very simple reason: Drupal 6 contained a lot of new features and API functionality, but most of them were only partially implemented. To name a few examples: You can translate content, but you cannot translate menus, taxonomy terms, configuration settings (like the site's name), data of contributed modules, or anything else. All Drupal modules can expose triggers (system events) that should allow the site administrator to assign actions to perform when a certain event happens, but since even Drupal core modules do not fully implement triggers/actions, the feature was never really adopted by contributed modules. As a result, the concept of triggers/actions is widely unknown among Drupal developers, which also explains why that API saw nearly zero improvements during the Drupal 7 cycle.

We should not make the same mistake again. The mistake being to introduce new, half-baked features that are only partially available or inconsistently applied. For a long time, I have been arguing for changing our date-based code freeze into a date-based feature freeze. It means the same date, but it automatically opens up exceptions past the freeze for API changes that belong to an already introduced feature. It simply makes no sense to release a new feature or API change that only works here or there, or which is not applied consistently. That's like: "Hey, we have ice-cream now!" -- "Great! Uhm, but no cups?!"


Having discussed this very annoying fact with Stefan over and over again during the past two years, he recently came to the conclusion that if Drupal used a DVCS supporting better merge functionality than CVS, we could maintain separate short-term and long-term feature branches, so that smaller improvements would not be held up by larger ones. However, since we cannot investigate such options yet, my recommendation remains to immediately introduce a feature freeze for Drupal 7.

So far, Angela Byron was the only one who strongly disagreed with a feature freeze, because she fears that Drupal core developers would be annoyed by not being able to use their contributions in the new Drupal release earlier. I disagree with that reasoning. Instead, a feature freeze would allow all Drupal developers to focus on the newly introduced features, understanding them better and making them better/work/usable in the additional "feature completion" time frame. That is a great chance for many Drupal core developers, but also contributed module developers, to improve their skills and understanding of new core features, which, as a matter of fact, is one of the top problems with regard to Drupal core development.

As of now, DBTNG (the new database abstraction layer introduced in Drupal 7) seems to be the only new feature that has been implemented (almost) properly.

Feature completion

This list is most probably a bit biased, but as far as I can see, the mission-critical exceptions to a Drupal 7 "code freeze" needed to complete the introduced features are:

  • DBTNG: Remove the legacy database layer functions.
  • Field API
  • Internationalization
  • Token
  • RDF
  • AJAX
  • Triggers/Actions
  • Filter system
Aug 30 2009

UPDATE: it is no longer necessary to do this, as the latest release of the migrate module supports md5 passwords. The hook names have also changed - the word 'destination' has been removed.

Last month I blogged about using the Migrate and TableWizard modules to migrate users and nodes from the CMS Made Simple content management system to Drupal. Since then I've been using them more and more and have implemented a number of the migrate hooks to manipulate the data during the migration process.

One of the issues I encountered when migrating users was that the password was stored as an md5 hash. This is also how Drupal stores its passwords, so no problem, right? Wrong. When setting up the user content set and mapping fields from CMS MS to Drupal, there is a mapping field for the password. However, it expects the password to be plain text. Simply setting the password to the md5 hash value would not work, as the hash value would be hashed again.

To overcome this I implemented the hook_migrate_destination_complete_user() hook in a custom module. This allows my custom module to manipulate the user data, one row at a time, immediately after the user data has been stored in Drupal's database.

function mymodule_migrate_destination_complete_user(&$user, $tblinfo, $row) {
  // Get the original password.
  $password = $row->cms_users_password;

  // Update the new user record in Drupal's user table.
  db_query("UPDATE {users} SET pass = '%s' WHERE uid = %d", $password, $user->uid);
}

The $user object contains the newly created Drupal user, whereas the $row object contains the original user data, unmodified, and corresponds to one row of the view used in the migration. Each element of the $row object is named in the format "tablename_fieldname".

With this hook I was able to modify the migrated user data and forcefully set the password in the database to the original md5 hash. Using user_save() was not an option, as it is the very function that generates the md5 password hash, which is why I needed this workaround in the first place.
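The underlying problem is easy to demonstrate in plain PHP (a standalone sketch, independent of Drupal or the Migrate module):

```php
// A CMS Made Simple password arrives as an md5 hash of the plain text.
$plain  = 'secret';
$stored = md5($plain);

// If the migration treated $stored as a plain-text password, Drupal 6
// would hash it again, and the user's original password would no longer
// match what is stored in the {users} table.
$rehashed = md5($stored);

var_dump($rehashed === $stored); // false: logins would break
```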

Migrating users from other content management systems may not be so easy. I was just lucky that the password hashing method used by CMS Made Simple and Drupal was the same.

Aug 30 2009

Drupal 6 has come a long way from its initial version. And now I feel that it has finally reached all the potential promised late in the 5.x cycle.

With the release of Panels, the last of the big 'game changing' features available on 5.x has been ported. This is an exciting time. Views, CCK and even Wysiwyg API are miles ahead of what we had at the initial release of 6.0, and I couldn't be more excited.

CCK lets you create fields that split content up so it can be displayed easily in interesting ways. This concept will be introduced into Drupal 7 core, which means that the community thinks it is important (and powerful) enough to change the way we think about content entry. It was released not long into the 6.0 cycle, but it is important still.

Views, the wonderful module that allows the display of information in interesting ways, has also come out with a new version since the inception of 6.0. One of the most popular modules in Drupal, it lets you build queries without having to write SQL code yourself, which is an excellent feature for non-technical users.

Panels, a module which allows for the display of content in interesting ways without the need for massive amounts of custom theming, has also recently reached a new milestone. This module is quite powerful because of its use of the context-sensitive engine. It is a useful tool if you want to generate pages which have a consistent look but contain different information, with the information coming from a variety of sources.

Wysiwyg API brings common sense to WYSIWYG editing in Drupal. It allows for the use of multiple WYSIWYG editors and ties editor selection to input formats. Previous WYSIWYG modules did not tie editors to input formats, which led to confusing results when entering data, because the formatting didn't make sense.

As you can see there is quite a huge difference between what we had early in the 6.0 cycle and what we have now. And I am happy to say I look forward to what the future has to offer with drupal 7.


Aug 29 2009

I will be presenting together with Konstantin Käfer on Front End Performance. To be more exact, he will be talking about Front End Performance in general, and I will be talking about a subdomain of that: CDN integration.
Our sessions were merged because they overlapped to some extent — so now there’s just one supercharged session instead! It’s scheduled for Thursday (3 September), at 9 AM, in the La Reserre (translated: coal-shed) room.

Specifically, I will be talking about the work I’ve been doing as part of my bachelor thesis. Integrating Drupal with a CDN used to be quite painful, but by using the CDN integration module, you can choose either:

  • extremely easy CDN integration;
  • easy, but extremely flexible CDN integration, by also using the “File Conveyor” daemon that I wrote.

Thanks, Drupal community!

After DrupalCon DC, I was once more among the lucky ones to be granted a scholarship. Thanks, Drupal community! I hope many of you will benefit from the work I’ve been doing in my free time and as part of my bachelor thesis. I volunteered to help at the registration booth of DrupalCon Paris, so maybe I’ll see you there, and otherwise later :)

Aug 29 2009
Three weeks until Talk Like a Pirate Day, an' next week th' Drupal project freezes its code in anticipation o' th' version 7 release o' th' CMS. In anticipation o' that, I've updated the Pirate module, which turns all contributed content on a Drupal-powered website into pirate-speak fer just th' day o' September 19th, t' include a developmental release for the Drupal 7 platform.

I'd like fer scallywags t' test it out, an' th' way t' do so on a day other than September 19th is t' either modify th' date in th' followin' code o' th' module or remove it altogether:

if (date('md') != '0919') {
  return $text;
}

I see that similar code t' me module has been included in the Dialectic module, which supports other novelty input formats, and dinna spare the whip! I may be convinced t' officially merge me work into that project in th' future, though me module differs in that it activates on only one day a year. Any issues specific t' th' Drupal 7 conversion can be added t' the issue for that.

One sticky, longstandin' issue that I could use some help with an' affects all versions is a bug involving URLs that contain 'ing'. Shiver me timbers! I pledge that Pirate will have a full Drupal 7 release on th' day that Drupal 7 is released, ye scurvey dog. #D7CX
Aug 29 2009

After I shared my ideas on the project, our Flex/services guru, Peter Arato, started working last week on what has become the Graphmind project. It's a module that allows you to work with mindmaps in Drupal that are compatible with Freemind. By building mindmaps I mean full CRUD; you can:

  • create a mindmap node from scratch
  • paste an existing .mm mindmap
  • do CRUD on the nodes in the mindmap (yes, that's a namespace collision, but we can't help it)

But it is much more than a mindmap tool: the module integrates with Drupal services.

  • it can render any view into mindmap branches
  • you can add individual users, nodes or comments as a freemind branch
  • if you use a view with an argument it will try to fill that argument using the data from the mindmap node for which you are trying to add that branch
  • a freemind node that was created from a Drupal object has a hyperlink to that Drupal node

I am really excited about this module and I think that it holds an enormous potential. Some of the things you'll be able to use this for in the future:

  • (re)organize/build taxonomies in a mindmap
  • do project brainstorms/planning on a mindmap that uses data from your Open Atrium project management system
  • display relationships between data objects in your Drupal database
  • refresh or reuse a graph, to build a new mindmap with different data, but with the same relationship tree (e.g. most active user in the last 24 hours > posts of these users > comments on these posts > people that posted these comments)
  • and somewhere not too far in the future: display SPARQL queries in mindmaps

You can find a screencast on the module, which Peter is still working on as part of a development sprint sponsored by Eureka! Ranch. So watch out for even more cool features.

Aug 27 2009

I'm sure I'm not the first to discover this, but...

An online dictionary search for "Drupal" says it's a synonym for "drupaceous": that is, "resembling, related to... [or] producing drupes". A drupe is a fruit whose seed is covered by a tough endocarp, like the red peaches you see here.


Aug 27 2009

Module weights sometimes come into play when you're trying to override certain aspects of the core or other modules. If you look in the Drupal database's system table you'll notice a field called weight - this is what determines the order in which all of the installed modules' hooks will get called during a page request.

The general rule of thumb is that if you want to make sure your custom module gets last dibs on things like hook_form_alter() then you should be sure to give it a heavier weight in the system table, either by a direct SQL query in your module's install file, or by using a module like moduleweight (D5) or Utility (D6).
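For reference, the direct query approach can be sketched like this in a module's .install file for Drupal 5/6 (a sketch; 'mymodule' stands in for your module's actual machine name, and the weight of 10 is arbitrary):

```php
function mymodule_install() {
  // A heavier (larger) weight makes this module's hooks run after all
  // lighter modules', so its hook_form_alter() gets the final say.
  db_query("UPDATE {system} SET weight = %d WHERE name = '%s'", 10, 'mymodule');
}
```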

As I learned this morning, the inverse applies if you're trying to override menu items, at least in Drupal 5. In my case I was trying to change the weights of some MENU_LOCAL_TASK menu items, as well as alter which one would be used as the default, but all of the changes I had made in my implementation of hook_menu() were getting ignored. On a hunch I tried making my custom module lighter than all the others, and sure enough my menu changes took precedence - apparently the first module to declare a menu item gets precedence.
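To illustrate the inverse case, here is a hypothetical Drupal 5 hook_menu() sketch (the path and title are made up): because the first module to declare a menu item wins, this override only takes effect if the module is lighter than the one that originally declared the item.

```php
function mymodule_menu($may_cache) {
  $items = array();
  if ($may_cache) {
    // Re-declare an existing local task with a new weight. In Drupal 5
    // the FIRST declaration of a path takes precedence, so this module
    // must have a LIGHTER weight than the original declaring module.
    $items[] = array(
      'path' => 'example/overview',
      'title' => t('Overview'),
      'type' => MENU_LOCAL_TASK,
      'weight' => -10,
    );
  }
  return $items;
}
```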

Aug 27 2009

There are plenty of options when it comes to picking a content management system for a development project. Depending on how advanced you need the CMS to be, what language it's built in, and who is going to be using it, it can be a nightmare trying to find the "perfect" CMS for a project.

However, some CMSs have a slight edge over the rest of the competition because of the usability of the software. Some are just easier to install, use and extend, thanks to some thoughtful planning by the lead developers. 

We have a number of themes and resources to support these top content management systems. If you're looking for WordPress Themes, Drupal Themes, or Joomla Themes, we have you covered on Envato Market.

We also support a number of additional popular CMSs, along with a gallery of WordPress Plugins, Drupal Plugins, Joomla Plugins and more. Visit our ThemeForest or CodeCanyon marketplaces to browse through a ton of professional options.

Here are 10 of the most usable CMSs on the web to use in your next project, so you can choose the one that fits your needs best.

1. WordPress


What is there left to say about WordPress that hasn't already been said? The PHP blogging platform is far and away the most popular CMS for blogging, and probably the most popular CMS overall. It's a great platform for beginners, thanks to its excellent documentation and super-quick installation wizard. Five minutes to a running CMS is pretty good. Not to mention the fact that the newest versions auto-update the core and plugins from within the backend, without having to download a single file.

For those users not familiar with HTML or other markup languages, a WYSIWYG editor is provided straight out of the box. The backend layout is streamlined and intuitive, and a new user should be able to easily find their way around the administration section. WordPress also comes with built-in image and multimedia uploading support.

For developers, the theming language is fairly simple and straightforward, as is the Plugin API.

The WordPress community is a faithful and zealous bunch. WordPress probably has the widest base of plugins and themes to choose from. We have thousands of professional WordPress Themes and WordPress Plugins available for sale on Envato Market, with a full suite of styles and options to choose from.

A great thing about the WordPress community is the amount of help and documentation you can find online on nearly every aspect of customizing WordPress. If you can dream it, chances are it's already been done with WordPress and documented somewhere.

If you need help with anything from installing a theme to optimizing the speed of your WordPress site, you can find plenty of experienced WordPress developers to help you on Envato Studio.

2. Drupal


Drupal is another CMS that has a very large, active community. Instead of focusing on blogging as a platform, Drupal is more of a pure CMS. A plain installation comes with a ton of optional modules that can add lots of interesting features like forums, user blogs, OpenID, profiles and more. It's trivial to create a site with social features with a simple install of Drupal. In fact, with a few 3rd party modules you can create some interesting site clones with little effort.

One of Drupal's most popular features is the Taxonomy module, a feature that allows for multiple levels and types of categories for content types. And you can find plenty of professional Drupal Themes, which are ready to be customized and worked with. You can also grab Drupal Plugins.

Drupal also has a very active community powering it, and has excellent support for plugins and other general questions. 

You can also hire a developer to complete a range of tasks for your Drupal site for a reasonable fixed fee.

3. Joomla!


Joomla is a very advanced CMS in terms of functionality. That said, getting started with Joomla is fairly easy, thanks to Joomla's installer. Joomla's installer is meant to work on common shared hosting packages, and is very straightforward considering how configurable the software is.

Joomla is very similar to Drupal in that it's a complete CMS, and might be a bit much for a simple portfolio site. It comes with an attractive administration interface, complete with intuitive drop-down menus and other features. The CMS also has great support for access control protocols like LDAP and OpenID.

The Joomla site hosts more than 3,200 extensions, so you know the developer community behind the popular CMS is alive and kicking. Like WordPress, you can add just about any needed functionality with an extension. However, the Joomla theme and extension community relies more on paid resources, so if you're looking for customizations, be ready to pull out your wallet. You can also grab Joomla plugins, or hire Joomla developers to help you get your site set up right.

4. ExpressionEngine


ExpressionEngine (EE) is an elegant, flexible CMS solution for any type of project. Designed to be extensible and easy to modify, EE sets itself apart in how clean and intuitive its user administration area is. It takes only a matter of minutes to understand the layout of the backend and to start creating content or modifying the look. It's fantastic for creating websites for less-than-savvy clients who need to use the backend without getting confused.

ExpressionEngine is packed with helpful features like the ability to have multiple sites with one installation of software. For designers, EE has a powerful templating engine that has custom global variables, custom SQL queries and a built in versioning system. Template caching, query caching and tag caching keep the site running quickly too.

One of my favorite features of EE is the global search and replace functionality. Anyone who's ever managed a site or blog knows how useful it is to change lots of data without having to manually search and open each page or post to modify it.

ExpressionEngine is quite different from the previously mentioned CMSs in that it's paid software. The personal license costs $99.95, and the commercial license costs $249.99. You can also get help with ExpressionEngine on Envato Studio.

5. TextPattern


Textpattern is a popular choice for designers because of its simple elegance. Textpattern isn't a CMS that throws in every feature it can think of. The code base is svelte and minimal. The main goal of Textpattern is to provide an excellent CMS that creates well-structured, standards-compliant pages. Instead of providing a WYSIWYG editor, Textpattern uses textile markup in the textareas to create HTML elements within the pages. The pages that are generated are extremely lightweight and fast-loading.

Even though Textpattern is deliberately simple in design, the backend is surprisingly easy to use and intuitive. New users should be able to find their way around the administration section easily.

While Textpattern may be very minimal at the core level, you can always extend the functionality by 3rd party extensions, mods or plugins. Textpattern has an active developer community with lots of help and resources at their site.

6. Radiant CMS


The content management systems that we've listed so far are all PHP programs. PHP is the most popular language for web development, but that doesn't mean we should overlook other popular web languages like Ruby. Radiant CMS is a fast, minimal CMS that might be compared to Textpattern. Radiant is built on the popular Ruby framework Rails, and the developers behind Radiant have done their best to make the software as simple and elegant as possible, with just the right amount of functionality. Like Textpattern, Radiant doesn't come with a WYSIWYG editor and relies on Textile markup to create rich HTML. Radiant also has its own templating language, Radius, which is very similar to HTML, for intuitive template creation.

7. Cushy CMS


Cushy CMS is a different type of CMS altogether. Sure, it has all the basic functionality of a regular content management system, but it doesn't rely on a specific language. In fact, the CMS is a hosted solution. There are no downloads or future upgrades to worry about.

Cushy works by taking your FTP info and uploading content onto the server; the developer or designer can then modify the layout, as well as the posting fields in the backend, just by changing style classes. Very, very simple.

Cushy CMS is free for anyone, even for professional use. There is an option to upgrade to a pro account to use your own logo and color scheme, as well as other fine-grain customizations in the way Cushy CMS functions.

8. SilverStripe


SilverStripe is another PHP CMS that behaves much like WordPress, except that it has many more configurable options and is tailored towards content management rather than blogging. SilverStripe is unique because it was built upon its very own PHP framework, Sapphire. It also provides its own templating language to help with the design process.

SilverStripe also has some interesting features built into the base, like content version control and native SEO support. What's really unique about SilverStripe is that developers and designers can customize the administration area for their clients, if need be. While the development community isn't as large as those of other projects, there are some modules, themes and widgets to add functionality. Also, you'll want to modify the theme for each site, as SilverStripe doesn't provide much in terms of style, to give the designer more freedom.

9. Alfresco


Alfresco is a beefy JSP-based enterprise content management solution that is surprisingly easy to install. A really useful feature of Alfresco is the ability to drop files into folders and turn them into web documents. While Alfresco might be a little more work than some of the other CMSs and isn't as beginner-friendly, it is certainly quite usable given the massive power of the system. The administration backend is clean and well-designed.

While Alfresco might not be a great choice for most simple sites, it's an excellent choice for enterprise needs.

10. TYPOlight


TYPOlight seems to have the perfect balance of features built into the CMS. In terms of functionality, TYPOlight ranks with Drupal and ExpressionEngine, and even offers some unique bundled modules like newsletters and calendars. Developers can save time with the built-in CSS generator, and there are plenty of resources for learning more about the CMS.

If there is a downside to TYPOlight, it's that it has so many features and configurable options. Even though the backend is thoughtfully organized, there are still a lot of options to consider. But if you want to build a site with advanced functionality and little extra programming, TYPOlight could be a great fit.

Aug 26 2009

So, my fellow Drupallers, we are only inches away from the code freeze. Are we afraid yet?

A common trend amongst Drupal developers is that we’re all mostly on last year’s version. Many Drupal programmer blogs have only recently been upgraded to Drupal 6, or are even still running Drupal 5. Not picking on anyone in particular.

I think that’s a good indicator of a problem with Drupal. Upgrading is hard, and when the very people that do Drupal 24/7 are not upgrading, how can we expect anyone else to? And yes, I took a long time upgrading as well.

I think that’s part of the explanation for the long lag before modules were ready for Drupal 6. Everyone was still on Drupal 5, and there was no need or demand to upgrade your module.

This is bad for the community in many ways, and therefore, I’d like to challenge you, fellow module and core developers, to upgrade your blogs to Drupal 7. In September, or at the very latest in October, when the first release candidate comes out. Yes, before the final release.

This is uncommon in the Drupal world, but if we look at other open source projects, like Linux or the BSDs, the developers there are mostly eating their own dog food, running their main working computer right out of CVS/SVN/whatever HEAD. For inspiration, check out this video about the OpenBSD release process. Websites are public facing, so it’s a bit worse when something is broken, but shouldn’t we at the very least be able to run HEAD on our blogs when in the code freeze?

Yes, this is likely to be a painful experience. You will need to make patches for the modules you’re using. You will probably have to live without Views or Panels for a while. But you can also help Drupal 7 become our most successful release ever.

If enough developers do this, we will have a much better tested Drupal 7 core when it is released. Not just unit tests, but real, live sites. We will have many more modules available from day 1. Users looking to try out Drupal will be able to hit the ground running, instead of having to choose between the new release (Drupal 7) and the widely supported release (Drupal 6).

I know this is a lot of work, but is that not what we do as developers? Are you mice or are you men? Dare you come along on a perilous journey to the bleeding edge?

It is fraught with struggle and danger, but the rewards will be enormous.

P.S.: I forgot to mention it when first posting this, but this idea is highly inspired by Moshe Weitzman’s #D7CX – this is basically an extension of that, with more people pitching in.

P.P.S.: Edit: After talking a bit with Heine Deelstra on #drupal, who kindly explained to me that Drupal 7 HEAD contains several security vulnerabilities at the moment, I have opted to postpone the public-facing part of this challenge a few months. I will continue running a Drupal 7 version of my blog on my development setup, but the real, online copy will stay Drupal 6 for now :)

Aug 26 2009

In a classic case of the cobbler’s kids having no shoes, until yesterday this site was still running on Drupal 5. With the Drupal 7 code freeze just days away, I figured it was time. Some of my hesitation stemmed from bitter memories. My website has been around since the Drupal 4.6 days, and the upgrades to 4.7 and 5 were unpleasant. Turns out I should have done it months ago. Upgrading 5 to 6 has easily been the most straightforward upgrade yet. It has taken me a fair bit of time, but that is mostly playing with all the new capabilities and having fun retweaking my workflow.

Here is a stream-of-consciousness report of the process.

  • Install Drupal 6.13 in new location, set up to point at it.
  • Make a copy of the old database.
  • Create sites/
    • Copy sites/default/default.settings.php to sites/
    • Edit settings.php $db_url to point at new copy of old db.
    • Set $update_free_access = TRUE
  • Run update.php – note, I haven’t moved over any modules, so this just updates the core. Quick and dirty approach that could come back to bite me.
  • Download the latest stable drupal 6 version of all my modules (started as short list, since this site is mostly a simple blog, but quickly grew).
  • Enable Backup & Migrate first, and use it to take a db snapshot.
  • Turn on all the contrib modules I want to use.
  • Run update.php again.
    • This went fine, if it had failed, I would have reverted to the snapshot, and updated one module at a time.
    • Recreated my views
    • Annoying but straightforward. I love views2
  • Upgraded from Simple Menu module to new industry standard: Admin Menu.
  • Used my Format Manager module to debug my messed up formats.
    • Textile filter doesn’t play nicely with some other filters.
    • Textile filter only seems to be handling two levels of indented bullets.
  • Used Better Formats to setup a different default format for me and anonymous users.
  • Used Twitter module to create a nicer looking view block, rather than just Aggregator to pull down my feed.
    • Also setup to tweet when I add a new blog post.
  • Switched to view blocks for Recent Comments
  • Using Wabi theme instead of Styled Beauty (which has no Drupal 6 version)
    • Not quite as arty, but very readable.
    • A little table crazy.
    • Not sure if I am sold, but it is fine for now.
    • Annoyingly, you need to resubmit the theme configuration page after changing the style.css file, so the color-style.css override file is regenerated.
    • Tweaked page.tpl.php to work with PopupAPI.
    • Tweaking theme to remove homepage (aka spam bait) from Anonymous comment form
    • Tweaking theme to highlight the author's comments.
  • Replaced obsolete Google Sitemap with XMLSitemap
  • Moving from Mollom to CAPTCHA + Spam – we’ll see how it goes
    • Note: You really need to install a TTF font for CAPTCHA to be usable.
    • It would be nice if Spam integrated with views.
    • Installing Comment Notify to let anonymous commenters see when someone else replies to thread
    • I hope this will make discussions more dynamic.
    • But, opens up bigger window for comment spam.
  • Hmm, missing the ability to set URL redirects directly on a node edit form?
  • Replace Admin block module with a comment view for keeping on top of moderated comments (now auto-moderated by spam module).
Aug 26 2009

In this very brief article, I highlight the key properties of CDNs: what differentiates them and which technical implications you should keep in mind.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity.

Since fast delivery is what every CDN is built for, CDNs differentiate themselves through their feature sets, not through performance. Depending on your audience, the geographical spread (the number of PoPs — points of presence — around the world) may be very important to you. A 100% SLA is also nice to have: it means the CDN guarantees that it will be online 100% of the time.
You may also choose a CDN based on the population methods it supports. There are two big categories here: push and pull.

Pull requires virtually no work on your side: all you have to do is rewrite the URLs to your files, replacing your own domain name with the CDN’s domain name. The CDN will then apply the Origin Pull technique and will periodically pull the files from the origin (that is, your server). How often that happens depends on how you have configured headers (particularly the Expires header). It also depends on the software driving the CDN — there is no standard in this field. It may result in redundant traffic because files are pulled from the origin server more often than they actually change, but this is a minor drawback in most situations.

Push, on the other hand, requires a fair amount of work on your part to sync files to the CDN. In return you gain flexibility: you can decide when files are synced, how often, and whether any preprocessing should happen. That is much harder to do with Origin Pull CDNs. See this table for an overview:

                      Pull                  Push
  Transfer protocol   none                  FTP, SFTP, WebDAV, Amazon S3 …
  Advantages          virtually no setup    flexibility, no redundant traffic
  Disadvantages       no flexibility,       setup and syncing required
                      redundant traffic

It should also be noted that some CDNs, if not most, support both Origin Pull and one or more push methods.
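The Origin Pull rewriting described above can be sketched in a few lines (a minimal sketch; the domain names are hypothetical examples):

```javascript
// Sketch of the Origin Pull approach: all that is needed to serve a file
// through the CDN is to swap your own domain name for the CDN's in the
// file's URL. The domain names here are hypothetical.
function cdnUrl(fileUrl, originDomain, cdnDomain) {
  return fileUrl.replace(originDomain, cdnDomain);
}

var original = "http://example.com/misc/drupal.js";
var rewritten = cdnUrl(original, "example.com", "cdn.example.net");
// rewritten is "http://cdn.example.net/misc/drupal.js"
```

The CDN then fetches `/misc/drupal.js` from the origin server the first time it is requested, and re-fetches it according to the origin's Expires header.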
The last thing to consider is vendor lock-in. Some CDNs offer highly specialized features, such as video transcoding. If you then discover another CDN that is significantly cheaper, you cannot easily move, because you are depending on your current CDN’s specific features.

My aim is to support the following CDNs in this thesis:

  • any CDN that supports Origin Pull
  • any CDN that supports FTP
  • Amazon S3 and Amazon CloudFront. Amazon S3 (or Simple Storage Service in full) is a storage service that can be accessed via the web (via REST and SOAP interfaces). It is used by many other web sites and web services. It has a pay-per-use pricing model: per GB of file transfer and per GB of storage.
    Amazon S3 is designed to be a storage service and only has servers in one location in the U.S. and one location in Europe. Recently, Amazon CloudFront has been added. This is a service on top of S3 (files must be on S3 before they can be served from CloudFront), which has edge servers everywhere in the world, thereby acting as a CDN.

This is a republished part of my bachelor thesis text, with thanks to Hasselt University for allowing me to republish it. This is section five in the full text.

Previously in this series:

Aug 26 2009

Following the example of The king of Denmark & Drupal, I am sharing my top picks from the DrupalCon Paris 2009 schedule. If you are reading this and going to Paris, let me know in the comments or email me via the DrupalCon contact form; I would like to meet my readers...

Day 1

Paris, Paris, sightseeing and a #DFD meetup. Let's make this happen. Meeting other Drupal designers/themers would be awesome.


Day 2

10:00 - Keynote: The State of Drupal by Dries
11:20 - Real-time End-User Theme Configuration
13:40 - Explore the glory of Drupal 7's improved render and theming systems
14:50 - Aegir: Build Once, Deploy often. Real life use-cases.
16:10 - Growing Drupal by 100x
17:15 - Rules: How to leverage rule-based automation on your sites!

Day 3

9:00 - Open Atrium: Building a product with Drupal and the Power of Decentralized Features, or Functional Fips: PHP basics for themers
10:00 - Keynote: Social + Media: What We Need Next
11:20 - Web Typography Fundamentals: From Em Dash to Hanging Punctuation
13:40 - All youre (x)html(5) are belong to us!
14:50 - Making Drupal Dance: Techniques for Beautiful, Core-worthy Designs
16:10 - Drupal Ingredients for your Website Dish, or Modules You Should Use on Every Site
17:15 - Agile Drupal Development with Scrum

Day 4

9:00 - Learn to use the CTools suite or Functional Interactive Design
10:00 - Keynote: Semantic Web fundamentals
11:20 - jQuery for designers and themers or What's new in Panels 3
13:40 - Enterprise Drupal Site And Team Management Panel
14:50 - Designing Grid Systems Does Not Begin And End With 960
16:10 - Accelerated grid theming using NineSixty
17:15 - Keynote: Closing session

Aug 26 2009

Views 2 has a great feature called Relationships, allowing your view to aggregate fields from various content types which are somehow related. For example, say you have a Person content type and a Dependent content type. Within your view, you might want the Person's first/last name, age, etc., as well as that Person's Dependent's first name. With Views 2, you connect these content types in a view via Relationships and all is dandy. In Views 1 (Drupal 5), however, this is impossible without custom code.

I was recently able to pull this off using the Views 1 API via hook_views_query_alter() and appending fields to the $view->field array; however, this isn't the best approach. A better solution would use $query->add_field(), but I couldn't get it working. I'm posting this as an interim solution for anyone needing something similar, and in the hope that someone might comment with a better one. Here it is (based on a view with filter: content_type = uprofile):


/**
 * Implementation of hook_views_query_alter().
 */
function my_relationships_views_query_alter(&$query, &$view, $summary, $level) {
  // This is important; otherwise the added view fields become statically
  // cached based on the first page visit.
  $view->is_cacheable = FALSE;

  // While views_view_add_field() should do the trick, its function
  // parameters are insufficient, so I rewrote my own version of that
  // function below. Original signature:
  // views_view_add_field($view, $table, $field, $label, $sortable = FALSE, $default_sort = 0, $handler = '');
  _uprofile_add_field($view, 'bio', 'nid', 'Email', '_uprofile_add_email');
  _uprofile_add_field($view, 'bio', 'nid', 'Role', '_uprofile_add_role');
}

function _uprofile_add_field(&$view, $table, $field, $label, $handler, $sortable = TRUE, $position = 0, $fullname = '', $id = '', $queryname = '') {
  if ($position == 0) {
    $position = count($view->field) + 1;
  }
  if ($fullname == '') {
    $fullname = "$table.$field";
  }
  if ($id == '') {
    $id = "$table.$field";
  }
  if ($queryname == '') {
    $queryname = "{$table}_{$field}";
  }
  $view->field[] = array(
    'tablename' => $table,
    'field' => $field,
    'label' => $label,
    'handler' => $handler,
    'sortable' => $sortable,
    'position' => $position,
    'fullname' => $fullname,
    'id' => $id,
    'queryname' => $queryname,
  );
  $view->table_header[] = $label;
}

function _uprofile_add_email($fieldinfo, $fielddata, $value, $data) {
  return db_result(db_query('SELECT u.mail
       FROM {bio} b, {users} u
     WHERE b.uid = u.uid AND b.nid = %d', $value));
}

function _uprofile_add_role($fieldinfo, $fielddata, $value, $data) {
  return db_result(db_query('SELECT r.name
       FROM {bio} b, {users_roles} ur, {role} r
     WHERE r.rid = ur.rid AND ur.uid = b.uid AND b.nid = %d', $value));
}
Aug 25 2009

In this article, I explain what was required to integrate the Episodes page loading performance monitoring system with Drupal.
Episodes was written by Steve Souders, who is well known for his research on high-performance web sites and has authored multiple books on this subject.

The work I am doing as part of my bachelor thesis on improving Drupal’s page loading performance should be practical, not theoretical. It should have a real-world impact.

To ensure that it does, I wrote the Episodes module. This module integrates the Episodes framework for timing web pages (see the “Episodes” section in my “Page loading profiling tools” article) with Drupal on several levels — all without modifying Drupal core:

  • Automatically includes the necessary JavaScript files and settings on each appropriate page.
  • Automatically inserts the crucial initialization variables at the beginning of the head tag.
  • Automatically turns each behavior (in Drupal.behaviors) into its own episode.
  • Provides a centralized mechanism for lazy loading callbacks that perform the lazy loading of content. These are then also automatically measured.
  • For measuring the css, headerjs and footerjs episodes, you need to change a couple of lines in the page.tpl.php file of your theme. That is the only modification you have to make by hand. It is acceptable because a theme always must be tweaked for a given web site.
  • Provides basic reports with charts to make sense of the collected data.

I actually wrote two Drupal modules: the Episodes module and the Episodes Server module. The former is the actual integration and can be used without the latter. The latter can be installed on a separate Drupal web site or on the same. It provides basic reports. It is recommended to install this on a separate Drupal web site, and preferably even a separate web server, because it has to process a lot of data and is not optimized. That would have led me too far outside of the scope of this bachelor thesis.

You could also choose to not enable the Episodes Server module and use an external web service to generate reports, but for now, no such services yet exist. This void will probably be filled in the next few years by the business world. It might become the subject of my master thesis.

1. The goal

The goal is to measure the different episodes of loading a web page. Let me clarify that via a timeline, while referencing the HTML:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
<html xmlns="" xml:lang="en" lang="en" dir="ltr">
    <title>Sample Drupal HTML</title>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <link rel="shortcut icon" href="" type="image/x-icon" />
    <link type="text/css" rel="stylesheet" media="all" href="" />
    <link type="text/css" rel="stylesheet" media="print" href="" />
    <script type="text/javascript" src=""></script>
    <script type="text/javascript">
    jQuery.extend(Drupal.settings, { "basePath": "/drupal/", "more": true });
    </script>
    <!--[if lt IE 7]>
      <link type="text/css" rel="stylesheet" media="all" href="" />
    <![endif]-->
    <script type="text/javascript" src=""></script>

The main measurement points are:

  • starttime: time of requesting the web page (when the onbeforeunload event fires, the time is stored in a cookie); not in the HTML file
  • firstbyte: time of arrival of the first byte of the HTML file (the JavaScript to measure this time should be as early in the HTML as possible for highest possible accuracy); line 1 of the HTML file
  • domready: when the entire HTML document is loaded, but just the HTML, not the referenced files
  • pageready: when the onload event fires, this happens when also all referenced files are loaded
  • totaltime: when everything, including lazily-loaded content, is loaded (i.e. pageready + the time to lazy-load content)

Which make for these basic episodes:

  • backend episode = firstbyte - starttime
  • frontend episode = pageready - firstbyte
  • domready episode = domready - firstbyte, this episode is contained within the frontend episode
  • totaltime episode = totaltime - starttime, this episode contains the backend and frontend episodes

These are just the basic time measurements and episodes. It is possible to also measure the time it took to load the CSS (lines 8-9, this would be the css episode) and JavaScript files in the header (line 10, this would be the headerjs episode) and in the footer (line 27, this would be the footerjs episode), for example. It is possible to measure just about anything you want.
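The basic episode arithmetic above can be sketched as a small function (a sketch only; the timestamp values, in milliseconds, are hypothetical):

```javascript
// Compute the basic episodes from the measurement points described above.
// Timestamps are in milliseconds; the sample values are hypothetical.
function computeEpisodes(t) {
  return {
    backend:   t.firstbyte - t.starttime,   // backend episode
    frontend:  t.pageready - t.firstbyte,   // frontend episode
    domready:  t.domready  - t.firstbyte,   // contained within frontend
    totaltime: t.totaltime - t.starttime    // contains backend and frontend
  };
}

var e = computeEpisodes({
  starttime: 0,      // page requested
  firstbyte: 250,    // first byte of HTML arrives
  domready:  800,    // HTML document loaded
  pageready: 1200,   // onload fired, all referenced files loaded
  totaltime: 1700    // lazily-loaded content finished
});
// e.backend === 250, e.frontend === 950, e.domready === 550, e.totaltime === 1700
```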

2. Making episodes.js reusable

The episodes.js file provided with the Episodes example is in fact just a rough sample implementation, one that indicates what it should look like. It contains several hardcoded URLs, does not measure sensible default episodes, and contains a few bugs. In short, it is an excellent and solid start, but it needs some work to be truly reusable.

There also seems to be a bug in Episodes when used in Internet Explorer 8. It is actually a bug in Internet Explorer 8 itself: near the end of the page loading sequence, Internet Explorer 8 seems to randomly disable the window.postMessage() JavaScript function, thereby causing JavaScript errors. After searching cluelessly for the cause for a while, I gave up and made Internet Explorer 8 also use the backwards-compatibility script (episodes-compat.js), which overrides the window.postMessage() method. The problem vanished. This is not ideal, but at least it now works reliably.
Finally, there was also a bug in the referrer matching logic: it only worked reliably in Internet Explorer and intermittently in Firefox, due to differences between browsers in cookie handling. Because of this bug, many backend episodes were not being measured; now they are.

I improved episodes.js to make it reusable, so that I could integrate it with Drupal without adding Drupal-specific code to it. I made it so that all you have to do is something like this:


<!-- Initialize EPISODES. -->
<script type="text/javascript">
  var EPISODES = EPISODES || {};
  EPISODES.frontendStartTime = Number(new Date());
  EPISODES.compatScriptUrl = "lib/episodes-compat.js";
  EPISODES.logging = true;
  EPISODES.beaconUrl = "episodes/beacon";
</script>

<!-- Load episodes.js. -->
<script type="text/javascript" src=""></script>

<!-- Rest of head tag. -->
<!-- ... -->


This way, you can initialize the variables to the desired values without customizing episodes.js. The line that sets EPISODES.frontendStartTime should be as early in the page as possible, because it provides the most important reference time stamp.

3. The Episodes module: integration with Drupal

Here is a brief overview of the highlights of what had to be done to integrate the Episodes framework with Drupal.

  • Implemented hook_install(), through which I set a module weight of -1000. This extremely low module weight ensures the hook implementations of this module are always executed before all others.
  • Implemented hook_init(), which is invoked at the end of the Drupal bootstrap process. Through this hook I automatically insert the JavaScript into the <head> tag that is necessary to make Episodes work (see the “Making episodes.js reusable” section). Thanks to the extremely low module weight, the JavaScript code it inserts is the first tag in the <head> tag.
  • Also through this same hook I add Drupal.episodes.js, which provides the actual integration with Drupal. It automatically creates an episode for each Drupal “behavior”. (A behavior is written in JavaScript and adds interactivity to the web page.) Each time new content is added to the page through AHAH, Drupal.attachBehaviors() is called and automatically attaches behaviors to new content, but not to existing content. Through Drupal.episodes.js, Drupal’s default Drupal.attachBehaviors() method is overridden — this is very easy in JavaScript. In this overridden version, each behavior is automatically measured as an episode.
    Thanks to Drupal’s existing abstraction and the override I have implemented, all JavaScript code can be measured through Episodes without hacking Drupal core.
    A simplified version of what it does can be seen here:
Drupal.attachBehaviors = function(context) {
  url = document.location;

  for (behavior in Drupal.behaviors) {
    window.postMessage("EPISODES:mark:" + behavior, url);
    Drupal.behaviors[behavior](context);
    window.postMessage("EPISODES:measure:" + behavior, url);
  }
};
  • Some of the Drupal behaviors are too trivial to be worth measuring, so it would be nice to be able to mark some of the behaviors as ignored. That is also something I implemented. Basically, I locate every directory in which one or more *.js files exist, create a scan job for each of these and queue them in Drupal’s Batch API. Each of these jobs scans each *.js file, looking for behaviors. Every detected behavior is stored in the database and can be marked as ignored through a simple UI that uses the Hierarchical Select module.
  • For measuring the css and headerjs episodes, it is necessary to make a couple of simple (copy-and-paste) changes to the page.tpl.php of the Drupal theme(s) you are using. These changes are explained in the README.txt file that ships with the Episodes module. This is the only manual change to code that can be done — it is recommended, but not required.
  • And of course a configuration UI (see the attached figures: “Episodes module settings form” and “Episodes module behaviors settings form”) using the Forms API. It ensures the logging URL (this is the URL through which the collected data is logged to Apache’s log files) exists and is properly configured (i.e. returns a zero-byte file).

4. The Episodes Server module: basic reports

Only basic reports are provided, highlighting the most important statistics and visualizing them through charts. Advanced/detailed reports are beyond the scope of this bachelor thesis, because they require extensive performance research (to be able to handle massive datasets), database indexing optimization and usability research.

  • First of all, the Apache HTTP server is a requirement, as its logging component is used for generating the log files. That logging component has been proven to be scalable, so there is no need to roll our own.
    The source of this idea lies with Jiffy (see the “Jiffy” section in my “Page loading profiling tools” article).
  • The user must make some changes to his httpd.conf configuration file for his Apache HTTP server. As just mentioned, my implementation is derived from Jiffy’s, yet every configuration line is different.
  • The ingestor parses the Apache log file and moves the data to the database. I was able to borrow a couple of regular expressions from Jiffy’s ingestor (which is written in Perl), but I completely rewrote it to obtain clean and simple code that conforms to the Drupal coding guidelines. It detects the browser, browser version and operating system from the User-Agent that was logged, with the help of the Browser.php library.
    Also, IPs are converted to country codes using the ip2country Drupal module.
    This is guaranteed to work thanks to the included meticulous unit tests.
  • And of course again a configuration UI (see the attached figure: “Episodes Server module settings form”) using the Forms API. It ensures the log file exists and is accessible for reading.

Due to lack of time, the basic reports are … well … very basic. It would be nice to have more charts and to be able to filter the data of the charts. In particular, these three filters would be very useful:

  1. filter by timespan: all time, 1 year, 6 months, 1 month, 1 week, 1 day
  2. filter by browser and browser version
  3. filter by (parts of) the URL

5. Insights

  • Episodes module
    • Generating the back-end start time on the server can never work reliably because the clocks of the client (browser) and server are never perfectly in sync, which is required. Thus, I simply kept Steve Souders’ onbeforeunload method to log the time when a next page was requested. The major disadvantage of this method is that it is impossible to measure the backend episode for each page load: it is only possible to measure the backend episode when the user navigates through our site (more specifically, when the referrer is the same as the current domain).
    • Even just measuring the page execution time on the server cannot work because of this same reason. You can accurately measure this time, but you cannot relate it to the measurements in the browser. I implemented this using Drupal’s hook_boot() and hook_exit() hooks and came to this conclusion.
    • On the first page load, the onbeforeunload cookie is not yet set and therefore the backend episode cannot be calculated, which in turn prevents the pageready and totaltime episodes from being calculated. This is of course also a problem when cookies are disabled, because then the backend episode can never be calculated. There is no way around this until the day that browsers provide something like document.requestTime.
  • Episodes server module
    • Currently the same database as Drupal is being used. Is this scalable enough for analyzing the logs of web sites with millions of page views? No. Writing everything to a SQLite database would not be better. The real solution is to use a different server to run the Episodes Server module on or even an external web service. Better even is to log to your own server and then send the logs to an external web service. This way you stay in control of all your data! Because you still have your log data, you can switch to another external web service, thereby avoiding vendor lock-in. The main reason I opted for using the same database, is ease of development.
      Optimizing the profiling tool is not the goal of this bachelor thesis, optimizing page loading performance is. As I already mentioned before, writing an advanced profiling tool could be a master thesis on its own.
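The cookie-based onbeforeunload technique discussed in the first insight can be sketched as follows (a simplified sketch; the exact cookie format is an assumption for illustration):

```javascript
// Sketch of the onbeforeunload technique: when the user leaves a page, the
// departure time and current URL are stored in a cookie; the next page
// reads them back to compute the backend episode. The cookie format here
// is an assumption, not the module's actual format.
function buildEpisodesCookie(now, currentUrl) {
  return "EPISODES=s=" + now + "&r=" + encodeURIComponent(currentUrl);
}

function parseEpisodesCookie(cookie) {
  var m = cookie.match(/EPISODES=s=(\d+)&r=([^;]*)/);
  if (!m) {
    return null; // first page load, or cookies disabled: no backend episode
  }
  return { starttime: Number(m[1]), referrer: decodeURIComponent(m[2]) };
}

// In the browser this would be written from an onbeforeunload handler via
// document.cookie. The backend episode is only valid when the stored
// referrer matches the current domain.
var cookie = buildEpisodesCookie(1250000000000, "http://example.com/node/1");
var parsed = parseEpisodesCookie(cookie);
// parsed.starttime === 1250000000000, parsed.referrer === "http://example.com/node/1"
```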

6. Feedback from Steve Souders

I explained to Steve Souders what I wanted to achieve through this bachelor thesis and showed him the initial work I had already done on integrating Episodes with Drupal. This is how his reply started:


Wow, this is awesome.

So, at least he thinks that this was a worthwhile job, which suggests that it will probably be worthwhile/helpful for the Drupal community as well.

Unfortunately for me, Steve Souders is a very busy man: he speaks at many web-related conferences, teaches at Stanford, writes books and works at Google. He did not manage to get back to me on the questions I asked him.

This is a republished part of my bachelor thesis text, with thanks to Hasselt University for allowing me to republish it. This is section eight in the full text.

Previously in this series:

Aug 25 2009

Yesterday was the third Drupal 7 accessibility taskforce meeting, and the final meeting before code freeze. With September 1 just around the corner, the Drupal accessibility community would like to reach out to the broader community for some additional help during this busy time, to get as many accessibility improvements into Drupal 7 core as possible.

There are many ways to help and to get involved. If anyone is wondering what they can do or how their skills may be useful please e-mail Everett to find out where your skills can best be put to use.

There are five issues that we would like to request assistance with. They are listed below with links to their issue pages.

    Thank you so much to everyone who has contributed to Drupal accessibility thus far, and thanks in advance for those who can find the time to contribute over the next week.



    Aug 25 2009


    Like many other content management systems, Drupal is built on PHP. With PHP, you can dig in and make it do whatever you want. Likewise, you can dig into Drupal and make it do most everything you want - well, almost. If you're not afraid of writing PHP code, then the sky's the limit. If you're limited by the time and desire to learn PHP, then Panels is likely the way to go!

    If you're seeking a high degree of control over the individual building blocks which make up the output of your Drupal site, you can choose one or more of the following. In order of complexity (and decreasing silliness), you can A) build an XSLT parsing system to re-render what Drupal has already rendered, fed custom XML by your own custom module, B) become a Drupal theming master (a topic for another video series), or C) use Panels and be done with it.

    My suggestion is to go with 'C' and be done with it. Hopefully, this video will provide you with the head start you're looking for when working with the Panels module.

    Aug 25 2009

    There was a great question on the Drupal developers mailing list the other day—one to which I've "rediscovered" the solution a few times—so I wanted to make sure that everyone was aware of it.

    The basic question is:

    When a node is being saved, how can you see what values have changed?

    The short answer is:

    Use the 'presave' operation to load a copy of the node before it's saved, stick it back into the node object, and in your 'update' operation code compare the "before" and "after" versions:

    /**
     * Implementation of hook_nodeapi().
     */
    function example_nodeapi(&$node, $op, $a3, $a4) {
      // We want to compare nodes with their previous versions. Ignore new
      // nodes with no nid since there's no previous version to load.
      if ($op == 'presave' && !empty($node->nid)) {
        // We don't want to collide with values set by other modules so we'll
        // use the module name as a prefix and a long name to be safe.
        $node->example_presave_node = node_load($node->nid);
      }
      elseif ($op == 'update') {
        // On update we pull the previous version out of the node and compare
        // it to the newly saved one.
        $presave = $node->example_presave_node;

        // Pretend we're comparing a single value CCK number field here.
        $field_name = 'field_example';
        if ($node->$field_name != $presave->$field_name) {
          // Report the change; drupal_set_message() is one way to surface it.
          drupal_set_message(t("The node's value changed from %previous to %current.", array(
            '%previous' => $presave->{$field_name}[0]['value'],
            '%current' => $node->{$field_name}[0]['value'],
          )));
        }
      }
    }

    Aug 24 2009

    Just a week to go, and we are busy, frenetically working on the last details to get you the coolest, most fun, most informative Drupal Conference ever. We cannot but express our regrets to those who will not be able to join us: know that we have worked hard to get you as much video coverage as possible, so you can still follow the conference.

    As you all may know, organizing an event of this stature is no easy feat, and we fully expect to meet a few more bumps in the road over the next seven days. But we know that with your support we'll be able to surmount anything. God willing, this time the Wi-Fi will work :)

    So, some additional information about how well this is going:

    • There will be over 850 people in attendance from all over the world.
    • We have a line up of 82 sessions lead by members of the Drupal community, not counting the dozens of BoF sessions.
    • We have 42 sponsors (6 Platinum, 2 Gold, 12 Silver, and 22 Bronze) representing Drupal service providers and corporate users including major national and international software and media companies with familiar sponsors like MTV and Microsoft.
    • There are still two sponsorship opportunities left for your company (Gold and Silver). You will not be on the printed materials, but we will get you exposure both at the venue and online. Remember: proceeds from sponsorship go to furthering the Drupal community. If you are interested, hurry and use the contact form on the conference website.
    • We have a great job fair lined up with the top Drupal service providers hiring from the best and the brightest developers and business persons the community has to offer.
    • We have added DrupalCampParis4 to the first day of Drupalcon! Please sign up online on the Barcamp website.
    • Lastly, we have sold 25 of the 30 remaining late registration tickets. There are only 5 tickets left! If you are interested in attending Drupalcon Paris and need a ticket, please purchase one today before they are all sold out.

    Now it's time to go visit and browse through the sessions, prepare your wits, your photoshops, and write some cool code you can show off. Keep a tight list of all you want to learn. Ready all your gripes, your dreams, your projects, for in a week's time 800 Drupalers will meet in the most beautiful city in the world (heck, can you really beat Paris?) and there will be plenty of opportunities to learn, plenty to talk about and exchange, opportunities to find a job and many new and wonderful projects to start (and for those that don't really follow the Drupal calendar, please look at what else is happening on September 1st...).

    Cheers, and chers Drupaliens, on vous attend!

    The Paris Organizing team

    Aug 24 2009

    In this article, seven distinctly different page loading profiling tools are compared: UA Profiler, Cuzillion, YSlow, Hammerhead, Apache JMeter, Gomez/Keynote/WebMetrics/Pingdom and Jiffy/Episodes. “Profiling” must be interpreted rather broadly: some of the tools cannot measure actual performance but are useful for gaining insight into page loading performance characteristics.

    If you cannot measure it, you cannot improve it.
    — Lord Kelvin

    The same applies to page loading performance: if you cannot measure it, you cannot know which parts have the biggest effect and thus deserve your focus. So before doing any real work, we will have to figure out which tools can help us analyze page loading performance. “Profiling” turns out to be a more accurate description than “analyzing”:

    In software engineering, performance analysis, more commonly today known as profiling, is the investigation of a program’s behavior using information gathered as the program executes. The usual goal of performance analysis is to determine which sections of a program to optimize — usually either to increase its speed or decrease its memory requirement (or sometimes both).

    So a list of tools will be evaluated: UA Profiler, Cuzillion, YSlow, Hammerhead, Apache JMeter, Gomez/Keynote/WebMetrics/Pingdom and Jiffy/Episodes. From this fairly long list, the tools that will be used while improving Drupal’s page loading performance will be picked, based on two factors:

    1. How the tool could help improve Drupal core’s page loading performance.
    2. How the tool could help Drupal site owners to profile their site’s page loading performance.

    1. UA Profiler

    UA Profiler is a crowd-sourced project for gathering browser performance characteristics (on the number of parallel connections, downloading scripts without blocking, caching, et cetera). The tests run automatically when you navigate to the test page from any browser — this is why it is powered by crowd sourcing.

    It is a handy reference to find out which browser supports which features related to page loading performance.

    2. Cuzillion

    Cuzillion was introduced on April 25, 2008 so it is a relatively new tool. Its tag line, “‘cuz there are zillion pages to check” indicates what it is about: there are a lot of possible combinations of stylesheets, scripts and images. Plus they can be external or inline. And each combination has different effects. Finally, to further complicate the situation, all these combinations depend on the browser being used. It should be obvious that without Cuzillion, it is an insane job to figure out how each browser behaves:

    Before I would open an editor and build some test pages. Firing up a packet sniffer I would load these pages in different browsers to diagnose what was going on. I was starting my research on advanced techniques for loading scripts without blocking and realized the number of test pages needed to cover all the permutations was in the hundreds. That was the birth of Cuzillion.

    Cuzillion is not a tool that helps you analyze any existing web page. Instead, it allows you to analyze any combination of components. That means it is a learning tool. You could also look at it as a browser profiling tool, as opposed to all the other listed tools, which are page loading profiling tools.

    Here is a simple example to achieve a better understanding. How does the following combination of components behave in different browsers?

    1. an image on domain 1 with a 2 second delay
    2. an inline script with a 2 second execution time
    3. an image on domain 1 with a 2 second delay

    First you create this setup in Cuzillion (see the attached figure: “The example situation created in Cuzillion”). This generates a unique URL. You can then copy this URL to all browsers you would like to test.

    As you can see, Safari and Firefox behave very differently. In Safari (see the attached figure: “The example situation in Safari 3”), the loading of the first image seems to be deferred until the inline script has been executed (the images are displayed when the light purple bars become dark purple). In Firefox (see the attached figure: “The example situation in Firefox 3”), the first image is immediately rendered and after a delay of 2 seconds — indeed the execution time of the inline script — the second image is rendered (the images are displayed when the gray bars stop). Without going into details about this, it should be clear that Cuzillion is a simple, yet powerful tool to learn about browser behavior, which can in turn help to improve the page loading performance.

    3. YSlow

    YSlow is a Firebug extension (see the attached figure) that can be used to analyze page loading performance through thirteen rules. These were part of the original fourteen rules — of which there are now thirty-four — of “Exceptional Performance”, as developed by the Yahoo! performance team.

    YSlow 1.0 can only evaluate these thirteen rules and has a hardcoded grading algorithm. You should also remember that YSlow just checks how well a web page implements these rules. It analyzes the content of your web page (and the headers that were sent with it). For example, it does not test the latency or speed of a CDN, it just checks if you are using one. Because you have to tell YSlow (via Firefox’s about:config) what the domain name of your CDN is, you can even fool YSlow into thinking any site is using a CDN (see the attached figures: “The original YSlow analysis” and “The resulting YSlow analysis”).

    Also remember that some of the rules it analyzes are only relevant to very big web sites. For example, one of the rules (#13, “Configure ETags”) is only relevant if you are using a cluster of web servers. For a more in-depth article on how to deal with YSlow’s evaluation of your web sites, see Jeff Atwood’s “YSlow: Yahoo’s Problems Are Not Your Problems”. YSlow 2.0 aims to be more extensible and customizable: it will allow for community contributions, or even web site specific rules.

    Since only YSlow 1.0 is available at the time of writing, I will stick with that. It is a very powerful and helpful tool as it stands, and it will only get better. But remember the two caveats: it only verifies rules (it does not measure real-world performance) and some of the rules may not be relevant for your web site.

    4. Hammerhead

    Hammerhead (see the attached figure: “A sample Hammerhead run”), announced in September 2008, is a Firebug extension that should be used while developing. It measures how long a page takes to load and it can load a page multiple times, to calculate the average and median page load times. Of course, this is a lot less precise than real-world profiling, but it allows you to profile while you are working. It is far more effective at preventing page loading performance problems caused by changes in code, because you have the test results within seconds or minutes after you have made these changes!

    Of course, you could also use YSlow (see the YSlow section) or Fasterfox, but then you have to load the page multiple times yourself (i.e. hammer the server, which is where the name comes from). And you would still have to set up the separate testing conditions that Hammerhead already sets up for you for each page load: empty cache and primed cache; for the latter there are again two possible situations: memory cache and disk cache, or just disk cache. Memory cache is of course faster than disk cache; that is also why that distinction is important. Finally, it supports exporting the resulting data into CSV format, so you could even create some tools to roughly track page loading performance over time.
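    The CSV export makes such simple tracking tools easy to build. As a sketch of what one could do (the CSV column layout below is invented for illustration; it is not Hammerhead’s actual export format):

```javascript
// Aggregate hypothetical Hammerhead-style CSV exports into average
// page load times per day, to roughly track performance over time.
function averageByDay(csv) {
  var sums = {}, counts = {};
  csv.trim().split('\n').slice(1).forEach(function (line) {
    var cols = line.split(',');          // [date, url, loadTimeMs]
    var day = cols[0], ms = parseFloat(cols[2]);
    sums[day] = (sums[day] || 0) + ms;
    counts[day] = (counts[day] || 0) + 1;
  });
  var averages = {};
  for (var day in sums) {
    averages[day] = sums[day] / counts[day];
  }
  return averages;
}

var csv = 'date,url,load_time_ms\n' +
          '2009-08-20,/node/1,1200\n' +
          '2009-08-20,/node/2,800\n' +
          '2009-08-21,/node/1,900\n';
var avg = averageByDay(csv);
// avg['2009-08-20'] is 1000, avg['2009-08-21'] is 900
```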

    5. Apache JMeter

    Apache JMeter is an application designed to load test functional behavior and measure performance. From the perspective of profiling page loading performance, the relevant features are: loading of web pages with and without their components, and measuring the response time of just the HTML, or of the HTML and all the components it references.

    However, it has several severe limitations:

    • Because it only measures from one location (the location from where it is run), it does not give a good overall picture.
    • It is not an actual browser, so it does not download components referenced from CSS or JS files.
    • Also because it is not an actual browser, it does not behave the same as browsers when it comes to parallel downloads.
    • It requires more setup than Hammerhead (see the Hammerhead section), so it is less likely that a developer will make JMeter part of his workflow.

    It can be very useful if you are doing performance testing (how long does the back-end need to generate certain pages?), load testing (how many concurrent users can the back-end/server setup handle?) and stress testing (how many concurrent users can it handle until errors ensue?). To learn more about load testing Drupal with Apache JMeter, see John Quinn’s “Load test your Drupal application scalability with Apache JMeter” article and part two of that article.

    6. Gomez/Keynote/WebMetrics/Pingdom

    Gomez, Keynote, WebMetrics and Pingdom are examples of third-party (paid) performance monitoring systems. They have four major disadvantages:

    1. limited number of measurement points
    2. no real-world browsers are used
    3. unsuited for Web 2.0
    4. paid & closed source

    6.1 Limited number of measurement points

    These services poll your site at regular or irregular intervals. This poses analysis problems: for example, if one of your servers is very slow just at that one moment that any of these services requests a page, you will be told that there is a major issue with your site. But that is not necessarily true: it might be a fluke.

    6.2 No real-world browsers

    Most, if not all, of these services use their own custom clients (as mentioned in Scott Ruthfield’s Jiffy presentation at Velocity 2008). That implies their results are not a representation of the real-world situation, which means you cannot rely upon these metrics for making decisions: what if a commonly used real-world browser behaves completely differently? Even if these services all used real-world browsers, they would never reflect real-world performance, because each site has different visitors and therefore also a different mix of browsers.

    6.3 Unsuited for Web 2.0

    The problem with these services is that they still assume the World Wide Web is the same as it was 10 years ago, when JavaScript was a scarcity rather than the abundance it is today. They still interpret the onload event as the “end time” for response time measurements. In Web 1.0, that was fine. But as the adoption of AJAX has grown, the onload event has become less and less representative of when the page is ready (i.e. has completely loaded), because the page can continue to load additional components. For some web sites, the “above the fold” section of a web page has been optimized, thereby loading “heavier” content later, below the fold. Thus the “page ready” point in time is shifted from its default.

    In both of these cases, the onload event is too optimistic, as explained in Steve Souders’ Episodes white paper.

    There are two ways to measure Web 2.0 web sites (covered by the Episodes presentation):

    1. manual scripting: identify timing points using scripting tools (Selenium, Keynote’s KITE, et cetera). This approach has a long list of disadvantages: low accuracy, high switching costs, high maintenance costs, synthetic (no real-world measurements).
    2. programmatic scripting: timing points are marked by JavaScript (Jiffy, Gomez Script Recorder, et cetera). This is the preferred approach: it has lower maintenance costs and a higher accuracy because the code for timing is included in the other code and measures real user traffic.
      If we were now to work on a shared implementation of this approach, we would not have to reinvent the wheel every time, and switching costs would be much lower. See the Jiffy/Episodes section later on.
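    The programmatic approach boils down to recording named timestamps (“marks”) as the page progresses and computing the intervals between them. A minimal, framework-neutral sketch of the idea (the function and mark names are invented for illustration; this is not Jiffy’s or Episodes’ actual API):

```javascript
// Collect named timing points as the page loads.
var marks = {};

function mark(name) {
  // Date.now() is enough for a sketch; real tools prefer
  // higher-resolution timers where available.
  marks[name] = Date.now();
}

function measure(name, startMark, endMark) {
  // An "episode" is simply the interval between two marks.
  return { name: name, duration: marks[endMark] - marks[startMark] };
}

// Usage: instrument the page at interesting points.
mark('starttime');   // as early as possible, e.g. inline in the head
mark('firstbyte');   // when the first chunk of HTML arrives
mark('onload');      // in the window load handler
mark('ajaxdone');    // after post-onload AJAX content has rendered

var totaltime = measure('totaltime', 'starttime', 'ajaxdone');
```

    Because the marks are set by the page’s own JavaScript, this measures real user traffic in real browsers, which is exactly what the polling services above cannot do.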

    6.4 Paid & closed source

    The end user is dependent upon the third-party service to implement new instrumentations and analyses. It is typical for closed source applications to only implement the most commonly requested features and because of that, the end user may be left out in the cold. There is a high cost for the implementation and also a very high cost when switching to a different third-party service.

    7. Jiffy/Episodes

    7.1 Jiffy

    Jiffy (presented at Velocity 2008 by Scott Ruthfield — alternatively, you can view the video of that presentation) is designed to give you real-world information on what is actually happening within browsers of users that are visiting your site. It shows you how long pages really take to load and how long events that happen while or after your page is loading really take. Especially when you do not control all the components of your web site (e.g. widgets of photo and music web sites, contextual ads or web analytics services), it is important that you can monitor their performance. It overcomes four major disadvantages that were listed previously:

    1. it can measure every page load if desired
    2. real-world browsers are used, because it is just JavaScript code that runs in the browser
    3. well-suited for Web 2.0, because you can configure it to measure anything
    4. open source

    Jiffy consists of several components:

    • Jiffy.js: a library for measuring your pages and reporting measurements
    • Apache configuration: to receive and log measurements via a specific query string syntax
    • Ingestor: parse logs and store in a database (currently only supports Oracle XE)
    • Reporting toolset

    Jiffy was built to be used by the WhitePages web site and has been running on that site. At more than 10 million page views per day, it should be clear that Jiffy can scale quite well. It has been released as an open source project, but at the time of writing, the last commit was on July 25, 2008. So it is a dead project.

    7.2 Episodes

    Episodes (also see the accompanying whitepaper) is very much like Jiffy. There are two differences:

    1. Episodes’ goal is to become an industry standard. This would imply that the aforementioned third-party services (Gomez/Keynote/WebMetrics/Pingdom) would take advantage of the instrumentations implemented through Episodes in their analyses.
    2. Most of the implementation is built into browsers (window.postMessage(), addEventListener()), which means there is less code that must be downloaded. (Note: the newest versions of browsers are necessary: Internet Explorer 8, Firefox 3, WebKit nightlies and Opera 9.5. An additional backwards compatibility JavaScript file must be downloaded for older browsers.)

    Steve Souders outlines the goals and vision for Episodes succinctly in these two paragraphs:

    The goal is to make Episodes the industrywide solution for measuring web page load times. This is possible because Episodes has benefits for all the stakeholders. Web developers only need to learn and deploy a single framework. Tool developers and web metrics service providers get more accurate timing information by relying on instrumentation inserted by the developer of the web page. Browser developers gain insight into what is happening in the web page by relying on the context relayed by Episodes.

    Most importantly, users benefit by the adoption of Episodes. They get a browser that can better inform them of the web page’s status for Web 2.0 apps. Since Episodes is a lighter weight design than other instrumentation frameworks, users get faster pages. As Episodes makes it easier for web developers to shine a light on performance issues, the end result is an Internet experience that is faster for everyone.

    A couple of things can be said about the current codebase of Episodes:

    • There are two JavaScript files: episodes.js and episodes-compat.js. The latter is loaded on-the-fly when an older browser is being used that does not support window.postMessage(). These files are operational but have not had wide testing yet.
    • It uses the same query string syntax as Jiffy uses to perform logging, which means Jiffy’s Apache configuration, ingestor and reporting toolset can be reused, at least partially.
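    Reporting measurements via a query string works by requesting a tiny resource with the results encoded in its URL, which the web server then logs. A hedged sketch of building such a beacon URL (the endpoint and parameter names here are invented, not Jiffy’s exact syntax):

```javascript
// Encode a set of measurements into a beacon URL whose query string
// can be logged server-side by a custom Apache LogFormat.
function buildBeaconUrl(endpoint, page, measurements) {
  var parts = ['page=' + encodeURIComponent(page)];
  for (var name in measurements) {
    parts.push(encodeURIComponent(name) + '=' + measurements[name]);
  }
  return endpoint + '?' + parts.join('&');
}

var url = buildBeaconUrl('/jiffy.gif', '/node/42', { backend: 120, frontend: 850 });
// e.g. "/jiffy.gif?page=%2Fnode%2F42&backend=120&frontend=850"
```

    In the browser, the measurement request is typically made by assigning this URL to the src of a 1×1 image, so no response handling is needed.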

    So, Episodes’ very raison d’être is to achieve a consensus on a JavaScript-based page loading instrumentation toolset. It aims to become an industry standard and is maintained by Steve Souders, who is currently on Google’s payroll to work full-time on all things related to page loading performance (which suggests we might see integration with Google’s Analytics service in the future). Add in the fact that Jiffy has not been updated since its initial release, and it becomes clear that Episodes is the better long-term choice.

    8. Conclusion

    There is not a single, “do-it-all” tool that you should use. Instead, you should wisely combine all of the above tools. Use the tool that fits the task at hand.

    However, for the scope of this thesis, there is one tool that jumps out: YSlow. It allows you to carefully analyze which things Drupal could be doing better. It is not necessarily meaningful in real-world situations, because, for example, it only checks whether you are using a CDN, not how fast that CDN is. But the fact that it tests whether a CDN is being used (or Expires headers, or gzipped components, or …) is enough to find out what can be improved, to maximize the potential performance.
    This kind of analysis is exactly what I will perform in the next section.

    There is one more tool that jumps out for real, practical use: Episodes. This tool, if properly integrated with Drupal, would be a key asset to Drupal, because it would enable web site owners to track the real-world page loading performance. It would allow module developers to support Episodes. This, in turn, would be a good indicator for a module’s quality and would allow the web site owner/administrator/developer to carefully analyze each aspect of his Drupal web site.
    I have created this integration as part of my bachelor thesis, the Episodes module. More on this in a follow-up article.

    This is a republished part of my bachelor thesis text, with thanks to Hasselt University for allowing me to republish it. This is section six in the full text.

    Aug 24 2009
    Aug 24

    I just finished reading a new book, Drupal 6 JavaScript and jQuery, by Matt Butcher. The book title makes it sound highly specialized, but in fact it's a great resource for a variety of readers, from Drupal beginners to JavaScript experts. Best of all, the book brings together two of my favorite open source technologies: Drupal and jQuery. If you aren't already a fan, I've written elsewhere about Drupal's benefits, and for jQuery, one statement should win you over: query HTML content using plain old CSS selectors!

    Matt does a great job leading the reader from very basic principles to advanced concepts, thoroughly enough to initiate less experienced coders, but quickly enough to get everyone to real meat right away. You will get immediate value from this book whether you are a web designer just starting out with Drupal and/or JavaScript, an intermediate coder looking to expand your skills with either Drupal or JavaScript, or an advanced Drupalista or JavaScriptor looking to bring the two together.

    Because I reach under the Drupal hood only sporadically, I love how Matt quickly covers all the basic terminology and functionality of Drupal, JavaScript, and jQuery to remind me how they all work, and work together. I can see turning to this book as a basic reference again and again.

    Best of all, in each chapter Matt provides a hands-on project worth doing for its own sake as well as for the added learning. Trying it out, I modified the rotating sticky node teasers project in Chapter 3 to complete something I've been wanting to do here on my blog for some time: make my Flickr-stream header images rotate dynamically. Read more about exactly how I did it. If I can do something like this in just a few minutes, that tells you a lot about the power and simplicity of Drupal and jQuery together, and Matt's ability to make it all understandable.

    Ready to dip your toes or delve deeper into how jQuery lets you "write less, do more" with Drupal? You can buy Matt's Drupal 6 JavaScript and jQuery book directly from the Packt website.

    FYI: I drafted this entire review sitting on an Oregon coast beach using the Evernote iPhone app. Pretty nice working conditions ;)

    Aug 23 2009
    Aug 23

    Thanks to the support of Google Summer of Code '09, I was able to develop the Recommender Bundle for Drupal.

    Basically, the modules make content recommendations based on such criteria as "Users who viewed this node also viewed", or "You might be interested in these nodes because you have visited similar nodes". Please refer to the project description of each module for more details.

    Followup new features development and code maintenance would be coordinated through the issue queue of each module. Please feel free to use it.

    Have fun using the modules! :)

    Aug 22 2009
    Aug 22

    OH: Yes! ♪Corned beef is on sale!♬ What? I love corned beef, so I thought I would sing about it.

    Aug 22 2009
    Aug 22

    There are a lot of UI improvements under way for Drupal 7. Wouldn't it be wonderful to be sure that these really are improvements, and that we don't overlook small details that make the entire building crumble? How about ongoing, small-scale, crowdsourced user testing? Leisa Reichelt issued a similar call some time ago to test the admin header. A pleasing number of people tested the header and helped check the names of the top-level categories and test (and probably confirm) the usefulness of the entire concept.

    What this aims at is a sustainable concept that will ideally become an ongoing process. Apart from this mid-term goal there is also a very here-and-now one: we urgently need to test the D7UX stuff before Drupal 7 comes out, or before the time even for small changes is over.

    You can help by:

    1. doing testing and uploading the videos
    2. finding some participants, if you don't want to do testing yourself
    3. being a testing participant, if you don't know Drupal too well

    A perfect participant has some web experience, has mastered things like eBay product selling forms and maybe has some experience with other CMSes or blogging software. No. 2 of the above list would mean another motivated tester has recruited some participants and could carry out the tests with them. Personally I am willing to do quite some testing, this can be a lot of fun :) We need interviewers and test participants, so what about building up an infrastructure where we have:

    - always enough test participants
    - always enough people who would like to do the interviews 
    - always enough nicely described tasks anyone can just pick from.

    The problem is often that only one of these three is there. Bringing them together is key. As long as this infrastructure is not in place: anyone who would like to participate, please leave a comment on this post, and we will get the ball rolling.

    How to do the testing 

    Get the latest D7UX special version via SVN from the Google repository, or if using SVN is not convenient for you, get a zip archive from here directly. I'll try to keep the zip file up to date. This version is tweaked a tiny little bit to get the icons into the shortcut bar, which are very important IMHO. Icons have been postponed past code freeze, so they're not in the official version atm. Please report back if there are any issues with the zipped version. Don't mind the .svn and CVS folders in there: they are simply inactive if you don't use SVN or CVS.

    You need PHP 5 and a PDO-enabled version of MySQL to install Drupal 7. Most Linux machines at your web host should be ready for that, but if you run XAMPP on Windows in particular, you need to fix MySQL in XAMPP.

    There is one thing to watch once you have it installed (choose the default install profile that is preselected): it appears as if you were logged in, but you are not. The JavaScript progress circle will spin forever if you click anything in the admin header. Click on "Add content" in the left sidebar menu, the login block will come up, log in. Now you can start :)

    Software and hardware

    Pick a screen recording software. I don't know the free ones, but actually any will do. If you have something like Camtasia on Windows or Silverback for the Mac, you are privileged. But actually you only need certain features: being able to cut the video, being able to denoise and optimize the audio, and being able to export it into, say, .mov or any other format that Vimeo accepts (it does not accept Flash formats like .swf or .flv). If you do not want to do videos or it is too much of a technical effort: no problem, do your interview and record just the audio, or transcribe it from memory. Any real-world testing is valuable, so we don't want to stop you by overloading the technical side.

    Doing the tests in person, ideally in surroundings familiar to the participant, is surely the best and nicest way. But you can also do your interview remotely with success: get TeamViewer (available for Mac and Windows). Get the full version yourself and send your participant the quick support version. Don't forget to zip the file, since every mail server in the world rejects mails with attached .exe files (I don't know about .dmg files for the Mac). Connect to the screen of your participant, ideally as remote administrator so you could control their mouse if you wanted. As long as you stay outside the TeamViewer window with your mouse you won't intrude on the other side. What sets TeamViewer apart from the other solutions I have seen: no need to install, it runs unobtrusively straight from the .exe file, leaves no traces on the client computer, and works through all corporate firewalls.

    Don't forget to show the remote mouse cursor (for some strange reason this is off by default); the setting is in the options shown at the top of your TeamViewer window. Call the participant via Skype (or via SkypeOut if a direct Skype connection is not possible). This way you can easily start your screen recording software and record the sound with good quality. Doing tests remotely is a viable alternative and greatly widens the pool of participants available to you.

    Let the fun begin

    Plug in a microphone (this is for doing tests in person). You can use the built-in microphone of your laptop, but it might pick up the noise of the keyboard, which is often very loud on the recording and cannot be removed. If you don't use the keyboard but only the mouse, you will be fine. Make sure the volume is high enough; as long as the recording is not too noisy, you can easily raise it afterwards. I bought a cheap battery-powered lavalier microphone for 20 Euros, which is almost perfect. If the recording is noiseless or low-noise, a low volume does not matter: you can increase the volume to any desired level when editing the video.

    Please, do the interview in English! Even broken English is much preferable to a foreign language. This may be hard on some participants, but taking it slow and easy might help. Ask the person kindly whether he/she allows uploading the video to Vimeo or another public platform. Just don't record their faces, and keep anonymity by not mentioning names (and if you do mention a name, ask for permission). Most people agree to uploading. If they don't, consider doing a thorough writeup of the video and publishing this in written form.

    Now you are set to go. Pick a topic from here or make it much simpler. Since we are mainly testing the admin overlay, very simple user interaction is enough. We want to see how general tasks are affected by the new interface and the fact that we have an admin theme. So here are some suggestions, of which two or three may be more than enough for your half-hour session.

    I've found that I spend half of the time doing small talk with the participants. This will make them forget this is a user test, so they feel at ease and act naturally. Half an hour is just about the concentration span before everybody gets tense. The rest is introduction, outro and breaks (yeah, breaks! Though it is best to keep them short; every screencast software offers this option).

    Suggestions for tasks:

    • Create an article and publish it to the front page.
    • Go to the front page (ridiculous? Oh no, you will see...)
    • Find Content - let them go to admin/content/node and recognize the article they have just written
    • Edit the article from that list, and/or edit it from the node page
    • Be aware of a big issue: The "Where am I?" - effect. Do they understand what the overlay is, what's their live site and when they are in an admin or "action" area?

    O.K. let's get more daring...

    • Choose a different theme for display (argh, there is only Garland and Stark right now... Nevertheless, do it)
    • Add the article to the menu. Either via the menu settings in the node form, or (very daring) through the menu admin page
    • Go to the user page and let's see what users we have.
    • Very daring: go to Blocks admin page and let them stare =8-0 Maybe ask what they think they can do here and try to make them try out and click around
    • Add some Tags to the articles (you need several articles) and click on the taxonomy term in node view to see where we go.
    • The most important: let your participant click around and explore for a while. Observe where the interface leads them and what reactions they show. Are they overwhelmed? Can they find a meaning in what they see? Do they feel motivated to act and try things out?

    Most of these tasks are done either in the admin theme or in the admin overlay. The overlay (black sides and top) looks different from the pure admin theme (white with greenish gray); try to find out if this is confusing.

    Now edit your recording so unnecessary parts are cut off and the sound is agreeable. Register a free Vimeo account (basic). No, we do not get money from Vimeo, but the quality is better than on YouTube and you can upload videos longer than 10 minutes. Join this Vimeo group and upload your video, ideally with some information about what you tested and about the person that participated. Done. You have contributed considerably to UX improvement in Drupal.

    If you want a bit more guidance, comment on this post as mentioned above or contact me via e-mail, and we (the UX team) will provide you with short scripts for what and how to test.

    The long term plan

    Bojhan Somers is currently building a subdomain site to coordinate all the efforts better. We will put and link all the material there and build the infrastructure outlined at the start of this post. Give us some time until it's ready. We can use the Vimeo group and maybe even this post as a temporary home :P Maybe we will also start a group.

    Code just works or doesn't. Is there a binary indicator for user interfaces? Well, probably not, but one can get quite close to that: a user either succeeds in performing a task or not. Further segmenting the target group into "novice", "some experience in Drupal" and some more levels of expertise can yield quite consistent results. There is also a position in user testing that says five participants on the same issue are enough to get a representative picture. After that, and often even before, reaction patterns start to repeat.

    Aug 22 2009
    Aug 22

    I'm reviewing my modules to determine each of their popularity for #D7CX. I also use this window before a major release to decide if any of my modules should just shrivel up and blow away into the dust of history.

    The following table is the list of modules where I am the node author or, in simpler terms, the lead maintainer.

    Projects where I am a major contributor
    Aug 21 2009
    Aug 21

    For some reason I find myself rewriting this little bit of code every time I need to update a bunch of nodes on a site. Going to post it here to save myself some time. Be aware that this might time out if you've got a large number of nodes; it is designed for up to a couple hundred nodes:

    // TODO: Set your basic criteria here:
    $result = db_query("SELECT n.nid FROM {node} n WHERE n.type = '%s'", 'task');
    while ($row = db_fetch_array($result)) {
      $node = node_load($row['nid']);
      if ($node->nid) {
        // Preserve the original creation date (node_submit() rebuilds
        // $node->created from the $node->date string).
        $node->date = format_date($node->created, 'custom', 'Y-m-d H:i:s O');
        // TODO: Test and set your own value here:
        if (empty($node->field_task_status[0]['value'])) {
          $node->field_task_status[0]['value'] = 'active';
          $node = node_submit($node);
          node_save($node);
          drupal_set_message(t('Updated <a href="!url">%title</a>.', array('!url' => url('node/' . $node->nid), '%title' => $node->title)));
        }
      }
    }
    Aug 21 2009
    Aug 21

    I intend to write a module to integrate Hyves within Drupal sites. Hyves is a Facebook-like community website that offers a nice API, much like Facebook does. Hyves is used mainly by users in the Netherlands.

    The module would offer lean methods to call the Hyves API in a more standard way. There would be a settings page for developers and/or sitemasters to select the features they want.

    This post is mainly to check whether there would be people interested in a module like this. Please let me know and I will refactor a custom project into a Drupal module. :)


    Aug 20 2009
    Aug 20

    Everett Zufelt presenting to the Drupal in Government Showcase

    It was the first sunny day we've had in a week, a perfect day to spend on a patio after work drinking a pint. However, 35 people made their way to The Code Factory to attend the Drupal in Government Showcase. This was the largest Drupal event ever held in Ottawa and it clearly demonstrated that there is interest in this Content Management System.

    Having organized the event, we wanted to highlight the CLF 2.0 theme that we are providing to the community, as well as the extensive work that we have done on introducing accessibility enhancements to Drupal.

    NOTE: We have since followed up with another Drupal in Government event, which was recorded for future use.

    We had three interesting case studies presented. Mark Stephenson of RealDecoy, presented on their work with the Canadian Human Rights Museum. Wirespeak presented two case studies of their experience developing Drupal sites for government clients. 

    The three remaining speakers touched on a range of related issues. Jason Prini talked briefly about digitalOttawa's Common Look & Feel site. Speakers from Ingres & FOSSLC talked briefly about their work to make Drupal run on the Ingres database and about FOSSLC's implementation of Drupal. The User Advocate Group's Michael Baynger gave a short talk about usability as it pertains to Drupal.

    People attended from a number of government departments and agencies, including the Canadian Transportation Agency, Justice Canada, the National Search and Rescue Secretariat, Agriculture Canada, Natural Resources Canada, the Treasury Board, the National Capital Commission, the Canadian Border Security Agency, Environment Canada, and the City of Ottawa.

    The majority were not Drupal users, but were interested in a powerful, flexible, multilingual CMS and wanted to learn more. Having a range of developers speaking to Drupal's strengths and weaknesses was useful, as this was clearly not a sales pitch by a marketing department. Having competing organizations present together clearly shows the strength of a collaborative development community.

    Accessibility was one of the central points of the discussion around this session. The Government of Canada's CLF 2.0 guidelines include the Priority 1 and Priority 2 checkpoints of the Web Content Accessibility Guidelines 1.0 (WCAG). The universal accessibility guidelines are common to a lot of government agencies who have a mandate to communicate with the public. 

    Since the theme layer is distinct from the core application, many of the outstanding accessibility challenges can be resolved with an improved Drupal CLF theme. Most of the work that OpenConcept has been doing on accessibility issues has focused on the Authoring Tool Accessibility Guidelines 2.0 (ATAG) and improvements to form management.

    There was interest in having another meeting in the fall to continue the discussions that were started here. We may look at creating an open software stack and describing how to build a scalable, secure web solution with Drupal. We could see inviting Red Hat, Ingres, and Alfresco to present on solutions that bring open source web applications to government.

    Aug 20 2009

    Drupal's API doesn't contain any ready-made function to check if a user has a certain role. While it's certainly easy enough to do, it can take a moment to figure out initially. Here's a little function that demonstrates how to do it. It isn't itself particularly useful since checking for roles is trivial without it, but it should serve as a point of demonstration.

    /**
     * Determine whether a given user has a role.
     *
     * @param $role
     *   A string containing the name of the role we want to check.
     * @param $uid
     *   The UID of the user we wish to examine for the role. If not provided,
     *   the currently logged in user will be used.
     * @return
     *   TRUE if the user has the role, FALSE otherwise.
     */
    function <modulename>_user_has_role($role, $uid = FALSE) {
      // Can't just check for !$uid, since UID 0 is the anonymous user.
      if ($uid === FALSE) {
        global $user;
      }
      else {
        $user = user_load($uid);
      }
      // Check for the role: $user->roles is an array keyed by role ID,
      // with role names as values.
      if (in_array($role, $user->roles)) {
        // Role found.
        return TRUE;
      }
      // Role not found.
      return FALSE;
    }

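The heart of the helper is just an in_array() lookup on $user->roles, which Drupal keys by role ID with role names as values. Here is a stand-alone sketch of that check with hypothetical sample data (no Drupal bootstrap required); the sketch_user_has_role() name and the role data are invented for illustration:

```php
<?php
// Stand-alone sketch of the role check; $user and its roles are
// hypothetical sample data, not loaded from a real Drupal site.
function sketch_user_has_role($user, $role) {
  // $user->roles mirrors Drupal's format: role ID => role name.
  return in_array($role, $user->roles);
}

$user = new stdClass();
$user->roles = array(2 => 'authenticated user', 5 => 'editor');

var_dump(sketch_user_has_role($user, 'editor'));        // bool(true)
var_dump(sketch_user_has_role($user, 'administrator')); // bool(false)
```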
    Aug 20 2009

    Here's a quick example of writing an Action (5.x-2.x, which is a backport of 6.x's API style) for Views Bulk Operations. It allows you to take selected users from a view (Usernode) and assign them to selected Organic Groups from a secondary page.

    /**
     * Implementation of hook_action_info().
     */
    function go_views_bulk_ops_action_info() {
      return array(
        'go_views_bulk_ops_action' => array(
          'description' => t('Add Selected Users to OG'),
          'type' => 'node',
          'configurable' => TRUE,
        ),
      );
    }

    function go_views_bulk_ops_action_form($context) {
      $ogs = og_all_groups_options();
      $form['ogs'] = array(
        '#title' => t('Groups'),
        '#type' => 'checkboxes',
        '#options' => $ogs,
        '#description' => t('Select which Groups the selected users will join'),
      );
      return $form;
    }

    function go_views_bulk_ops_action_validate($form_id, $form_values) {}

    function go_views_bulk_ops_action_submit($form_id, $form_values) {
      return array('ogs' => $form_values['ogs']);
    }

    function go_views_bulk_ops_action(&$node, $context) {
      // Usernode maps user nodes to user accounts; look up the uid.
      $uid = db_result(db_query("SELECT uid FROM {usernode} WHERE nid = %d", $node->nid));
      foreach ($context['ogs'] as $gid) {
        // Unchecked checkboxes come back as 0; skip them.
        if ($gid) {
          og_save_subscription($gid, $uid, array('is_active' => 1));
        }
      }
    }
    Aug 19 2009

    This is an old post I sent to the drupal-devel mailing list more than three years ago, which I would like to revive as I think the small core / many distributions idea is finally gaining some ground. I haven't changed anything from the original post.


    A "great developer platform" is how I see Drupal, and I'm sure most other developers do too. But the thing is: we need some focus, some targets to agree on.

    The problem is: currently we are expecting Drupal core to be too many things at the same time - mostly that great developer platform (1), but also an out-of-the-box, ready-to-use community portal (2), or something of that kind. And the consequence is that we are handling too many issues when fixing bugs for 'Drupal core' that are too specific to some more 'user level, site specific' modules.

    The main point is that *core shouldn't need to be a nice out-of-the-box site* that anyone can use, and we need to jump on the *one core, many distributions* idea for that. That need to be a nice out-of-the-box site is actually consuming far more effort than a proper CMS core would.

    IMHO we are wasting effort on some higher-level modules, effort that should be focused on getting what is properly the core system working right in the first place. And I'm not making any judgement about which modules should be in core and which ones shouldn't. I mean, it's OK to have a few nice modules in core, but when a module grows too much - because people want all those nice features, and want them to be easy to set up - we can either a) move it out of 'core', or b) provide a basic one in core that can be extended by a contrib module.

    Just as an example - and I have nothing against the forum module: at some point, we are duplicating the taxonomy interface to handle a forum-specific vocabulary. Then this causes some bug (again, just an example), and we have a *core bug* that is module specific; but as it is a core module, that bug has the same importance - in the bug tracking system - as any other critical bug, like a basic API not working properly.

    Notice that on one side we have a very big module in core to handle forums, but on the other side we don't have a small one for what could be a badly needed feature for other modules to build upon: a basic image node module (again, just an example; I'm not talking about forum vs. image at all).

    And as a side effect... how many modules do you need to patch for what could be a small 'core patch' - if core were smaller - before your patch can even be considered, just so you can say 'it applies to HEAD'? And how many modules do you need to re-patch because some minimal detail changed in the main core patch during the follow-up on the issue tracker?

    So I want to insist on the idea of "many distributions to serve different sites, one core to serve them all" --nice 'mantra', isn't it? ;-)

    But, anyway, that's an old idea. Just look at Linux... Could you install 'Linux' - properly understood as the core OS - and expect to have a nice OS, ready to play with, on your computer?


    Aug 19 2009

    A while back, I wrote about taxonomies in Drupal - see post. We're now at the point where we're creating first drafts of our main taxonomies - a "master" taxonomy for organizing more or less ALL content on our flagship site and a few auxiliary ones for information like news and archives.

    So, I'm revisiting some of the links and presentations from my previous post and doing some more research on how best to go about creating our taxonomies. There is an interesting discussion here, by the way, on nodes/content types vs. taxonomy for organizing content in Drupal sites. With my limited Drupal knowledge (3-4 months as a user), I definitely feel that making everything a taxonomy is NOT the way to go. Content types and taxonomies are nicely complementary and should be kept that way.

    Anyway, we've been in an IARD (Information Architecture Re-Design) process for about 9 months now and have so far created or done the following:

    • Domain model
    • Content audit
    • Personas
    • Some user interviews, in the form of conversations with stakeholders, information gathered through a wiki, and some polling and surveys
    • Wireframing and prototyping

    and are doing some content modeling (slightly out-of-order!) now. The domain model, content modeling exercises and content audit are all informing the creation of our "master" taxonomy. It's tricky stuff. I'm a librarian by training and thus have some significant experience with many classification systems (Dewey, LC, Dublin Core, etc.). But, keeping things simple and allowing for flexibility (not letting the level of specificity get too high or granular) is difficult.

    So, here's what I'm reading to help:

    • Morville and Rosenfeld's

    as well as other IA resources. We're still at least 6 months away from the data & content migration phase so we have time to play around with various taxonomies and how they can help fix our confused IA.

    We are still committed to using only 2 of the 3 types of vocabularies that Drupal offers, namely simple lists of terms and organized hierarchies of terms. We are not planning to allow any free tagging of content initially. So the taxonomies we employ will, at first, only be used "behind the scenes" by content editors to organize content and to aid users in navigation and the creation of RSS feeds.

    This could prove to be the most challenging part of our project so far. But, for me, it will likely be the most exciting and rewarding!

    The Weekly Drupal is a weekly column about Drupal from the perspective of a new user. Use this feed to subscribe.

    Aug 19 2009


    I have been encouraged by the increased participation in the Drupal 7 accessibility issue queue in the past few weeks. There are still a number of outstanding issues that are fundamental to improving the accessibility of Drupal 7. Some of these issues can be dealt with after code freeze, others need to be dealt with before.

    I believe that Drupal accessibility has to be adopted and fostered at the grassroots level. Hopefully after code freeze some additional accessibility documentation can be added, and the documentation that is currently on the site can be reordered to make it more useful. I believe that clear and thorough documentation will lower the barrier to entry into the Drupal accessibility arena, making it easier for community members who currently don't know where to start to get involved.

    Ongoing Accessibility Challenges

    I believe that one of the problems that will continue to persist, even with the best efforts of the Drupal accessibility community, is that accessibility, like usability, CSS, PHP, and SQL, is a specific skill set.  Unfortunately, at this time, there are far more community members who possess these other skills than the skills related to accessibility.  I imagine that increased documentation and community involvement of those who do possess accessibility skills will help this situation, but also imagine that without the adoption of some form of minimal accessibility standard for Drupal that accessibility will continue to lag behind.

    Perhaps the best approach to take, given the current Drupal accessibility landscape, is for the Drupal accessibility community to continue to do its best to educate and assist with accessibility, until there is a threshold level of accessibility awareness within the broader community.  At that time, I believe that it will be useful, and achievable, to enforce a minimal accessibility requirement on all patches committed to core.  I also believe that it would be useful for community members who are interested in accessibility to raise awareness by asking the question: "How has accessibility been considered for this patch", on any issues involving user interface components.  I am happy to make myself available to anyone who has questions about improving the accessibility of core components at [email protected].

    Current Accessibility Needs

    As far as getting as much accessibility into Drupal 7 core as possible, I want to start by thanking the community for the support that we have received thus far.

    The following, in no particular order, are needs that I have from the broader community to assist me in using my time as efficiently as possible between now and September 1, and even after September 1, so that as much accessibility improvement as possible can be realized.

    1. It would be really helpful if, when community members notice an accessibility issue that they believe can be dealt with after code freeze, they would add a quick comment. This will help me to better prioritize issues.

    2. There are several areas that need work that I believe will be required before code freeze. A list of the areas, and an idea of resources that would be helpful to me, is below, in no particular order.

    a. Autocomplete - A brief walkthrough of how the JS events are currently handled would be useful, so that I can figure out what is causing "undefined" to often be announced to screen-reader users. I imagine that we will not get ARIA support for autocomplete into Drupal 7, but this may be able to be handled with a contributed module.

    b. Table-drag - At the very least some decisions need to be made about where to place an option to disable drag, and what the scope and persistence level of that option will be. It would also be helpful if someone familiar with the code could work with me to roll this patch, as it would save me the time of having to learn how this component works on the code level.

    c. Identifying "active" menu item links - I have read through menu_local_tasks() a few times, and am still not certain which calls to theme('menu_item_link', ...) are being made for active menu items. It would be helpful if someone familiar with the coding of the menu system could help me work through this so that I can roll a patch for theme_local_task not providing context to screen-reader users.

    d. Form element labeling - I am working through all form elements to identify problems with labels; there are currently a number of open issues about form element labeling. My suggested approach is that * every * form element that supports a label (e.g. not button or fieldset) have a label generated, and that a new property, "#show-label", be added to all of these elements. If the "#show-label" option is FALSE then the label will be styled with .element-invisible so that it does not appear on screen, but will still be accessible to screen-reader users. I am wondering whether any community members have thoughts about this approach.
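To make the suggestion concrete, here is a stand-alone sketch of how a "#show-label" property might behave at render time. Note that "#show-label" is the proposal above, not an existing Form API property, and the sketch_render_label() helper and sample element are invented for illustration:

```php
<?php
// Hypothetical sketch of the proposed '#show-label' behavior: the label is
// always generated, but when '#show-label' is FALSE it gets the
// .element-invisible class, hiding it visually while leaving it available
// to screen readers.
function sketch_render_label(array $element) {
  $class = '';
  if (isset($element['#show-label']) && $element['#show-label'] === FALSE) {
    $class = ' class="element-invisible"';
  }
  return '<label for="' . $element['#id'] . '"' . $class . '>'
    . htmlspecialchars($element['#title']) . '</label>';
}

echo sketch_render_label(array(
  '#id' => 'edit-mail',
  '#title' => 'E-mail address',
  '#show-label' => FALSE,
));
// Prints: <label for="edit-mail" class="element-invisible">E-mail address</label>
```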

    e. Form validation - As I understand it, invalid form fields currently have their labels marked with the class "invalid". I believe that the forms system needs to be modified so that more information is made available within the label for invalid form fields - at the very least an icon, or another structural element that supports text, indicating that the field is invalid. I believe that this will require a change to how theme_element() currently deals with labeling.

    f. Collapsible fieldsets - Currently there is no information available to screen-reader users regarding the function of the links used to expand and collapse fieldsets. There is a visual icon, but it is styled as a CSS background image and cannot have an alt attribute set. My recommendation is that the image be included within the link as an img element, and that the JS switch the class and alt attribute of the img where it currently switches the background image. I will need the assistance of someone familiar with the JS involved here to understand the functionality so that I can roll a patch for this issue.


    Some important steps toward Drupal 7 accessibility have been achieved thus far, but there is still more work to be done between now and code freeze. I am hoping that the issues listed above, along with many other smaller issues, will be able to be adequately addressed between now and the release of Drupal 7 to ensure that it is a robustly accessible CMS. Not only will improvements to Drupal's accessibility mean a great deal to users with disabilities, but it will also mean more opportunity to use Drupal in government and with other organizations where accessibility is a must.

    Aug 18 2009

    Some time ago I wrote an article that looks deeply at the Drupal path system and shows how easy it is for new developers to hook into a running Drupal system. I explore the idea that this openness and extensibility is a key factor in winning large numbers of developers to work on Drupal, and that this is one of the reasons the project is succeeding. The paper is now available at Acquia, beautifully formatted as a technical whitepaper. I’m very happy with this article and am excited to finally have it available. It’s well worth the short survey you’ll be asked to complete before you can download.

    Aug 18 2009

    created on Mon, 2009-08-17 17:55

    We're excited about the re-design! Most of us who work with Drupal on a regular basis are very familiar with the home of the Drupal project: the contributed modules and all the amazing developer activity that make Drupal such a great platform. But as Dries pointed out in his DrupalCon keynote this past year in DC, the .com page, until now just a simple placeholder forwarding visitors to the project, is apparently getting a significant amount of traffic. Let's face it - lots of people just assume your website ends with a .com extension. And since people who are already familiar with Drupal are likely browsing directly to the project site, we can assume that most of the folks who end up on the .com page are trying to learn about the project.

    Now these newbies have a great place to learn about our favorite CMS, and Drupal gets the slick showcase it deserves. The good folks over at Development Seed did an excellent job of designing the site, and the images they've used make for a very compelling presentation. We can't help but notice the smiling face of Raincity's own Robert Scales (who is responsible for many of Drupal's tattoos) and our good friend and community hero Boris Mann (apparently responsible for Drupal's great haircut). It's great to see the Vancouver Drupal community so well represented!

    Tattoo meets Suit

    The site looks great with this new makeover, and it's another victory in battling Drupal's reputation for being "less pretty" than other platforms like WordPress and Joomla. Efforts like the Design4Drupal camp held this past spring in Boston are building the design community and proving that with a good designer and a solid themer, anything is possible. We already know that Drupal is powerful, but, as at the end of many Hollywood teen flicks, the nerdy girl just took off her glasses on prom night and it turns out she's beautiful!

    And more than just being pretty, the new site has a message for the Fortune 500 companies: Drupal is enterprise ready. Many of us in the community have known this for quite some time after seeing Drupal deployed and scaled to serve high-traffic sites and massive online communities, but large companies have been slow to embrace open source technologies. Now, with companies like FedEx, Nokia, and Sanyo jumping on board and efforts being made by Microsoft to let their servers run PHP and MySQL, the future looks bright for Drupal and the Fortune 500. We're ready when you are!

    Drupal is Enterprise Ready

    And you may also notice the change of font in the Drupal logo - this is another sign of good things to come. If you haven't seen it yet, check out the redesign that Mark Boulton Design has been cooking up: Redesign Iteration 11 by Mark Boulton Design.


    Aug 18 2009

    both the 5.x and 6.x versions are now available for download on github. sorry, i just can't do CVS anymore. to download:

    1. start by going here:
    2. then click the all tags drop-down and choose the appropriate version
    3. then click the download button

    a full description of the module is available here

    available versions


    This is the stable version for Drupal 5.x. Note - if you're upgrading from a previous version, after install, just visit the log4drupal admin page, and make sure your options are okay. Minor changes were made to the admin options.

    The major change from previous versions is the new ability to automatically and recursively print out arguments. For example:

    log_debug("This message will recursively print out the contents of the node variable", $node);
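For anyone curious how that recursive printing might work, here is a stand-alone sketch using PHP's print_r(); this is only an illustration of the described behavior (the sketch_log_debug() name and sample node are invented), not log4drupal's actual implementation:

```php
<?php
// Sketch: every argument after the message is recursively dumped with
// print_r() and appended to the log line. log4drupal's real code may
// differ; this just mimics the described behavior.
function sketch_log_debug($message) {
  $args = array_slice(func_get_args(), 1);
  foreach ($args as $arg) {
    $message .= "\n" . print_r($arg, TRUE);
  }
  return $message;
}

// Hypothetical node object standing in for a real Drupal node.
$node = new stdClass();
$node->nid = 42;
$node->title = 'Example node';

echo sketch_log_debug('Node contents:', $node);
```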


    a bug fix to the version described here. allows proper operation if you choose the Path relative to Drupal root option for Filename precision.

    Aug 15 2009

    Look at the top keyword searches that bring people to my site, according to Google Analytics:

    1. tom geller (O.K., that's a gimme.)
    2. wamp drupal
    3. drupal wamp
    4. (content targeting)
    5. drupal on windows
    6. drupal windows

    Further, about one in five requests for support sent through my site's contact form is WAMP-related.

    So -- what's the story? Is it that WAMP is hopelessly messed up? Is there a vacuum of relevant information out there? (My Running Drupal on Windows using WAMP article is Hit #4 on Google.) Have you had problems running Drupal on WAMP? Does the Acquia Drupal stack installer for Windows help?

