Aug 11 2012

Drupal 7 uses InnoDB tables. InnoDB provides many benefits, but it can cause some unexpected headaches. One headache for me is that, by default, MySQL tells InnoDB to create one file on your system, called ibdata1, to hold ALL the data from EVERY InnoDB table you have on your MySQL server. This file never shrinks; it only expands to hold new data. If you delete something from MySQL or drop a table, the space that data was using is reallocated for new data, but never given back to the filesystem. This isn't a bad thing, especially if you have plenty of drive space and don't alter or drop databases frequently.

I develop a lot of sites on my little MacBook Air (with a 128GB SSD), so I often download database snapshots from live and testing environments, empty out the tables on my local environment, then import the database dumps. Can you spot the problem here?

Using Daisy Disk I just noticed that my ibdata1 file had grown to more than 10 GB, and my Air's drive only had about 5 GB free space!

So, after reading through MySQL's InnoDB Engine documentation and this answer on Stack Overflow, I found that it's not too hard to change MySQL to keep data tables in their own files, and delete the files after the tables are deleted (thus saving me a ton of space). It just takes a little time and annoyance.

Here's how to do it, roughly:

  1. Export/dump all your databases. (In my case, I didn't do this, since I could just grab them all from production or development servers.) If you have a good backup and restoration system in place, you shouldn't need to fret too much about this part, but if you don't, you'll probably need to spend a bit of time dumping each database or writing a script to do this for you.
  2. Drop (delete) all databases, except for the mysql database, and information_schema, if it exists.
  3. Shut down MySQL.
  4. Delete the ibdata1 file and any ib_logfile log files (I just had ib_logfile0 and ib_logfile1).
  5. Add innodb_file_per_table under the [mysqld] heading in your my.cnf file.
  6. Start MySQL.
  7. Import all your databases.
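The steps above look roughly like this on the command line. The database name and file paths here are examples; your MySQL data directory and my.cnf location depend on how MySQL was installed:

```shell
# 1. Dump all databases to one file (or dump each one individually).
mysqldump -u root -p --all-databases > all_databases.sql

# 2. Drop your databases (repeat for each, except mysql/information_schema).
mysql -u root -p -e "DROP DATABASE example_db;"

# 3. Shut down MySQL.
mysqladmin -u root -p shutdown

# 4. Delete the shared tablespace and InnoDB log files (path varies by install).
rm /usr/local/var/mysql/ibdata1
rm /usr/local/var/mysql/ib_logfile0 /usr/local/var/mysql/ib_logfile1

# 5. Add this line under the [mysqld] heading in my.cnf:
#      innodb_file_per_table

# 6-7. Start MySQL again, then re-import everything.
mysql -u root -p < all_databases.sql
```

After this, each InnoDB table lives in its own .ibd file, which is deleted when the table is dropped.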

After doing this, my 'mysql' directory with all my databases only took up about 3 GB (there are a few large databases I regularly work with)... but +/-3 GB is a lot less painful than 10+ GB!

I also took this opportunity to flush out some other testing databases that I had on my local computer for Drupal 4.7 (really!), 5, 6, 7 and 8 testing. It's easy enough to create a new database when the need arises, and with drush, it's easier than ever to create and sync databases and files for my Drupal sites.
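For example, with drush site aliases set up (the alias names below are hypothetical), pulling a fresh copy of a site's database and files down from production is just a couple commands:

```shell
# Copy the production database into the local site's database.
drush sql-sync @example.prod @example.local

# Sync the files directory down from production as well.
drush rsync @example.prod:%files @example.local:%files
```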

On most of the production servers I manage, I don't worry about setting innodb_file_per_table, because there are often only one, two or three databases, and they aren't constantly changing like on my local computer—they only grow over time, so the ever-increasing size of the ibdata1 file isn't concerning to me.

Jun 04 2012

For the past couple years, discussions about 'PSR-0', PHP standards, and some sort of framework standardizations have been popping up here and there. It wasn't until a bunch of 'PSR-0 Interoperability' patches started popping up in the Drupal core issue queues that I decided to take a closer look at PSR. (The latest? PSR-1 (Basic Coding Standard) and PSR-2 (Coding Style Guide) have been accepted).

There's a great FAQ that was just posted by Paul M. Jones explaining the PHP-FIG (PHP Frameworks Interoperability Group), which will give a little backstory to the group and its purpose. Drupal is a member of this group, with Crell (Larry Garfield) representing Drupal's vote for standards guidelines. You can see group members and discussions in the PHP Standards Working Group Google Group, and you can follow along with proposed and ratified group standards in the php-fig GitHub repository.

A lot of the larger PHP frameworks, CMSes and developer communities are represented, but—importantly—this group does not intend to represent PHP as a whole (that's probably the main reason it's now called the 'Framework Interoperability Group' instead of the 'PHP Standards Working Group'). Rather, it represents the mainstream PHP developer, and countless professional PHP developers working with and for the projects in the group. The main premise is that there are many large development groups working with PHP, and it would be helpful if these large groups could use a common set of coding standards, naming standards, and the like when developing their projects so things like the fruitful relationship between Symfony and Drupal can flourish (we're already seeing positive results here in the Drupal community!).

Having set standards that many organizations follow (such as PSR-0, PSR-1, etc.) also helps unify PHP development and bring it to a higher level; many others have (often rightfully) criticized the PHP language and developers for being fragmented, inconsistent and amateurish. I'm going to adopt PSR standards in my own PHP side projects (heck, most of my code already conforms, so it's not a big deal to me), and I'm glad many organizations are working towards adopting the standards as well. It will let our community spend more time working on making better end results and useful classes than arguing over whitespace, bracket placement, and control structure formatting (to name a few things...).
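To make this concrete: PSR-0 maps a fully-qualified class name like \Vendor\Package\Class_Name to the file Vendor/Package/Class/Name.php on the include path. The sketch below is adapted from the example autoloader in the PSR-0 standard itself:

```php
// PSR-0 style autoloader, adapted from the example in the PSR-0 standard.
function psr0_autoload($className) {
  $className = ltrim($className, '\\');
  $fileName = '';
  // Namespace separators become directory separators...
  if ($lastNsPos = strrpos($className, '\\')) {
    $namespace = substr($className, 0, $lastNsPos);
    $className = substr($className, $lastNsPos + 1);
    $fileName = str_replace('\\', DIRECTORY_SEPARATOR, $namespace) . DIRECTORY_SEPARATOR;
  }
  // ...and underscores in the class name become directory separators too.
  $fileName .= str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';
  require $fileName;
}
spl_autoload_register('psr0_autoload');
```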

May 15 2012

[Update: As of Views 7.x-3.4, you can now use the new "Global: Combine fields filter" to combine fields for an exposed search. Just add the fields you want to search to the view's Fields section, then add a 'Global: Combine fields filter' and select all the fields you want to search. Simple as that!]

A common need I run into with a ton of Drupal sites and Views is searching/filtering content based on multiple fields. For example, a lot of people would like to search for content using either the Title or the Body for a particular content type.

There are two primary solutions offered for this situation, but they both have downsides or are overly complex, in my opinion:

  • Use the Computed Field module to create yet another field stored in the database, combining the two (or more) fields you want to search, then expose a filter for that field instead of both of the individual fields. (I don't like this because it duplicates content/storage, and involves an extra module to do so).
  • Use the Views Filters Populate module to invisibly populate a second field that you've added to a views OR group (using Views OR in Views 2.x, or the built-in AND/OR functionality in Views 3.x). (This module is slightly limited in that you can only work with strings, and again, it involves an extra module).

Instead of using an extra module, I simply do the following to achieve a multi-field search:

  1. Add an and/or group to the filters in Views 3.x (next to 'Filter criteria', click the 'Add' drop down and choose 'and/or, rearrange').
  2. Put the main field you'd like to search into the new filter group (in my case, the Title field), and set the new group to OR.
  3. Implement hook_views_query_alter() in a custom module. In the query alter, you'll simply get the keyword parameter, and add a join and where clause (if you want to join to another table, like the 'body' data table). The code I'm using in this particular instance is below:

/**
 * Implements hook_views_query_alter().
 *
 * Allow users to search in the 'help' view by title OR body.
 */
function custom_views_query_alter(&$view, &$query) {
  // Only do anything when using the 'help' view.
  if ($view->name == 'help') {
    // Get the keyword used for the search.
    $keyword = isset($_GET['title']) ? $_GET['title'] : '';

    // Add a new LEFT JOIN and WHERE clause for the help node body.
    $join = new views_join();
    $join->construct('field_data_body', 'node', 'nid', 'entity_id');
    $query->table_queue['node__field_data_body'] = array(
      'table' => 'field_data_body',
      'num' => 1,
      'alias' => 'node__field_data_body',
      'join' => $join,
      'relationship' => 'node',
    );

    // The first parameter selects the 'AND/OR' group this WHERE will be added to.
    // In this case, we add it to the second group (the first one is an AND group for
    // 'status = published' and 'type = help').
    $query->add_where(2, 'node__field_data_body.body_value', '%' . $keyword . '%', 'LIKE');
  }
}

The documentation for views_join() and add_where() is somewhat vague, but basically, the code above only runs on the 'help' view. It gets the keyword from the URL parameters (works with or without AJAX-enabled views), then adds a join from the node table (where the 'Title' is) to the field_data_body table (where the content is), and adds a 'where' clause to the new 'OR' group we created in steps 1-2 above.

If you want to dig deeper into the query, just use the Devel module's dpm() function to show the $query object (dpm($query);).

(Note: This illustrates a pretty simple two-field search. I've used the same technique to search on more fields, just adding more where clauses, and making sure there are joins to all the tables where I'm searching... in one example, I searched a list of users by their username, real name (fields), phone number (a field), or email address).

Mar 19 2012

I was inspired today to get XHProf working on my Mac, using MAMP PRO 2.0.5/PHP 5.3.6, after reading @Dave Reid's tweet. Since I'm not leaving for DrupalCon until tomorrow, what else could I do today? There's an excellent article on Lullabot that will help you get 85% of the way towards having XHProf up and running on your Mac, working with your Drupal sites, but there are a few missing pieces and little tips that will help you get XHProf fully-armed and operational.

XHProf Callgraph example
Ooh, pretty visualizations!

First, after you've installed and configured XHProf on your Mac (and restarted MAMP/Apache so the configuration takes effect), you need to do a few things to get it working well with Drupal. For starters, if you have the Devel module installed, head over to its configuration page (at admin/config/development/devel), and check the box that says "Enable profiling of all page views and drush requests."

Now, enter the following values in the two fields that appear (note: these paths could be different depending on where you installed xhprof, and how you have your Sites folder/localhost set up. For simplicity, I kept the xhprof stuff in MAMP's htdocs folder):

  • xhprof directory: /Applications/MAMP/htdocs/xhprof
  • xhprof url: http://localhost/xhprof/xhprof_html

Save the configuration, and refresh the page. Scroll down to the bottom of the page, and click on the newly-added link in the Developer information section titled 'XHProf output'. You should see a huge table with large, menacing numbers. Don't worry about interpreting them just yet. (If you got an error, or something other than a table of a bunch of functions and numbers, then XHProf is not configured correctly).

Now, click on the [View Full Callgraph] link towards the top of the page. You'll probably get an error like:

Error: either we can not find profile data for run_id [ID-HERE] or the threshold 0.01 is too small or you do not have 'dot' image generation utility installed.

This is because GraphViz (which provides the 'dot' utility) is not installed on your computer, or it's not in your $PATH. So, go ahead and download the OS X-compiled version of GraphViz appropriate to your computer (I downloaded the Intel version 2.14.1), and install it (it uses the normal Mac installer, and puts the files in /usr/local/graphviz-2.14).

The final step to get dot working correctly is to make a symlink to the dot binary using ln -s (in my case, /usr/local/bin is in my $PATH, as defined in ~/.bash_profile):

$ sudo ln -s /usr/local/graphviz-2.14/bin/dot /usr/local/bin/dot

NOW, go ahead and jump back over to your fancy XHProf data table page, and click the Callgraph link. Wait a minute, and you'll be rewarded with a beautiful graphical representation of where Drupal/PHP spends all its time, with colors, arrows, lines, and numbers to your heart's content!

The last step for getting XHProf would be to install the XHProf module on your site and get the data displaying inside Drupal—but I haven't been able to install it yet on my own site (there was an installation error), and the standard interface that I get (provided by XHProf itself) is good enough for me.

(Remember to clean out the directory where you're saving your XHProf runs every now and then (this directory is configured in php.ini as the xhprof.output_dir variable); each run will be 100-200KB, and that adds up as you load and reload tons of pages!).
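A simple cron-able cleanup could look like the following; the directory and the .xhprof file suffix are assumptions, so check the xhprof.output_dir setting in your own php.ini first:

```shell
# Assumed output directory; check xhprof.output_dir in your php.ini.
XHPROF_DIR=/tmp/xhprof
mkdir -p "$XHPROF_DIR"  # No-op if it already exists.
# Delete saved profiler runs older than 7 days.
find "$XHPROF_DIR" -type f -name '*.xhprof' -mtime +7 -delete
```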

Mar 18 2012

Flocknote is a large web application that lets churches easily manage communications with their members via email, text message, and phone calls. Many of the core features of email marketing services like MailChimp and Constant Contact are implemented in flocknote similarly, such as list management and mass emailing (and many features like shared list/member information management, text messaging, etc. are unique to flocknote).

Until recently, few groups using flocknote had subscription lists big enough to hit our relatively high PHP max_execution_time setting when importing and exporting subscriber data. Since we're getting bigger, though, I've started implementing Batch API all over the place, so user-facing bulk operations not only complete without resulting in a half-finished operation, but also show the end user exactly how much has been done, and how much is left:

Exporting List Subscribers - Batch API CSV Export

I've seen many tutorials, blog posts, and examples for using Drupal's Batch API for importing tons of data, but very few (actually, none) for exporting tons of data—and specifically, in my case, building a CSV file with tons of data for download. The closest thing I've seen is a feature request in the Webform issue queue: Use BatchAPI to Export very large data sets to CSV/Excel.

Before I get started, I want to mention that, for many people, something like Views Data Export (for getting a ton of data out of a View) or Node Export (specifically for exporting nodes) might be exactly what you need, and save you a few hours' time working with Batch API. However, since my particular circumstance ruled out Views, and since I was exporting a bit more customized data than just nodes or users, I needed to write my own batch export functionality.

Quick Introduction to the Batch API

I'm sure most of you have encountered Drupal's awesome Batch API at some point or another. It lets your site perform a task (say, updating a few thousand nodes) while the user sees a progress bar (which is always nice for UX) without running into the dreaded PHP timeout. Sometimes increasing PHP's max_execution_time can help, but if you want things to scale, and if you want to keep your PHP configuration sane, you should instead split the large operation up into smaller chunks of work—which is what Batch API does.

In my case, I wanted to integrate the batch operation with a form that allowed users to select certain parameters for their export. There's an excellent example in the Examples for Developers module here: batch_example.module. I'd suggest you read through that file to get the basics of how Batch API works along with a form submission.

Basically, when you submit the form, you set a batch with batch_set(), then kick off the batch processing using batch_process().
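As a rough sketch (the form, function, and path names below are hypothetical, not flocknote's actual code), the submit handler looks something like this:

```php
// Hypothetical form submit handler that kicks off the export batch.
function MYMODULE_export_form_submit($form, &$form_state) {
  $list_id = $form_state['values']['list_id'];
  $batch = array(
    'title' => t('Exporting list subscribers'),
    'operations' => array(
      // A single operation; Batch API calls it repeatedly until it
      // reports $context['finished'] == 1.
      array('MYMODULE_export_list_subscribers_batch', array($list_id, $form_state['values']['option'])),
    ),
    'finished' => 'MYMODULE_export_list_subscribers_finished',
  );
  batch_set($batch);
  // Passing a path here sends the user there when the batch completes.
  batch_process('lists/' . $list_id . '/download-interim');
}
```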

Processing the Batch: Building a CSV file

For my huge CSV file, I needed to do a couple things to make sure I (a) didn't overwrite the file each time my batch process was called, and (b) had all the right data in all the right places.

In the function MYMODULE_export_list_subscribers_batch() (defined as the only operation in the $batch passed to batch_set()), I process 50 subscribers at a time, getting their profile data using some helper functions. I also check (in the first part) to see if the file has been created yet (basically, if this is the first or a later pass at the process function), and if it has not, I create the file, get the column headers for this particular list using a helper function, and store the filepath in the $batch's context.

I create a temporary file, which will automatically get cleaned out by the system after a while, because I just need the file to persist for a short while, until the user has downloaded the generated file (to get the filepath, I use file_directory_temp() and create a file with a list-specific file name).

On each pass of the batch process operation, I add in 50 more subscribers to the file (using fopen(), with the 'a' flag, so it adds on to the end of the file), and then store the current location of the file in the batch's context.

Code speaks louder than words, though, so here's the main batch operation function in all its glory (a few details are missing, but the relevant parts are all there):

/**
 * Batch operation to export list subscribers.
 */
function MYMODULE_export_list_subscribers_batch($list_id, $option, &$context) {
  // Start working on a set of results.
  $limit = 50;
  $context['finished'] = 0;

  // Create the CSV file with the appropriate column headers for this
  // list/network if it hasn't been created yet, and store the file path and
  // field data in the $context for later retrieval.
  if (!isset($context['sandbox']['file'])) {
    $list = node_load($list_id);

    // Get field names for this list/network. (I use a helper function here
    // to build $fields; a simplified set of labels is shown.)
    $field_labels = array(
      'Member ID',
      'First Name',
      'Last Name',
      'Email Address',
      'Phone Number',
      'Last Updated',
    );

    // Create the file and print the labels in the header row.
    $filename = 'list_' . $list_id . '_subscriber_export.csv';
    $file_path = file_directory_temp() . '/' . $filename;
    $handle = fopen($file_path, 'w'); // Create the file.
    fputcsv($handle, $field_labels); // Write the labels to the header row.
    fclose($handle);

    // Store file path, fields, subscribers, and network in $context.
    $context['sandbox']['file'] = $file_path;
    $context['sandbox']['fields'] = $fields; // ($fields comes from the helper mentioned above.)
    $context['sandbox']['subscribers'] = MYMODULE_retrieve_list_subscribers($list->nid, TRUE);
    $context['sandbox']['subscribers_total'] = count($context['sandbox']['subscribers']) - 1;

    // Store some values in the results array for processing when finished.
    $context['results']['filename'] = $filename;
    $context['results']['file'] = $file_path;
    $context['results']['list_id'] = $list_id;
  }

  // Accounting.
  if (!isset($context['results']['count'])) {
    $context['results']['count'] = 0;
  }

  // Open the file for writing ('a' puts pointer at end of file).
  $handle = fopen($context['sandbox']['file'], 'a');

  // Loop until we hit the batch limit.
  for ($i = 0; $i < $limit; $i++) {
    $number_remaining = count($context['sandbox']['subscribers']) - 1;
    if ($number_remaining) {
      $uid = $context['sandbox']['subscribers'][$context['results']['count']];
      // I use a helper function to get the data for each subscriber.
      $subscriber_data = MYMODULE_retrieve_account_data_for_export($uid, $context['sandbox']['fields'], $context['sandbox']['network']);
      fputcsv($handle, $subscriber_data);

      // Remove the uid from $context.
      unset($context['sandbox']['subscribers'][$context['results']['count']]);

      // Increment the counter.
      $context['results']['count']++;
      $context['finished'] = $context['results']['count'] / $context['sandbox']['subscribers_total'];
    }
    // If there are no subscribers remaining, we're finished.
    else {
      $context['finished'] = 1;
    }
  }

  // Close the file.
  fclose($handle);

  // Show message updating user on how many subscribers have been exported.
  $context['message'] = t('Exported @count of @total subscribers.', array(
    '@count' => $context['results']['count'],
    '@total' => $context['sandbox']['subscribers_total'],
  ));
}

There are a few things I can do to further optimize this, if need be; for example, I could probably run through the subscriber list in a better way, besides storing the whole thing (a bunch of integers) in an array, which doesn't scale infinitely. But those are micro-optimizations that I'll worry about if/when they become a problem.

Finishing the Batch: Delivering the CSV file

Because I want to deliver a .csv file download to the end user, and not just display a simple message like 'Congratulations! We built your CSV file... but you have to click here to download it!', I decided to have the batch operation set the CSV file download path in the user's session data, and then redirect to my own page at the end of the batch operation (to do this, I pass the final path to batch_process() when I call it in the form submit function).

Here's the 'finished' function for the batch, where I simply set a message, and set a couple session variables that will be used later:

/**
 * Finish the export.
 */
function MYMODULE_export_list_subscribers_finished($success, $results, $operations) {
  // The 'success' parameter means no fatal PHP errors were detected. All
  // other error management should be handled using 'results'.
  if ($success) {
    $message = format_plural($results['count'], 'One subscriber exported.', '@count subscribers exported.');
  }
  else {
    $message = t('There were errors during the export of this list.');
  }
  drupal_set_message($message, 'warning');

  // Set some session variables for the redirect to the file download page.
  $_SESSION['csv_download_file'] = $results['file'];
  $_SESSION['csv_download_filename'] = $results['filename'];
}

Here's the page building function for the path that I have the user go to at the end of the batch operation (after the _finished function is called above)—this page's path redirect is set by passing it into batch_process() as a simple string, way back in the form submit function:

/**
 * Interim download step for downloading CSV file.
 */
function MYMODULE_download_csv_file_interim($list_id) {
  if (empty($_SESSION['csv_download_filename']) || empty($_SESSION['csv_download_file'])) {
    return t('Please visit your list subscribers page to begin a list download.');
  }
  $list = node_load($list_id);

  // Redirect to the download file.
  $redirect = base_path() . 'path/to/download/csv/' . $list_id;
  drupal_add_js('setTimeout(function() { window.location.href = "' . $redirect . '"; }, 2000);', 'inline');

  $download_link = l(t('click here to download the file'), 'path/to/download/csv/' . $list_id);
  $output = '<p>' . t('Your subscriber list is now ready for download. The download should begin automatically. If it does not begin downloading within 5 seconds, please !download_link.', array('!download_link' => $download_link)) . '</p>';
  $output .= '<p>' . l(t("&#8592; Back to %list subscribers", array('%list' => $list->title)), 'node/' . $list_id . '/subscribers', array('html' => TRUE)) . '</p>';
  return $output;
}

I used JavaScript/setTimeout() on this page, and redirected to another path that actually delivers the CSV file to the end user, because otherwise, most browsers will block the download (without user intervention), or go to the downloaded file and show a blank white page. Here's the code that's used to deliver the actual CSV file at the redirect path defined above:

/**
 * Download a list subscriber CSV file.
 */
function MYMODULE_download_csv_file($list_id) {
  // For added security, make sure the beginning of the path is the same as that
  // returned by file_directory_temp() (to prevent users from gaining access to
  // arbitrary files on the server).
  if (strpos($_SESSION['csv_download_file'], file_directory_temp()) !== 0) {
    return 'Access denied.';
  }

  // Add HTTP headers for CSV file download.
  drupal_add_http_header('Content-Type', 'text/csv; charset=utf-8');
  drupal_add_http_header('Content-Disposition', 'attachment; filename=' . $_SESSION['csv_download_filename'], TRUE);
  // Allow caching, otherwise IE users can't download over SSL (see issue #294).
  drupal_add_http_header('Cache-Control', 'max-age=300; must-revalidate');

  // Read the file to the output buffer and exit.
  readfile($_SESSION['csv_download_file']);
  drupal_exit();
}

There are other ways to deliver a CSV file, but this seems to work the best for the widest variety of browsers. Setting the Cache-Control header is necessary to allow IE users to download files over SSL (due to caching settings and file path persistence in Windows/IE). Chrome, Firefox and Safari work fine without it...


I hope this example has helped you figure out how to use Batch API for more than just importing; it's a little more involved to build a file or something else using Batch API than to just do something that doesn't require extra steps afterwards. But with this example, hopefully you can start flexing Batch API's muscles to do a bit more for you!

If possible, I would always try using Views Data Export, as it's so much simpler to integrate with my custom data sets, and Views is really fast and easy to implement. But in this case, I had to pull in access-controlled data from user profile fields, from Profile2 profile fields specific to each list, and from some other data sources, all into one CSV file, and this just wasn't going to happen with Views.

I've tested this Batch processing with up to 50,000 users, and it takes a few minutes to generate the resulting ~5MB file. It's much nicer to see that the file is being built over time (the way it is now) than to have to wait while the page is loading (with no feedback), and then get a WSOD because the page timed out after about 10,000 subscribers.

Mar 11 2012

Preparing for your first DrupalCon? Even if this isn't your first, here are a few tips and tidbits I've learned from my first DrupalCon last year, and would like to pass on to you. (I'm posting this now so you have time to order the things you need to make your conference experience better and get it shipped!).

Keep things you need handy

I expected to have some downtime every now and then to run back to my hotel room and grab something I needed for later in the day (like a power cord), but quickly realized that I wouldn't have downtime. Instead, I ended up attending many awesomesauce presentations, BoFs (Birds of a Feather gatherings), core conversations, and informal meetings continuously, from the time I got into the convention floors until about 8 p.m. (and later!).

Bring a bag large enough to hold your laptop or iPad, a charger, a few snacks (granola bars are great!), and any other little devices or chargers you'll need during the day.

Power to the People!

Monster Outlets to Go

Hotels and convention centers have a very low AC outlet / conference attendee ratio. Usually something like 1:100. Most laptops' batteries last 3-5 hours. You're going to have your laptop on and with you all day, and the battery will die if you don't charge up every now and then.

One of the best things you can do, especially if you want people to not hate you for hogging an entire outlet for one laptop charger, is buy a travel power strip, like the one I bought for this year's DrupalCon—Monster's Outlets to Go Powerstrip*. There are a few other options out there, but I like this one the most due to its compactness. Some adapters even include one or two USB plugs (though not all are created equal—check to make sure the USB plugs provide enough power to charge your device!).

Instead of hogging a wall jack all to yourself, you can now power one or two of your own devices, and let one or two other people charge their devices.

For non-US residents, be sure you have the proper power adapters for your devices!

Don't only go to sessions

I made the mistake of trying to attend every session that piqued my interest last year. It wasn't until the last day of the conference that I hopped out of a session that had lost my interest and found that I was missing some of the best parts of DrupalCon:

  • Birds of a Feather gatherings (people basically come together and talk about/work through things they have in common, like newspaper websites, Church sites, or a passion for DevOps!).
  • Core Conversations (people who want to make Drupal better come together and, well, make Drupal better).
  • The Expo area (talking to some of the people in Drupal consultancies, or people from hosting providers, or anyone else on the expo floor, is pretty enriching).
  • The community (getting to meet people I converse with every week on drupal.org, in IRC, etc. is awesome).

Don't get me wrong; the sessions are awesome, but there's so much more to DrupalCon. Don't miss out!

Are you presenting? Don't forget these things!

Apple power extension adapter

If you're presenting, don't presume that everything will be ready for you. Even the best-planned events sometimes go a little awry—there's no power outlet, the projector only has HDMI when you only have a DVI adapter, etc.

A couple things that I never forget when traveling and presenting:

  • My extended power cord for my MacBook Air laptop—without it, I only get about 5' between my laptop and an outlet. With it, I get almost 10'. I never present without the laptop plugged in (see: Murphy's Law).
  • Every Mini DisplayPort-to-anything adapter I have: VGA, DVI, HDMI. Bring adapters for your own laptop... though you can usually borrow one if you need it.
  • A presenter's remote (if you want to move about during your presentation). My favorite is the Kensington Wireless Presenter.

Even if the presenter's manual says you'll be provided with something (power, cables, a microphone...), be prepared for the worst.

Bring an Ethernet cable (or two!)

Even an incredibly well-planned conference like DrupalCon is a WiFi network administrator's nightmare. With a few thousand attendees, you're talking about 5,000+ wireless devices (everyone seems to have a laptop, tablet, and smartphone). At times, even cell service can be spotty (especially if you use AT&T or Verizon, since a couple thousand other attendees are using the same cell towers as you!).

No matter the planning and number of access points, a wired connection will almost always beat wireless. And there are usually a few areas where someone has a hub set up to tie into the wired network. If you need to do some things that require a stable connection, having an Ethernet cable (and, if you're like me, the proper USB adapter for your MacBook Air) can be a godsend!

Come for the Community

Whatever you do, talk to people! Drupal is awesome because it's a great platform, but it's even more amazing because of the people who use it, develop it, promote it, etc. Talk to other attendees, meet people you only know through their profiles, and have a fun time!

Anything I missed? Share it in the comments.

* Whenever I link to Amazon products, I use my affiliate links. You can just search for the items if you don't want to use the affiliate links, but it helps me get a few cents if you buy something through my affiliate links :) It has also been pointed out to me that Monster may not be a very nice company. YMMV :-/

Mar 09 2012

I had a rather interesting feature to implement on flocknote lately (after doing a pretty vast redesign of the UX/UI on the site over the past month... it was refreshing to dig into PHP again!):

We want to allow insertion of YouTube and Vimeo (and potentially other) videos into 'Notes' on the site, and there are a few moving parts in this equation:

  • I had to create a text format filter similar to the 'Embedded media inline' module in Drupal 6 so people could simply put a 'merge tag' in their Note (like [video=URL]) where they want the video to appear.
  • When a user views the embedded video on the site, the video should show at a uniform width/height, and the user should be able to play it. Basically, the merge tag the user enters should be converted to the proper embed code for the provider (in this case, an <iframe> with the proper formatting).
  • When a user sees the video in the note email, the video can't actually play since very few email clients support any kind of video embedded in an email. So, instead, the video shows as a frame with a play button on top (this is the trickiest part), and links to the video on YouTube, Vimeo, etc.
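The text format filter piece (the first bullet above) can be sketched with Drupal 7's filter API. The hook implementation below is a hedged illustration, not flocknote's actual code: the filter and callback names are hypothetical, and real code would also map each provider's URL (YouTube, Vimeo, etc.) to its proper embed URL:

```php
/**
 * Implements hook_filter_info().
 */
function MYMODULE_filter_info() {
  $filters['video_merge_tag'] = array(
    'title' => t('Video merge tags'),
    'description' => t('Converts [video=URL] tags into embedded video players.'),
    'process callback' => 'MYMODULE_filter_video_merge_tag',
  );
  return $filters;
}

/**
 * Process callback: replace [video=URL] with an iframe at a uniform size.
 */
function MYMODULE_filter_video_merge_tag($text) {
  return preg_replace_callback('/\[video=([^\]\s]+)\]/', function ($matches) {
    // Real code would detect YouTube/Vimeo URLs here and build the
    // provider-specific embed URL before printing the iframe.
    $url = check_url($matches[1]);
    return '<iframe src="' . $url . '" width="480" height="270" frameborder="0" allowfullscreen></iframe>';
  }, $text);
}
```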

Creating my own Image Effect for a Video Play Button

What I wanted to end up with was an image that had a custom-made iOS-style play button (play icon in a circle with a translucent grey background) right in the middle (I like the simple look of videos on my iPad...):

Video Play Button Example

So, I decided to work with Drupal's Image Effect API and expose a new image effect, aptly named 'Video Play Button', to Drupal's simple set of 'Resize, Scale, etc.' image effects. This is a pretty simple process:

  1. Implement hook_image_effect_info() to tell Drupal about the new effect.
  2. Process the image (in $image->resource) in the 'effect callback' that you defined in hook_image_effect_info().

In my case, I calculated the center of the image to be processed, then subtracted half the play button's width and height (respectively) from the center dimensions. I then used those coordinates, along with the image handle ($image->resource) and the play button image (I used drupal_get_path() to get the path to my custom module directory, and put the image in 'images/play-button.png'), to build the final graphic using the PHP GD library's imagecopy() function.

Here's the image effect info hook implementation and callback I wrote to put the play button on top of the image:

/**
 * Implements hook_image_effect_info().
 */
function mymodule_image_effect_info() {
  return array(
    'mymodule_video_play_button' => array(
      'label' => t('Video Play Button'),
      'help' => t('Adds a video play button in the middle of a given image.'),
      'effect callback' => 'mymodule_video_play_button_callback',
      'dimensions passthrough' => TRUE,
    ),
  );
}

/**
 * Video Play Button image callback.
 *
 * Adds a video play button on top of a given image.
 *
 * @param $image
 *   An image object returned by image_load().
 *
 * @return
 *   TRUE on success. FALSE on failure to process the image.
 */
function mymodule_video_play_button_callback(&$image) {
  // Make sure the imagecopy() function exists (in GD image library).
  if (!function_exists('imagecopy')) {
    watchdog('image', 'The image %image could not be processed because the imagecopy() function is not available in this PHP installation.', array('%image' => $image->source));
    return FALSE;
  }

  // Verify that Drupal is using the PHP GD library for image manipulations
  // since this effect depends on functions in the GD library.
  if ($image->toolkit != 'gd') {
    watchdog('image', 'Image processing failed on %path. Using non GD toolkit.', array('%path' => $image->source), WATCHDOG_ERROR);
    return FALSE;
  }

  // Calculate the proper coordinates for placing the play button in the middle.
  $destination_x = ($image->info['width'] / 2) - 35;
  $destination_y = ($image->info['height'] / 2) - 35;

  // Load the play button image.
  $play_button_image = imagecreatefrompng(drupal_get_path('module', 'mymodule') . '/images/play-button.png');

  // Preserve transparency.
  imagealphablending($play_button_image, TRUE);
  imagealphablending($image->resource, TRUE);

  // Use imagecopy() to place the play button over the image.
  imagecopy(
    $image->resource,   // Destination image.
    $play_button_image, // Source image.
    $destination_x,     // Destination x coordinate.
    $destination_y,     // Destination y coordinate.
    0,                  // Source x coordinate.
    0,                  // Source y coordinate.
    70,                 // Source width.
    70                  // Source height.
  );

  return TRUE;
}

...and a PSD of the play button is attached, in case someone else wants to save themselves 10 minutes' drawing in Photoshop :)

If you want to look at more examples, there's another great example image effect in the Examples for Developers module's image_example.module.

imagecopy() vs. imagecopymerge()

...and Photoshop Save for Web vs. PNGOut optimization...

I spent almost an hour working on a couple different problems I encountered caused partly by the fact that I was using a compressed/optimized PNG file, and partly by the fact that I was misreading the documentation for two GD library image copy functions, imagecopy() and imagecopymerge().

First of all: instead of spending a ton of time struggling with weird file dimension issues, transparency issues, etc., and assuming your code is causing the problem (even though it may be), also try different image files, or try exporting the image file you're manipulating a different way. In my case, the image I was using had been run through PNGOut to remove any extraneous data, but apparently too much data was removed for PHP's GD library to understand the file correctly: the file's dimensions were distorted, the alpha transparency was not respected, and the image had lines of interpolation... all because I had tried to use an optimized PNG instead of the direct 'Save for Web...' image from Photoshop.

With regard to GD image functions, imagecopy() allows you to put one image on top of another one, hopefully preserving alpha transparency, etc., while imagecopymerge() puts an image on top of the other without preserving alpha transparency, but while allowing you to set the opacity of the source image manually (from 0-100%). I was originally trying to get imagecopymerge() to put a circle 'play' button (iOS-style) on top of the video image, but I found that the function was putting a square frame with a grey background instead of the nice transparent area around the circle. Switching to imagecopy() seemed to preserve the 24-bit PNG alpha transparency better.
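As a standalone illustration of that difference (plain PHP + GD, outside Drupal; the sizes and colors here are arbitrary stand-ins), imagecopy() respects the source's alpha channel when alpha blending is enabled on the destination, so fully transparent overlay pixels leave the base image untouched:

```php
<?php
// A dark grey 200x150 base image (stands in for the video frame).
$base = imagecreatetruecolor(200, 150);
imagefill($base, 0, 0, imagecolorallocate($base, 40, 40, 40));

// A 70x70 overlay: fully transparent except a translucent grey circle
// (stands in for the iOS-style play button PNG).
$overlay = imagecreatetruecolor(70, 70);
imagealphablending($overlay, FALSE);
imagesavealpha($overlay, TRUE);
imagefill($overlay, 0, 0, imagecolorallocatealpha($overlay, 0, 0, 0, 127));
$grey = imagecolorallocatealpha($overlay, 128, 128, 128, 40);
imagefilledellipse($overlay, 35, 35, 60, 60, $grey);

// Center the overlay, as the image effect callback does.
$x = (int) ((imagesx($base) / 2) - 35);
$y = (int) ((imagesy($base) / 2) - 35);
imagealphablending($base, TRUE);
imagecopy($base, $overlay, $x, $y, 0, 0, 70, 70);

// The overlay's fully transparent corner pixel leaves the base pixel
// unchanged; imagecopymerge() would have flattened the transparency.
$pixel = imagecolorsforindex($base, imagecolorat($base, $x, $y));
print $pixel['red'] . "\n"; // Still the base's 40.
```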

This bug report was especially enlightening when I was researching why imagecopymerge() wasn't working for me.


There are a few other moving parts to this equation, like retrieving the YouTube or Vimeo video frames, building the proper markup for different displays (on-site, email, mobile, etc.), etc., that I haven't gone into here, but I figured I'd share my experience creating a custom image effect here in case someone else wants to do something similar (like put watermarks on images for a photo site, or something like that).
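On the 'retrieving video frames' part, one detail worth noting: YouTube serves predictable thumbnail images for any video ID, so a helper like the following (a hypothetical sketch, not the site's actual code) can produce a frame URL to run through the 'Video Play Button' image effect described above:

```php
<?php
// Hypothetical helper: build a YouTube thumbnail URL from a video ID.
function sketch_youtube_thumbnail_url($video_id) {
  // 0.jpg is the full-size frame; 1.jpg, 2.jpg, 3.jpg are smaller frames.
  return '' . $video_id . '/0.jpg';
}

print sketch_youtube_thumbnail_url('abc123') . "\n";
```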

Jan 04 2012
Jan 04

Most of the time, Drupal's convention of printing comments and the comment form inside the node template (node.tpl.php) is desirable, and doesn't cause any headaches.

However, I've had a few cases where I wanted to put comments and the comment form somewhere else on the page. In the most recent case, I asked around to see what people recommended for moving comments out of the normal rendering method. I found a few mentions of using Panels, and also noticed the Commentsblock module, which does something like this using Views.

However, I just wanted to grab the normal comment information, and stick it directly into a block, and put that block somewhere else. I didn't want Views' overhead, or to have to re-theme and tweak things in Views, since I already have a firm grasp of comment rendering and form theming with the core comment display.

So, I set out to do something similar to an approach I found in a comment (one also suggested by Jimajamma on Drupal Answers).

First, I had to hide the comments from the normal rendering pipeline in node.tpl.php, which involved using template_preprocess_node() to set 'comment' to 0, and a check in node.tpl.php to make sure $content['comments'] would only be rendered if $comment evaluated to TRUE:

function THEMENAME_preprocess_node(&$variables) {
  // For note nodes, disable comments in the node template.
  if ($variables['type'] == 'note') {
    $variables['comment'] = 0;
  }
}

Then, I simply built a block in my custom module, and used the magic of comment_node_page_additions() to render the comments and comment form, just as they would render under the node, except in my own, spiffy comment block:

/**
 * Implements hook_node_view().
 */
function MODULENAME_node_view($node, $view_mode) {
  // Store node comments in a global variable so we can put them in a block.
  global $node_comments;
  if ($node->type == 'note' && isset($node->content['comments'])) {
    $node_comments = $node->content['comments'];
  }
}

/**
 * Implements hook_block_info().
 */
function MODULENAME_block_info() {
  $blocks['note_comments'] = array(
    'info' => t('Note Comments'),
    'cache' => DRUPAL_NO_CACHE,
  );
  return $blocks;
}

/**
 * Implements hook_block_view().
 */
function MODULENAME_block_view($delta = '') {
  global $node_comments;
  $block = array();
  if ($delta == 'note_comments') {
    // Get the active menu object.
    if ($node = menu_get_object()) {
      // Make sure the user is viewing a note.
      if ($node->type == 'note') {
        // Set the title of the block.
        $block['subject'] = NULL;
        $block['content'] = '';
        // Render the comments and comment form (access checks, etc. are done
        // by comment_node_page_additions()).
        $block['content'] .= drupal_render($node_comments);
      }
    }
  }
  return $block;
}

Then, after a quick trip to the Configure > Blocks page, where I assigned my block to a region, I had a slick comments block that I could render anywhere!

Dec 24 2011
Dec 24

apachebench is an excellent performance and load-testing tool for any website, and Drupal-based sites are no exception. A lot of Drupal sites, though, need to be measured not only under heavy anonymous traffic load (users who aren't logged in), but also under heavy authenticated-user load. There are some good tips for ab testing out there, but the details for using ab's '-C' option (notice the capital C... C is for Cookie) are lacking. Basically, if you pass the -C option with a valid session ID/cookie, Drupal will serve ab the page as if ab were an authenticated user.

Instead of constantly going into the database and looking up session IDs and such nonsense, I use a simple script (heavily revised from a 2008-era 2bits script that worked with Drupal 5) that prints the proper ab commands for stress-testing your Drupal site under authenticated user load. Simply copy the attached script (source pasted below) into your site's docroot, and run it from the command line as follows:

$ /path/to/drupal/root/ab-testing-cli.php 2 10

You'll get back the command to paste into the cli in order to test the URL you provided as an authenticated user. (Note: The sessions table needs to be populated for this to work, so someone (or a few someones) will need to have logged in during the past few hours/days for this to work correctly).

Here's the full code (file attached to bottom of post):

<?php

/**
 * @file
 * Script to generate ab tests for logged-in users using sessions from the
 * database.
 *
 * This script is based on an older script by 2bits for load testing
 * Drupal 5, located at:
 *
 * Place this script into the webroot of your Drupal site.
 *
 * Usage (from command line):
 *   $ php /path/to/drupal/root/ab-testing-cli.php 2 200
 *
 * After the script runs, it will output a list of commands for you to use
 * to test your website as a logged-in user.
 */

// Set the variable below to your Drupal root (on the server).
$drupal_root = '/path/to/drupal/root/';

// If arguments are not supplied properly, warn the user.
if ($argc != 5) {
  $prog = basename($argv[0]);
  print "Usage: $prog host url concurrency num_requests\n";
  exit;
}

// Get the arguments for ab.
$url = $argv[2];
$number_concurrency = $argv[3];
$number_requests = $argv[4];

// Change to your Drupal root directory.
chdir($drupal_root);

// Set up required variables to help Drupal bootstrap the correct site.
$_SERVER['HTTP_HOST'] = $argv[1];
$_SERVER['PHP_SELF'] = basename(__FILE__);
define('DRUPAL_ROOT', getcwd());

// Bootstrap Drupal.
require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

// Get as many sessions as the user calls for.
$results = db_query_range("SELECT sid FROM {sessions} WHERE uid > 1", 0, $number_concurrency)->fetchAll();

// Loop through the results and print the proper ab command for each session.
foreach ($results as $result) {
  $cookie = session_name() . '=' . $result->sid;
  print "ab -c 1 -n $number_requests -C $cookie $url\n";
}
Dec 14 2011
Dec 14

You can do a lot of great things with field display in Drupal 7's 'manage display' tab for a content type. You can control the order and label position of each field attached to a node type in that tab for Full node displays, Teasers, and RSS displays (or other displays you set up).

However, there's no way to change certain aspects of a node's display inside an RSS Feed, such as the 'creator' tag, the 'link' tag, or the 'title' tag. For a news aggregation site I run, I wanted to modify the <link> tag when displaying 'story' nodes, making the link tag give an absolute URL to the original source instead of to the story node on my own Drupal site.

A lot of blogs also use this kind of format for reposted blog items (such as Daring Fireball), so users go straight to the source when they click on the title of an item in their RSS reader of choice. My method below can be modified to conditionally change a link if a field has a value (say, a 'RSS absolute URL' field or something like that).

For Drupal 6, some people had suggested using Views RSS for this purpose (it would let me manage a Views-provided feed display with fields instead of using Drupal's built-in node/teaser display), but this module doesn't have a stable D7 release, and it won't help me change things for Drupal's built in feeds.

For Drupal 7, all you need to do is implement hook_node_view() in a custom module, and change the $node->link value to whatever you want:

/**
 * Implements hook_node_view().
 *
 * For story nodes in RSS feeds, use field_story_url for the link element.
 */
function custom_node_view($node, $view_mode, $langcode) {
  if ($node->type == 'story' && $view_mode == 'rss') {
    $node->link = $node->field_story_url[$node->language][0]['url'];
  }
}

Easy peasy. If you want to conditionally change the feed item <link> (say, only change the link value if $field_story_url has a value), change the line to:

$node->link = (empty($node->field_story_url)) ? $node->link : $node->field_story_url[$node->language][0]['url'];

You can also change things like $node->title to change what's in the RSS feed's <title> tag.

Dec 07 2011
Dec 07

After reading A successful Git branching model, which I consider one of the best graphical/textual depictions of the ideal Git model for development teams (and most large projects), I simply wanted to adapt a similar (but way less complex) model for some of my smaller sites and multisite Drupal installs.

Since I'm (almost always) the only developer, and I develop locally, I don't want the complexity of working on many branches at once (master, hotfixes, develop, release, staging, etc...), but I do want to have a clean separation between what I'm working on and the actual live master branch that I deploy to the server.

So, I've adopted a simple 'feature branch model' for my smaller projects:

  • master - the live/production code. Only touch when merging in a feature or simply fixing little bugs or really pressing problems.
  • [issue-number]-feature-branches - Where I work on stuff.


Feature branch model

Any time I work on something more complicated than a simple styling tweak, or a fix for a WSOD or something like that, I simply create a feature branch (usually with an issue number that matches up to my internal tracking system). Something like 374-add-node-wizard:

# create (-b) and checkout the 374-add-node-wizard branch.
$ git checkout -b 374-add-node-wizard

While I'm working on the node wizard (which could take a week or two), I might make a couple little fixes on the master branch. After I make the fixes on master (switch to it using $ git checkout master), I switch back to my feature branch and rebase my feature branch:

$ git checkout 374-add-node-wizard # switch back to the feature branch
$ git rebase master # pull in all the latest code from the master branch

I can also create simple .patch files off a branch to pass my work to another server or a friend if I want (I like using patches instead of pushing around branches, simply because patch files are easier for people to grok than more complicated git maneuvers):

# create a diff/patch file from the checked out branch.
$ git diff master..374-add-node-wizard > 374-add-node-wizard-patch.patch

When I finish my work on the feature branch, I switch back to master, merge in the branch, and delete the branch. All done!

$ git checkout master # switch back to master
$ git merge --no-ff 374-add-node-wizard # merge feature branch back into master
$ git branch -d 374-add-node-wizard # delete the feature branch

Finally, I test everything to make sure it's working fine in master, and then push the code changes up to the server.

Since I'm developing alone, this is a lot easier than a more complicated branching setup, and it allows me to work on as many features as I want, without fear of messing things up on master, or having merge conflicts (I rebase early and often).

(Note: I usually work in the command line, because I'm more comfortable knowing what git is doing that way... but I often open up Tower (imo, the best application for visual Git) to inspect branches, commits, and merges/rebases... some people would probably rather just use Tower for everything).

(Note 2: When creating patches to send to someone that include binary files (like a png or a gif, jpeg, whatever), make sure you use $ git diff --full-index --binary [old]..[new] > patchfile.patch so git doesn't barf when you try applying the patch on someone else's end...).

Nov 30 2011
Nov 30

Update: See comments below, and completely ignore this post. Nothing to see here...

module_load_include() is a great way to load code from other modules' include files, but it doesn't always work as you'd expect. Recently, I was building a form in one module that needed to call a validation function from another module when a particular submit button was pressed:

module_load_include('inc', 'another_module', 'includes/another_module.forms');
$form['actions']['submit'] = array(
  '#type' => 'submit',
  '#value' => t('Awesome Submit Button'),
  '#validate' => array('another_module_form_validate_function'),
);

I thought just adding in the module_load_include() would work, but alas, there was more to it than that. Instead of going about it this way, I had to call a local form validation function, and in that function, I could load the include from the other module and call its validation function:

'#validate' => array('same_module_form_validate'),

function same_module_form_validate($form, &$form_state) {
  // Pass validation off to the other module.
  module_load_include('inc', 'another_module', 'includes/another_module.forms');
  another_module_form_validate_function($form, $form_state);
}

This helps me uphold DRY principles and reuse specific form validation functions from other modules' include files. Typically, though, if I were going to use the same validation function for a particular element or for a bunch of different forms, I would include that validation in the .module file itself so I could call it anywhere without a module_load_include()... but in this case, I didn't want that particular validation function to be in memory on every Drupal page request :)

Nov 08 2011
Nov 08

This morning, I was presented with quite the conundrum: one of my servers suddenly started having about 4x the normal MySQL traffic it would have in a morning, and I had no indication as to why this was happening; traffic to the sites on the server was steady (no spikes), and I couldn't find any problems with any of the sites.

munin mysql traffic spike

However, after inspecting the Apache (httpd) error logs for the Drupal 6 sites, I found a ton of PHP warnings on almost all the sites. Something like the following:

[Tue Nov 08 11:25:51 2011] [error] [client IP] PHP Warning:  date_default_timezone_get(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in /path/to/drupal6/sites/ on line 149

As it turns out, the commonly-suggested fix for date.timezone problems with PHP 5.3.x and Drupal 6.x (namely, adding ini_set('date.timezone', date_default_timezone_get()); to the other ini_set() functions in settings.php) doesn't work that well for daylight savings time.

So, I've changed all those ini_set() functions in my Drupal 6 sites' settings.php files to explicitly set the default server timezone (in my case, ini_set('date.timezone','America/Chicago');), and now the error logs and watchdog errors written to the database are much more compact :)
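In other words, the relevant settings.php line ends up looking like this ('America/Chicago' is my server's zone; substitute your own; the commented-out line is the variant that misbehaves around DST):

```php
<?php
// Problematic around DST changes:
// ini_set('date.timezone', date_default_timezone_get());

// Explicit, and what I use now:
ini_set('date.timezone', 'America/Chicago');

print ini_get('date.timezone') . "\n"; // America/Chicago
```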

I always leave watchdog database logging on for one or two of the sites on a server for precisely this reason: if something goes haywire, I can quickly notice something's awry in my server's munin stats. Then I hop over to the apache error logs and see exactly what's up.

Oct 19 2011
Oct 19

I've had a nice go at making private messaging capabilities for flockNote work a lot nicer than the out-of-the-box Privatemsg module experience, by simplifying everything to the point that it's closer to the Facebook Direct Message system than the normal Privatemsg UX. (Privatemsg is the premier way of handling private messaging in Drupal. It's already awesome out of the box... it just needed a bit more help for our particular site ;-).

One thing I had wanted to do for a while is prefill the subject field of certain messages. I already have the new private message page appear inside an overlay popup after a user clicks on a link to send a private message to another user on the site.

Privatemsg prefill subject

I wanted users sending direct messages regarding certain comments or nodes to have a subject line of 'RE: [node title]' or 'RE: [comment title]' in them so they didn't have to write out a subject on their own. I was prepared to implement a hook_form_alter for the privatemsg form, send a query fragment containing the private message subject in the URL to the new private message page, and then check for it in my form alter to fill it in as the subject... but it turns out the Privatemsg module already has this capability built in!

All you need to do is throw in the subject as an extra argument in the url like so: messages/new/[uid]/[subject]. (You can still throw on query fragments to the end if you need to. In my case, I put a destination on the end so the overlay closes after a message is sent).

Or, in code...

$account = user_load($node->uid);
$pm_link_text = t('Send a PM to Author');
$pm_url = privatemsg_get_link($account) . '/' . t('RE: @title', array('@title' => $node->title));
$pm_link = l($pm_link_text, $pm_url, array('query' => array(drupal_get_destination())));

Aug 31 2011
Aug 31

Email is such a pain (I should know, as I'm currently working on a site that's sending 10-20,000 emails per day to 40,000+ users). Spam prevention, SPF records, bounce handling, abuse reports, deliverability, send rates, etc. are all huge hassles that must be dealt with when handling more than a few hundred emails a day.

For testing, I often like throwing in a quick bit of code to send me or someone else a simple email with a few bits of information when something happens on the site, or to test email addresses or formatting. Therefore I like having a quick one-line function call to send an email. In Drupal 6, there was a handy drupal_mail_send() function that would use some default settings and allow you to quickly shoot off a simple email (not translated, not pluggable, etc., but easy to implement).

Drupal 7 did away with that function, and instead, the simplest way to send an email in Drupal 7 requires some 20+ lines of code. Not fun when I'm trying to set up a few quick one-off emails that just need a 'from', 'to', 'subject', and 'message'. For these emails, I don't care about message translation, mail altering, etc. I just want an email shot off as quickly and simply as possible.

So, I wrote a quick wrapper function that I've placed in a custom.module that lets me just throw in the four default parameters, and sends an email. It doesn't hook into any of the system's mail handling capabilities, and isn't super-robust, but it lets me develop much faster:

/**
 * Simple wrapper function for drupal_mail() to avoid extraneous code.
 */
function custom_drupal_mail($from = 'default_from', $to, $subject, $message) {
  $my_module = 'custom';
  $my_mail_token = microtime();
  if ($from == 'default_from') {
    // Change this to your own default 'from' email address.
    $from = variable_get('system_mail', 'My Email Address <[email protected]>');
  }
  $message = array(
    'id' => $my_module . '_' . $my_mail_token,
    'to' => $to,
    'subject' => $subject,
    'body' => array($message),
    'headers' => array(
      'From' => $from,
      'Sender' => $from,
      'Return-Path' => $from,
    ),
  );
  $system = drupal_mail_system($my_module, $my_mail_token);
  $message = $system->format($message);
  if ($system->mail($message)) {
    return TRUE;
  }
  else {
    return FALSE;
  }
}
Now, sending an email is as simple as:

custom_drupal_mail('default_from', 'John Doe <[email protected]>', 'Test Email Subject', 'Test Email Body.');

[Edit: Updated with some suggestions from API comments and RichardLynch on IRC].

Aug 27 2011
Aug 27

I haven't seen much about this feature yet, so I figured I'd put it through its paces and share what I found. WYSIWYG editing on iOS devices is finally here! For a long time, contentEditable support has been lacking on iPads, iPhones, and iPod Touches, and it's been slightly annoying, as the only way to add richly-formatted text on these devices was doing a two-step through finding the caret characters and writing the HTML yourself.

Plus, some WYSIWYG editors (like TinyMCE) simply disabled the WYSIWYG from attaching to a textarea if it detected an iOS device. No longer, however: I've tested CKEditor (latest nightly) and TinyMCE (latest nightly), and both work perfectly (surprisingly well, in fact!) on the iPad running iOS 5 beta 6:

iPad 2 WYSIWYG TinyMCE Editing

The above screenshot was taken while editing a page on a Drupal site (flockNote) using the WYSIWYG module and the latest nightly build of TinyMCE. You can get nightly builds under TinyMCE's 'Develop' section.


Here's a video of me using TinyMCE on my iPad (it's fast, and works great!):

Some notes on WYSIWYG usage in iOS 5:

  • To solve the problem of scrolling in WYSIWYG-enabled textareas, it looks like Apple decided to just expand the area to fit all the contents. So no scrolling whatsoever inside the body field.
  • TinyMCE's resize widget doesn't work - it would be clunky if it did anyway, and the expand-to-fit behavior noted above removes the need for it.
  • All the buttons I tested (image, link, bold/italic/underline, alignment, font color, style, table, etc.) worked perfectly, just as they would on a desktop computer.
  • The only major annoyance is that the full onscreen keyboard pushes everything up, and a lot of scrolling up and down in the narrow viewport is required to format text. (However, if editing with an external keyboard, the onscreen keyboard doesn't hinder anything).
  • The popover bubble that appears when you select text in the top couple of lines hides the WYSIWYG toolbar, so you might have to add a few carriage returns at the top of your post before making selections to the top few lines of text.

Other Wishes:

Now that iOS 5 seems to support rich text editing in the browser (a HUGE boon for online publishing in Drupal, Wordpress, Joomla, etc.), the only major flaw remaining is the inability to upload files (using the file select field). There's a workaround for this (at least, for Drupal: Post Photos/Images to Your Drupal Site from the iPad), but it's too cumbersome. I really want to just be able to select a photo from my camera roll and attach it to a post from time to time.

Mar 08 2011
Mar 08

As seen at DrupalCon Chicago (in the program book):

Web Developers - We are very sorry for Internet Explorer 6 - IE9 Drupalcon Ad

I think the IE development and marketing team at Microsoft 'gets' the situation, and is being very creative about its efforts in promoting IE9 (case in point: IE6 Countdown). As to how far IE9 will ultimately go towards stemming web developers' collective hatred toward Explorer as a platform... that remains to be seen.

The ad above reads:

Dear Web Developer,

We are so very sorry about IE6.

Come have a drink on us at the Opening Night Party. Also, stop by our booth #67 and we will show you why IE9 is way better.

In my own testing of IE9, I've found it to be about on par with Firefox in how much I'd recommend it to users over any other browsers... that is to say, Chrome/Safari is still better, but I no longer need to tell people they should switch: IE9 is good enough for regular Internet users.

The only thing I really, really hope Microsoft starts doing is taking a more Chrome-like approach to adding in little bits of HTML5 and CSS3 goodness (and fixing some bugs) with point releases, rather than waiting 3-5 years for another IE release! (Of course, IE9 isn't released yet...).

What does this have to do with Drupal?

Observing Microsoft's recent inroads in the Drupal community (making sure Drupal runs well on IIS and Windows, and supporting the Drupal community here and there, for instance), I think Microsoft is doing a pretty good job of approaching an open source community with open arms, and changing people's minds (slightly) about their products and offerings.

For example, a couple years ago, there was no way I'd consider hosting a Drupal site on IIS. With Drupal 7 and a good hosting provider/server setup? Definitely.

Feb 10 2011
Feb 10

One question I'm often asked by many other diocesan web development teams/individuals is how we put together our online Mass Time search (also used for searching adoration and reconciliation times). We also get questions about how we do our online mapping—but I've already covered that (see: Beautiful, Easy Maps in Drupal using Views and Mapstraction).

Mass Times Search Interface
The Archdiocesan Mass Times search interface (click to enlarge)

We already have a database provided by the Archdiocesan IT department (they maintain it with the help of our diocesan Parish Support staff, and parish secretaries who can update their own schedules and information), so we needed to do the following on the web:

  • Import all the Sacrament time information and attach it to a parish node (so times/days could be affiliated with parishes).
  • Display the time information on parish node pages, in a meaningful way.
  • Allow users to search by Sacrament times, showing parishes on a map, and showing the Sacrament times in a list under the map.

I'll cover each of these important aspects of our website's functionality below.

Preliminary note: much of this code was provided originally by the great folks at Palantir, who helped us set up this and many other features on the Archdiocesan website...

Importing time information, attaching it to Parish nodes

The first step in the process is importing some 3,000+ parish event nodes, each of which contains the data for one 'event': the event time, the event type (Mass/Reconciliation/Adoration), whether the event is a 'Normal Service' or a special kind of Mass, the location of the event (often in a side chapel or somewhere else), the event day, and a reference to the parish to which the event is attached.

Our site uses the Migrate module to import all the data, and we have the module set up to import all the events first, then import the Parishes, attaching the events to parishes (through custom code) using a node reference.

The CSV file of parish event data contains over 3,000 lines of information like the following:


Our migrate import takes that line, creates a new node with the information, then later, while importing parish nodes, attaches all event nodes affiliated with that parish to the parish node itself. Then, as they say, the magic happens (via a nodereference field).

Here's the code we use to prepare our parish event node via the migrate import process:

<?php
function dir_migrate_prep_parish_event(&$node, $tblinfo, $row) {
  // Just stick on a filler title.
  $node->title = "Parish event #" . $row->peventkey;

  // Normalize the dates.
  $node->field_event_start[0]['value'] = (int) substr(str_replace(':', '', $row->jos_dir_paevents_evtime), 0, 4);
  $node->field_event_end[0]['value'] = (int) substr(str_replace(':', '', $row->jos_dir_paevents_evendtime), 0, 4);

  // Add taxonomy terms from event types.
  $mt = dir_migrate_get_map_table('pe-type');
  $term = taxonomy_get_term(db_result(db_query("SELECT destid FROM $mt WHERE sourceid = %d", $row->jos_dir_paevents_fkpesptypekey)));
  if ($term) {
    $node->taxonomy[$term->tid] = $term;
  }

  // Add taxonomy terms from event days.
  $mt = dir_migrate_get_map_table('pe-time');
  $term = taxonomy_get_term(db_result(db_query("SELECT destid FROM $mt WHERE sourceid = %d", $row->jos_dir_paevents_fkevday)));
  if ($term) {
    $node->taxonomy[$term->tid] = $term;
  }
}
?>

We basically sanitize the dates coming in from the database (we want them in a standard 24-hour/0000 integer format), and then we attach taxonomy terms for the event type and event day.
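As a standalone sketch of that normalization (plain PHP, outside the migrate context, with an illustrative function name): stripping the colons from a legacy "HH:MM:SS" string and keeping the first four digits yields the integer storage format used throughout the site.

```php
<?php
// Sketch of the date normalization above: strip the colons from an
// "HH:MM:SS" time string and keep the first four digits as an integer.
function normalize_event_time($time_string) {
  return (int) substr(str_replace(':', '', $time_string), 0, 4);
}

print normalize_event_time('16:30:00') . "\n"; // 1630
print normalize_event_time('07:00:00') . "\n"; // 700 (the leading zero is lost in the int)
```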

While importing parish nodes, among other things, we attach the parish event nid to the parish node's masstimes/adorationtimes/reconciliationtimes nodereference fields:

<?php
$mt = dir_migrate_get_map_table('parish-event');
$fields = dir_migrate_evtype_field_mapping();
$result = db_query("SELECT mt.destid AS nid, e.fkpettypekey AS type
  FROM jos_dir_parish AS p
  INNER JOIN jos_dir_paevents AS e ON p.pnumber = e.fkpnumber
  INNER JOIN $mt AS mt ON e.peventkey = mt.sourceid
  WHERE p.pnumber = %d", $row->jos_dir_parish_pnumber);
while ($record = db_fetch_object($result)) {
  $node->{$fields[$record->type]}[] = array('nid' => $record->nid);
}
?>

Displaying Event Time Information in the Nodes

To display the time information in a particular node, we simply did a bit of theming magic. It's not the most performant code in the world, but it works.

First, we set up a field_formatter and theme function for parish event times (the following code samples are all from our site's custom.module):

<?php
/**
 * Implementation of hook_field_formatter_info().
 */
function custom_field_formatter_info() {
  return array(
    'parish_event_times' => array(
      'label' => 'Parish Event Times',
      'field types' => array('nodereference'),
      'multiple values' => CONTENT_HANDLE_MODULE,
    ),
  );
}

/**
 * Implementation of hook_theme().
 */
function custom_theme() {
  return array(
    'custom_formatter_parish_event_times' => array(
      'arguments' => array('element'),
    ),
  );
}
?>

These two functions just tell Drupal that we're defining a custom display formatter for parish event times (one that can be used in Views, on node teasers, and in full node displays), and then define a theme function in which we'll tell Drupal how to format everything for display.

This next function is a doozy: it does all the display dirty work, and it puts a real performance burden on the site. If we tried displaying the Mass time information for all 200 parish nodes on the site at once, the queries and processing would probably take 20-30 seconds! Therefore, we cache everything aggressively so people don't have to wait for the following theme function to do its work; after it has run once, it doesn't have to run again for up to 18 hours, since we cache the resulting page.

<?php
/**
 * Theming function for the "Parish Event Times" formatter.
 */
function theme_custom_formatter_parish_event_times($element) {
  $days = array();
  // @TODO - Order the $element's children by day order from the taxonomy sort, then by time.
  // Loop through all the parish event times, and build a nice array
  // of days and the (multiple) corresponding times.
  foreach (element_children($element) as $key) {
    // Load the node.
    $node = node_load($element[$key]['#item']['nid']);

    // Parse and format the time.
    // Pad start time with leading zero if only 3 digits.
    if (strlen($node->field_event_start[0]['value']) == 3) {
      $node->field_event_start[0]['value'] = '0' . $node->field_event_start[0]['value'];
    }
    // Account for perpetual adoration start time of '0' (midnight).
    if (strlen($node->field_event_start[0]['value']) == 1) {
      $node->field_event_start[0]['value'] = '0000';
    }
    // Pad end time with leading zero if only 3 digits.
    if (strlen($node->field_event_end[0]['value']) == 3) {
      $node->field_event_end[0]['value'] = '0' . $node->field_event_end[0]['value'];
    }
    // Account for perpetual adoration end time of '2400' (midnight).
    if ($node->field_event_end[0]['value'] == '2400') {
      $node->field_event_end[0]['value'] = '2359';
    }
    $time = date('g:i a', strtotime($node->field_event_start[0]['value']));
    if ($node->field_event_end[0]['value'] > 0) {
      $time .= ' – ' . date('g:i a', strtotime($node->field_event_end[0]['value']));
    }

    // Node contains taxonomy.
    if (!empty($node->taxonomy)) {
      $time_data = array();
      // Add event type (if not "Normal Service").
      foreach ($node->taxonomy as $term) {
        if ($term->vid == 24 and $term->name != 'Normal Service') {
          $time_data[] = $term->name;
        }
      }
      // Also add event language (if not "English").
      if (!empty($node->field_event_lang[0]['value']) and $node->field_event_lang[0]['value'] != 'English') {
        $time_data[] = $node->field_event_lang[0]['value'];
      }
      // Add event location (if any) from the field_event_loc field.
      if (!empty($node->field_event_loc[0]['value'])) {
        $time_data[] = 'in ' . $node->field_event_loc[0]['value'];
      }
      // Slap it on the end of the time.
      if (!empty($time_data)) {
        $time .= ' (' . join(' - ', $time_data) . ')';
      }
    }

    // Day of the week.
    foreach ((array) $node->taxonomy as $term) {
      if ($term->vid == 21) {
        // Grab the weight of the term for sorting (see below).
        $days[$term->name]['weight'] = $term->weight;
        // Grab all the times.
        $days[$term->name]['times'][] = $time;
        break;
      }
    }
  }

  // Sort the Days using the weight above (this could be improved...).
  asort($days);

  // Print the days and times as a definition list.
  $output = '<dl>';
  foreach ($days as $day => $elements) {
    foreach ($elements['times'] as &$time) {
      $time = '<div>' . $time . '</div>';
    }
    $output .= '<dt>' . $day . '</dt>';
    $output .= '<dd>' . implode('', $elements['times']) . '</dd>';
  }
  $output .= '</dl>';
  return $output;
}
?>

What we basically do here is load each referenced node, then grab all the metadata for that parish event from the parish event node. Then, we display all the metadata in a nice definition list, which gets themed to look like the following:
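Since the prose above leans on the 18-hour cache: as a hypothetical alternative to relying on page caching alone, the rendered output could be cached per parish with Drupal 6's cache API. The wrapper function and its name below are illustrative, not the site's actual code.

```php
<?php
// Hypothetical sketch (not the site's actual code, which relies on cached
// pages): cache the expensive theming output per parish for 18 hours.
function custom_parish_event_times($parish_nid, $element) {
  $cid = 'parish_event_times:' . $parish_nid;
  if ($cache = cache_get($cid)) {
    return $cache->data;
  }
  $output = theme('custom_formatter_parish_event_times', $element);
  cache_set($cid, $output, 'cache', time() + 60 * 60 * 18);
  return $output;
}
?>
```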

Sacramental Time Information Display on Parish Node

Looks nice, eh? Using the asort() function, we were able to sort the times in the order of our Taxonomy listing (so we could control which days would appear first...).
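The asort() trick works because each value in $days is itself an array whose first element is the taxonomy term's weight, and PHP's array comparison looks at that element first. A standalone sketch (with illustrative day weights):

```php
<?php
// Standalone sketch: asort() compares the value arrays, and since 'weight'
// is the first element in each, the days end up ordered by term weight.
$days = array(
  'Saturday' => array('weight' => 5, 'times' => array('4:00 pm')),
  'Sunday'   => array('weight' => 0, 'times' => array('8:00 am')),
  'Monday'   => array('weight' => 1, 'times' => array('6:30 am')),
);
asort($days);
print implode(', ', array_keys($days)) . "\n"; // Sunday, Monday, Saturday
```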

Allow Users to Search by Time/Day using Views

The final step in the process was to allow users to search on the website by Mass Time (or other Sacrament times), and since we were using Views for all our other search/filtering needs, we decided to use Views to do the time search as well.

Inside our dir_migrate.module (though this could live just as easily in our custom.module), we added a views handler, "dir_migrate_views_handler_filter_inttime."

In dir_migrate/dir_migrate.module:

<?php
/**
 * Implementation of hook_views_api().
 */
function dir_migrate_views_api() {
  return array(
    'api' => '2.0',
    'path' => drupal_get_path('module', 'dir_migrate') . '/views',
  );
}
?>

In dir_migrate/views/

<?php
function dir_migrate_views_handlers() {
  return array(
    'info' => array(
      'path' => drupal_get_path('module', 'dir_migrate') . '/views',
    ),
    'handlers' => array(
      'dir_migrate_views_handler_filter_inttime' => array(
        'parent' => 'views_handler_filter_numeric',
      ),
    ),
  );
}

function dir_migrate_views_data_alter(&$data) {
  $data['node_data_field_event_start2'] = $data['node_data_field_event_start'];
  $field = &$data['node_data_field_event_start2']['field_event_start_value'];
  unset($field['field'], $field['argument'], $field['sort']);
  $field['title'] = t('Start Time (Formatted)');
  $field['help'] = t('Filter handler that translates from int storage to time of day');
  $field['filter']['handler'] = 'dir_migrate_views_handler_filter_inttime';
}
?>

In dir_migrate/views/ (this is where we define our custom views filter...):

<?php
class dir_migrate_views_handler_filter_inttime extends views_handler_filter_numeric {
  function option_definition() {
    $options = parent::option_definition();
    $options['operator'] = array('default' => 'between');
    $options['exposed'] = array('default' => TRUE);
    $options['value']['contains']['min'] = array('default' => 500);
    $options['value']['contains']['max'] = array('default' => 2200);
    return $options;
  }

  function operators() {
    return array(
      'between' => array(
        'title' => t('Is between'),
        'method' => 'op_between',
        'short' => t('between'),
        'values' => 2,
      ),
    );
  }

  function value_form(&$form, &$form_state) {
    // Get the basic loadout from the parent.
    parent::value_form($form, $form_state);
    $options = int_time_increments_assoc();
    // Make the minor modifications.
    $form['value']['min'] = array(
      '#type' => 'select',
      '#title' => t('Between'),
      '#options' => $options,
      '#default_value' => $this->value['min'],
    );
    $form['value']['max'] = array(
      '#type' => 'select',
      '#title' => t('And'),
      '#options' => $options,
      '#default_value' => $this->value['max'],
    );
  }
}
?>

...and finally, some helpful functions for our integer/time CCK field/formatting, found in dir_migrate/dir_migrate.module:

<?php
// ==================== CCK Bits

function int_time_theme() {
  return array(
    'int_time' => array('arguments' => array('element' => NULL)),
    'int_time_formatter_default' => array('arguments' => array('element' => NULL), 'function' => 'theme_int_time_generic'),
  );
}

function theme_int_time($element) {
  return $element['#children'];
}

/**
 * Declare information about a formatter.
 *
 * @return
 *   An array keyed by formatter name. Each element of the array is an
 *   associative array with these keys and values:
 *   - "label": The human-readable label for the formatter.
 *   - "field types": An array of field type names that can be displayed
 *     using this formatter.
 */
function int_time_field_formatter_info() {
  return array(
    'default' => array(
      'label' => t('As time of day'),
      'field types' => array('number_integer'),
    ),
  );
}

function theme_int_time_generic($element) {
  return int_time_int_as_time($element['#item']['value']);
}

function int_time_int_as_time($int) {
  $string = (string) $int;
  if (empty($string)) {
    return '';
  }
  // Pad the integer out to a four-digit "HHMM" string.
  while (strlen($string) < 4) {
    $string = '0' . $string;
  }
  $hour = substr($string, 0, 2);
  $minute = substr($string, 2, 2);
  if ($hour >= 12) {
    $ex = 'PM';
    $hour -= 12;
  }
  else {
    $ex = 'AM';
  }
  $hour = ltrim($hour, '0');
  $hour = empty($hour) ? '12' : $hour;
  return "$hour:$minute $ex";
}

/**
 * Helper function to return all possible inttimes in 15-minute increments.
 */
function int_time_increments() {
  return array(
    0, 15, 30, 45, 100, 115, 130, 145, 200, 215, 230, 245,
    300, 315, 330, 345, 400, 415, 430, 445, 500, 515, 530, 545,
    600, 615, 630, 645, 700, 715, 730, 745, 800, 815, 830, 845,
    900, 915, 930, 945, 1000, 1015, 1030, 1045, 1100, 1115, 1130, 1145,
    1200, 1215, 1230, 1245, 1300, 1315, 1330, 1345, 1400, 1415, 1430, 1445,
    1500, 1515, 1530, 1545, 1600, 1615, 1630, 1645, 1700, 1715, 1730, 1745,
    1800, 1815, 1830, 1845, 1900, 1915, 1930, 1945, 2000, 2015, 2030, 2045,
    2100, 2115, 2130, 2145, 2200, 2215, 2230, 2245, 2300, 2315, 2330, 2345,
  );
}

function int_time_increments_assoc() {
  static $assoc;
  if (is_null($assoc)) {
    $assoc = array();
    foreach (int_time_increments() as $int) {
      $assoc[$int] = int_time_int_as_time($int);
    }
  }
  return $assoc;
}
?>

Wow... this is probably the longest post/code-dump I've ever written... sorry about that! Complex issues demand complex solutions, I guess?

Some Things Could Be Improved...

Well, actually, a lot of things could be improved. For instance, we could avoid a lot of this custom code if there were a way to create Date fields without a month or year attached—basically, a timestamp without a fully-compliant 'date' attached to it—but this is currently not possible.

Right now, I'm focusing on a few other projects, but someday I really want to tackle issue #499 on the Archdiocesan Development website: Create timefield module for Time CCK/Field. I envision a module that allows you to add time information to a node like "Saturday, from 4 p.m. to 5 p.m.," and then be able to filter Views results by time values alone... but I don't know if/when I'll get the time to do this :(

Any other thoughts or ideas?

Feb 03 2011

I've been asked about the Archdiocese of St. Louis's online parish search mapping functionality enough times that I finally made a quick video walkthrough of how it was done. The video below explains it all—basically, we use the Location module to attach addresses to nodes and geocode (get lat/lon) the addresses, and we use Views + Mapstraction to make the spiffy maps all over the site.

The functionality was originally set up by the kind folks at Palantir, and tweaked a bit over time by me to make what you see today.

You can watch the video in HD on Vimeo, to see fine details. (Recorded with iShowU HD).

Jan 05 2011

Get Started with Drupal 7

Today, January 5, Drupal version 7.0 was released (download Drupal here). Drupal 7 release parties will be held worldwide on January 7 (which also happens to be my birthday - yay!).

I'll be posting my experiences in upgrading to and extending Drupal 7 both here and on my blog at Midwestern Mac, LLC (see D7 stories).

Congratulations to the team of almost 1,000 developers who helped make Drupal 7 a reality, and congratulations to Dries Buytaert, the founder of Drupal, and webchick, the person who shepherded (and continues herding) the community as the Drupal 7 core maintainer!

This website is still running on Drupal 6 (on a multi-site installation with about 5 other sites), but I'm slowly beginning the process of redesigning and upgrading the rest of my sites (notably, so far, Midwestern Mac, LLC) to Drupal 7. The Archdiocesan website and St. Louis Review will take a bit longer, since there's a lot of custom code that needs to be refactored.

If you run a website, have you checked out Drupal before? It's a lot more extensible (in my experience) than Joomla or Wordpress, the two other top contenders. If it's good enough for large sites like the White House, it's good enough for you ;-)

Dec 15 2010

About a year and a half after releasing my first contributed theme for Drupal, Airy Blue, I have finished and released my second contributed theme, MM - A Minimalist Theme.

Minimalist Theme Screenshot

MM is my first HTML5 theme, and my first for Drupal 7 (which, by the way, is awesome!). I have been working on refreshing my LLC website, Midwestern Mac, for the past few months since I scrapped my first hacked-together theme from about 2.5 years ago, and I finally decided to take the plunge and go Drupal 7 for the redesign.

MM is based on Boron, an HTML5 base theme that is still in beta for Drupal 7 (thus, I can't have a final release of my subtheme until Boron is final as well).

The theme has a few nice features:

  • No images whatsoever (cuts down on page load times, since there are fewer resources to load).
  • HTML5 markup (tested in IE7-9, FF 3+, Safari 4+, Chrome)
  • Progressive enhancement - we're using box-shadow, border radius, and some other CSS3 elements that only work in newer browsers at this point.

I figured I'd like to help get more themes on the docket for Drupal 7's release—right now there are very few, and I think it would be nice if people downloading D7 and wanting to tinker could have more than two or three themes to play with.

Plus, it's just a nice thing to do for an open source project that has given me a career.

Nov 24 2010

In my always-continuing quest to find the perfect online calendar display/management solution, I have found the next level of calendar display/management bliss.

Previously, I was pinning all my hopes on Drupal's very robust, but often complex and confusing, Calendar.module (in use by almost 50,000 websites, and for good reason: it's extremely adaptable). The module provides many different displays, and gives you the ability to link directly to a specific day/month/week... but it (a) is relatively slow when switching from month to month, (b) requires a rather complex view, with arguments, which can be confusing for first-time users, and (c) takes patience to theme well.

I love the Calendar module, and I still use it on a few sites where necessary, but I've found a new contender that has nothing to do but improve; that contender is the FullCalendar module, which is based on the great fullcalendar.js jQuery-based calendar library by Adam Shaw.

Fullcalendar Display
This is IE. It's easy enough, though, to add better styling to a fullcalendar.

FullCalendar is simply a views display that takes a list of event nodes (as long as your node has a date/time attached, it will work), and displays them in a beautiful calendar display that works across all modern browsers, and even most mobile browsers (I've tested Android, iOS 4, FF, Chrome, Safari, and IE so far).

I had a little trouble getting the calendar to display in IE6/7, but I supplied a quick patch to fix that issue.

One thing I have yet to test is the performance of fullcalendar when displaying large batches of calendar items (in this case, calendar.module might be better—if you need to show thousands of events on a calendar from many years prior). The biggest calendar I have right now displays about 200 items. As time goes on, I could either simply let the list build to the point where fullcalendar slows a bit, or limit the date range so events from only the past few months show.

Nov 11 2010

Many of my favorite websites offer a nice little feature, immediately following the body of the page, that highlights 3-5 "possibly related" stories or blog posts. I wanted to do this on OSC and some other sites, but found that it's difficult to add regions inside of nodes; the closest I could get with the default theme/block behavior was to have the block appear after the comment section, which is too far down the page to be relevant.

I decided to use the Featured Content module to create my blocks, as it offers a good amount of customization as to what kind of algorithms it uses to find related nodes... performance considerations aside. There are other ways to go about creating lists of related nodes, but this was quick and easy.

Adapting a solution I found here, I created a simple function inside my template.php file that allowed me to print a block from inside my node.tpl.php template.

Inside template.php:

<?php
/**
 * Helper function for retrieving block code for insertion into templates.
 */
function osc_block_retrieve($module, $delta) {
  $block = (object) module_invoke($module, 'block', 'view', $delta);
  $block->module = $module;
  $block->delta = $delta;
  return theme('block', $block);
}
?>

Inside node.tpl.php:

<?php if ($page): ?>
  <div class="block-in-node">
    <?php print osc_block_retrieve('featured_content', '1'); ?>
  </div>
<?php endif; ?>

I chose to rank related nodes first by similar taxonomy terms, then by how many views the node received (I'm using the statistics module on this site).

Alternatively, you could do one of the following to accomplish the same kind of thing:

  • Set up a region inside your nodes, in the node.tpl.php file. This seems to be a little burdensome, though, unless you're planning on doing many different things inside said region(s).
  • Use the Panels module to add blocks inside of nodes, or in a different kind of page layout.
Oct 29 2010

After having created a few different podcasts on different Drupal sites for different purposes, using a variety of methods, I can speak with a little authority on which methods are the best, easiest, etc. There is an Audio module, and an iTunes module, which help with more advanced podcasting needs... but most people just want a podcast which will allow visitors to either listen while on the website, or to be able to subscribe to the podcast in iTunes or another media player.

If your needs are relatively simple, it's quite easy to get a podcast up and running on your Drupal website:

Step 1 - Make a Podcast-ready Content Type

In one of your content types, add a filefield that allows the uploading of MP3 and/or M4A files, and create a podcast episode or two. Make sure you set the filefield's display to 'Generic Files' for the RSS display settings (at admin/content/node-type/[your-node-type]/display/rss).

(You can also use something like SWF Tools to display a player for the file for regular ("Basic") node views—so people can play the audio file without having to open your feed in iTunes. Otherwise files will just display as links to the downloadable file).

Step 2 - Make a View with a Page and Feed Display

Now, create a view, and in that view, create a list of all your podcast nodes (nodes that have the MP3/M4A files attached) on a Page display, then create a Feed display that shows a few episodes, and attach that Feed display to the Page display.

Step 3 - Avoid a Hassle by Burning Your Feed

Many would recommend trying to get the feed working properly with iTunes by using the aforementioned iTunes module or some other hackery, but there's a much easier option: FeedBurner. You need to set up an account with FeedBurner. Then use FeedBurner to 'burn' one of your feeds—be sure to check the 'I am a podcaster!' checkbox. Fill out all the relevant details, and then look at what your FeedBurner URL is (mine, for example, is - this is for the ReviewCast).

Install the FeedBurner module on your Drupal site, and then go to its settings page. Click the 'Burn feed' tab to add your new feed - you'll need to know the path to the Feed display that you configured in your View earlier, and the FeedBurner URL (just use the part that's bolded above).

Step 4 - Profit?

Now, clear all your caches (in case the feed you had in your views was cached already), and you should have a podcast page which not only allows site visitors to play files directly, but also offers iTunes (or other service) subscriptions. You can submit your FeedBurner URL to iTunes' Podcast directory if you'd like to be included.


Oct 29 2010

At a recent St. Louis area Drupal meetup (details here), I presented a quick session on how to build a Drupal module, geared toward beginning Drupal developers (I don't consider myself too advanced, but I have found that my experiences can often help others).

I have attached to this post the custom module (a .zip file) that I used for examples in the presentation, and I also uploaded the slideshow (quick and easy - just 12 slides!) to SlideShare. I've embedded the slideshow below:

