Mar 09 2015

Here's an update from the Documentation Working Group (DocWG) on what has been happening in Drupal Documentation in the last month or so. Because this is posted in the Core group as well as Documentation, you cannot comment on this post. If you have comments or suggestions, please see the DocWG home page for how to contact us.

Notable Documentation Updates

Most of the hook_help texts for the Drupal 8 core modules have been finished, but there are still a few that need some review. If you can help, hop over to https://www.drupal.org/node/1908570 to find the remaining issues. Most of these help texts need a review because they were written over the course of two years and Drupal 8 has evolved since then. If you want to help review them, check https://www.drupal.org/node/2283477.

Thanks for documenting

February 2015 was an extraordinarily productive documentation month, with 245 people revising and writing more than 1000 pages. The most active editors were:

Many thanks go out to everyone who helped improve Drupal's online documentation.

Documentation Priorities

The Current documentation priorities page is always a good place to look to figure out what to work on, and has been updated recently.

The two meta issues for updating and reviewing help texts currently have a high priority: not only do we need them for a good Drupal 8 user experience, they also need to be translated once they are ready.
Working on them is also a good way to find out how Drupal 8 will work for site builders and site administrators.

If you're new to contributing to documentation, these projects may seem a bit overwhelming -- so why not try out a New contributor task to get started?

Upcoming Events

Report from the Working Group

In February, we met with the Drupal Association's Joshua and Tatiana, who presented their ideas for the future of online documentation on drupal.org. It seems that in the future we will be able to use several different types of documentation on drupal.org; most notably, we will be able to have documentation that is much more tightly integrated with contributed modules and themes. We were able to give useful feedback on the DA's plans, and some of our long-wished-for improvements, as outlined in https://www.drupal.org/governance/docwg-goals, look like they will be realised. That said, some of our requests have not been met yet, and some doubts remain over the implementation of the DA's plans.

Jan 02 2013

Happy New Year! We're kicking off 2013 with some FREE videos to get people up and running with our Drupal community tools. There are a lot of aspects of the Drupal community that many people take for granted. Even something as "simple" as figuring out what community websites are out there, and how to use them, is often overlooked when talking to people new to Drupal. So, if you want to really dive into this Drupal thing in 2013, here is a gentle orientation to help get you started. We've added three new videos to our free Community category that walk you through the various community websites, how to get an account, and what you can do with it, from customizing your dashboard, to editing and creating new documentation on Drupal.org. We also take a look at how to use the main search on Drupal.org so you can start finding the things you're looking for.

We hope you find these videos helpful, and we plan to keep creating more community videos over the coming months. Let us know if there is something in particular about our community that is mysterious to you, and we'll add it to our list.

Dec 06 2012

Even with the emerging push to get off paper and on to the screen with the use of rapid prototyping, it is still important to write down what you are going to build before you start building it. There are several strategies for how to communicate to both stakeholders and developers exactly what they’re getting themselves into. Here are a few I use and love.

Wireframe Annotations

I’ve gone on before about lowering the fidelity of your wireframes in order to get them to the screen more quickly.  I still believe in this concept, but the part you can’t skip in your wireframes is the annotations. They should not be formal business-analyst-sounding functional requirements starting with, “The system shall execute the…” mostly because no one will read that. What they should be is a handy reference for clients and developers alike to communicate how the sketches they see will actually get on the page.

The more you write down, the more you are forced to think about how something will work, and it will be easier to see where the holes are. It can be really difficult to decipher precise functionality from wireframes alone. The client/stakeholder needs to have their expectations set in reality. If they’re just looking at a picture, they have no idea how it’s going to work.

This will help the developers out too, because they don’t have to think so much when they are building. For example, you’re creating a news site that has articles on it. The article pages have a box in the right rail for “related headlines”. There are a multitude of ways those headlines can get on this page, but how does the client want them selected? Will they have the staff in place to manually decide which related articles are the most relevant? Do they have a tagging system in place that you can use to automate the selection of the articles? Does the order in which they appear matter to the client? If so, how is it determined? These are just a few questions you’ll ask yourself and your client as you go through the process of annotating your wireframes. In the end, you should have a solid solution for how to proceed with building the page, which will save your developers tons of time and make your project manager love you.

Interactive Mockups

A lot of times clients have already gone through the wireframing and/or design process themselves and hand over a set of comps to start building from. Inevitably, there are going to be issues that arise as you begin laying out your plan for building the site. A good solution for working through these issues with your clients is to throw the documents into an interactive mockup using a service such as Balsamiq or InVision (or a host of others). This allows you to have a running dialog directly on the comps/wires via a comment/reply scenario.

This is a very simplified version of what you can do using these interactive mockup tools, but it’s still very effective. It enables conversations to happen asynchronously, giving both parties time to think about how they want something to work or offer a better solution.

Write it Down

Sometimes having multiple conversations going via wireframe annotations, interactive mockups, emails, etc. can lead to more confusion than solutions. In this case, it’s often best to write it all out into one cohesive strategy in a centralized location such as a Writeboard on Basecamp or a notebook in OpenAtrium.

Overall, just remember that your life will be easier if you always write it down.

Oct 24 2012

During my last week at DoSomething, I spent some time working on getting better metrics on which panel pages are slow. One half of that was to use New Relic's PHP API to provide better transaction names that included the node type and panel name:

<?php
/**
 * Implements hook_page_alter().
 *
 * We want to provide more detail to New Relic on the transaction and late in
 * the page build seemed like the simplest place.
 */
function example_page_alter(&$page) {
  if (!extension_loaded('newrelic')) {
    return;
  }

  $name = NULL;

  // Look for a panel page...
  $panel_page = page_manager_get_current_page();
  if (isset($panel_page['name'])) {
    // If it's a node page put the argument's node type into the transaction
    // name.
    if ($panel_page['name'] == 'node_view') {
      if (isset($panel_page['contexts']['argument_entity_id:node_1']->data)) {
        $node = $panel_page['contexts']['argument_entity_id:node_1']->data;
        $name = 'page_manager_node_view_page/' . $node->type;
      }
    }
    // If it's a page_manager page use the panel name.
    else if ($panel_page['task']['task type'] == 'page') {
      $name = 'page_manager_page_execute/' . $panel_page['name'];
    }
  }
  else {
    $menu_item = menu_get_item();
    if ($menu_item['path'] == 'node/%') {
      // Looks like panels didn't have a variant and it's falling back to
      // node_page_view.
      $name = 'node_page_view/' . $menu_item['page_arguments'][0]->type;
    }
  }

  if ($name) {
    newrelic_name_transaction($name);
  }
}
?>

So once you know which panels are slowing down your site you can use the new Panels, Why so slow? module to put the blame on the specific panes.

Jul 26 2012

The instructions still need some work. I did some updating but haven't tried using it with a clean install yet. After reading this, it sounds like there are some bigger changes. I've also been trying to switch from MacPorts to Homebrew, so that'll also mean some changes to this.

Install XCode

Install XCode from the App Store. Run Xcode and open its Preferences (⌘+,), select the Downloads tab, and then the Components sub-tab. Click the Install button on the Command Line Tools component.

Install MacPorts

Become root

To follow these instructions you need to be running as the root user using the default sh shell. If you've got administrator permissions, you can open up a Terminal window and switch users using the sudo command, then provide your password.

[email protected]:~% sudo su
Password:
sh-3.2#

Install MySQL

Use port to install MySQL:

/opt/local/bin/port install mysql55-server

You'll need to create the databases:

sudo -u _mysql /opt/local/lib/mysql55/bin/mysql_install_db

Let launchd know it should start MySQL at startup.

/opt/local/bin/port load mysql55-server

Secure the server and set a new admin password:

/opt/local/lib/mysql55/bin/mysql_secure_installation

Create a configuration file:

cp /opt/local/share/mysql55/support-files/my-large.cnf /etc/my.cnf

Edit /etc/my.cnf using your editor of choice and make the following changes to the [mysqld] section:

  • Change the maximum packet size to 16M:

    max_allowed_packet = 16M

  • Enable network access by ensuring the first line is commented out, and add the second line to limit access to localhost:

    #skip-networking
    bind-address = 127.0.0.1
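
Putting those together, the relevant part of /etc/my.cnf should end up looking roughly like this (a sketch; the rest of the file stays as copied from my-large.cnf):

[mysqld]
# ... other settings copied from my-large.cnf ...
max_allowed_packet = 16M
#skip-networking
bind-address = 127.0.0.1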

Restart MySQL to have the settings changes take effect:

port unload mysql55-server
port load mysql55-server

A last, optional, step is to create some symlinks for the executables so they're in the path:

ln -s /opt/local/lib/mysql55/bin/mysql /opt/local/bin/mysql
ln -s /opt/local/lib/mysql55/bin/mysqldump /opt/local/bin/mysqldump
ln -s /opt/local/lib/mysql55/bin/mysqlimport /opt/local/bin/mysqlimport

PHP

You need to create a php.ini file:

if ( ! test -e /private/etc/php.ini ) ; then cp /private/etc/php.ini.default /private/etc/php.ini; fi

Now open /private/etc/php.ini and set the correct location for MySQL's socket by finding:

mysqli.default_socket = /var/mysql/mysql.sock

And changing it to:

mysqli.default_socket = /opt/local/var/run/mysql5/mysqld.sock

Repeat for both mysql.default_socket and pdo_mysql.default_socket.
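
Once all three are updated, the socket settings in php.ini should read something like this (using the MacPorts socket path given above):

mysql.default_socket = /opt/local/var/run/mysql5/mysqld.sock
mysqli.default_socket = /opt/local/var/run/mysql5/mysqld.sock
pdo_mysql.default_socket = /opt/local/var/run/mysql5/mysqld.sock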

While you're editing php.ini you might as well set the timezone to avoid warnings. Locate the date.timezone setting, uncomment it (by removing the semicolon at the beginning of the line), and fill in the appropriate timezone:

date.timezone = America/New_York

Enable PHP by opening /private/etc/apache2/httpd.conf in the editor of your choice and making the following changes.

  • Uncomment this line:

    #LoadModule php5_module        libexec/apache2/libphp5.so

  • Find and change this one:

        DirectoryIndex index.html

    To this:

        DirectoryIndex index.php index.html

Then restart Apache:

apachectl graceful

Install PEAR / PECL

I scratched my head for a while on this one before finding this setup guide.

php /usr/lib/php/install-pear-nozlib.phar

Then add this line to your php.ini:

include_path = ".:/usr/lib/php/pear"

Now you can update the channels and upgrade the packages:

pear channel-update pear.php.net
pecl channel-update pecl.php.net
pear upgrade-all

Drush

If you're doing anything with Drupal you'll find Drush to be indispensable.

pear channel-discover pear.drush.org
pear install drush/drush
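
Once that finishes, a quick sanity check (assuming the PEAR bin directory is on your PATH) is to run:

drush status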

Memcache

You don't need this to run Drupal but I use it on production servers and I want to try to match the setup.

Use port to install and start memcached:

/opt/local/bin/port install memcached
/opt/local/bin/port load memcached

Since pecl won't let us pass --with-libmemcached-dir=/opt/local to the configure script, a simple workaround is to add some symlinks:

ln -s /opt/local/include/libmemcached /usr/include/
ln -s /opt/local/include/libmemcached-1.0 /usr/include/
ln -s /opt/local/include/libhashkit /usr/include/
ln -s /opt/local/include/libhashkit-1.0 /usr/include/
ln -s /opt/local/lib/libmemcached.dylib /usr/lib/
ln -s /opt/local/lib/libhashkit.dylib /usr/lib/

Then we can install the module:

pecl install memcached

You'll need to edit your /etc/php.ini and add the following line:

extension=memcached.so
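
You can then confirm the extension loads cleanly from the command line (a quick check):

php -m | grep memcached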

If you want to clean up the symlinks (which will prevent pecl upgrade from being able to upgrade the module) here's how you do it:

unlink /usr/include/libmemcached
unlink /usr/include/libmemcached-1.0
unlink /usr/include/libhashkit
unlink /usr/include/libhashkit-1.0
unlink /usr/lib/libmemcached.dylib
unlink /usr/lib/libhashkit.dylib

XDebug

This is also optional, but I find it's very handy to use with MacGDBp to debug those tricky issues. It's also nice to use with webgrind for profiling.

Use pecl to install XDebug:

pecl install xdebug

You'll need to edit your /etc/php.ini and uncomment the following line:

zend_extension="/usr/lib/php/extensions/no-debug-non-zts-20090626/xdebug.so"

Then add this one:

xdebug.profiler_enable_trigger = 1


This lets you enable the profiler by appending XDEBUG_PROFILE=1 to the query string of a URL.
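
For example, with the d7 virtual host defined below, a profiled request would look something like this (hypothetical URL):

http://d7/node/1?XDEBUG_PROFILE=1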

My VirtualHost Setup

I like being able to have multiple Drupal sites a few keystrokes away, so I create virtual hosts (d7 and d8 in this example) using the following procedure.

Edit /etc/apache2/users/amorton.conf and add a VirtualHost to the Apache config:

# This should really be in httpd.conf but i'm keeping it simple by doing it here:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName d7
    DocumentRoot /Users/amorton/Sites/d7
    <Directory /Users/amorton/Sites/d7>
        AllowOverride All
        Allow from all
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerName d8
    DocumentRoot /Users/amorton/Sites/d8
    <Directory /Users/amorton/Sites/d8>
        AllowOverride All
        Allow from all
    </Directory>
</VirtualHost>

Obviously you'd want to replace amorton with your username.

Add entries to the /private/etc/hosts file:

127.0.0.1       d7
127.0.0.1       d8
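
After saving both files, restart Apache and do a quick check that the virtual hosts respond (a hedged sanity check using tools that ship with OS X):

apachectl configtest
apachectl graceful
curl -I http://d7/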

May 31 2012

Wouldn't it be great if there was an easy way to access php.net or other documentation offline or on a plane?

UPDATE: Sadly, as this blog post went to press, two important updates came out that change the usefulness of this blog post. Dash is now ad-supported, and secondly, it ships with a Drupal DocSet available for download, so that's one fewer step you have to perform to have all the docs that matter to you in Dash.

There's a free-as-in-beer application for Mac OS X called Dash (available on the Mac App Store at http://itunes.apple.com/us/app/dash/id458034879?ls=1&mt=12). Dash is a nice-looking documentation browser with several useful features, such as the ability to query it with a custom URL string (dash://YOURQUERY), which lends itself to use in tools like Alfred.

Dash can also download additional documentation sets for many open source technologies, including MySQL, PHP, and jQuery. It can be handy to search through the latest PHP API documentation no matter what kind of connection you're on, like so:

Dash - Documentation

In addition, Dash also has the ability to browse any API documentation that you have installed through XCode onto your system. (In fact, any files in DocSet format that are located inside the ~/Library/Developer/Shared/Documentation/DocSets directory can be read by Dash.)

In addition to the DocSets freely available for major open-source technologies, it's easy to make your own DocSets using doxygen. I went ahead and made a DocSet for Drupal 7.x using doxygen. Not every method that's available at api.drupal.org is here, but it's a great start, especially if you want a single app for querying documentation offline.

  1. Unzip the file
  2. Move org.drupal.docset to ~/Library/Developer/Shared/Documentation/DocSets/
  3. Launch Dash and start searching, like so.
Dash - Documentation
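
If you'd rather generate a DocSet yourself, a rough sketch of the doxygen settings involved looks like this (assumed values; drupal-7.x is a hypothetical path to a Drupal core checkout, and exact option names may vary by doxygen version):

# Doxyfile excerpt (assumed settings)
PROJECT_NAME      = "Drupal 7"
INPUT             = drupal-7.x
RECURSIVE         = YES
FILE_PATTERNS     = *.php *.module *.inc *.install
EXTENSION_MAPPING = module=PHP inc=PHP install=PHP
GENERATE_DOCSET   = YES
DOCSET_BUNDLE_ID  = org.drupal.docset

Running doxygen with that configuration and then running make in the generated html directory should produce the .docset bundle, which can be installed the same way as above.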

As Director of Engineering with Phase2, Steven Merrill is instrumental in propelling the company into its position as a leader in Drupal architecture and performance. His work in cloud-based hosting architecture, sophisticated caching structures, and ...

Apr 30 2012

In a perfect world, every Drupal module would come with online documentation and support Earl Miles' Advanced Help module for in-depth instructions. In that same perfect world, every site builder and administrator would read the INSTALL.txt and README.txt files that ship with complex modules before trying to install them. Alas, neither of those dreams is likely to become a reality anytime soon. In the meantime, there's Module Instructions -- a simple one-trick-pony that puts the contents of Readme and Install files right on Drupal's module administration page.

Module Instructions additions to the module administration form

There's not much to say about the interface: when you visit the module administration page, Module Instructions scans each module's directory, hunting for install or readme files. If it finds them, it adds links to that module's line on the administration form. It's simple, it's effective, and while it isn't of much use on a production web site, it's a great tool to have when you're experimenting with new modules or sorting out the installed modules for an in-progress site. (The editor of Module Monday, for example, feels the acute pain of installing dozens of modules for testing purposes. Anything that consolidates their documentation is a good thing...)

Module instructions displayed inside of an Overlay window

If you're a developer interested in customizing Drupal 7's module administration form, Module Instructions is also a useful example of how to alter and bend that notoriously complex administration page. It doesn't interfere with the normal documentation or configuration links used by Drupal core, and plays nicely with other administration-page tweaks like Module Filter.

Feb 15 2012

I was halfway done adding some info on how to set up pecl/pear to my guide to running Drupal 6 on OS X 10.6 before I realized I'd been running Lion for almost nine months. So it seemed like a good excuse to update it for Lion. These might be a little wonky since I did an upgrade rather than a clean install, so if you notice anything please drop me a line.

Note: I'll save you the trouble of commenting: I am familiar with MAMP but would rather punch myself in the face than use it. If you'd like to use it, go right ahead, but I'm going to continue to compile my own so I know where everything ends up.

Install XCode

Install MacPorts

Become root

To follow these instructions you need to be running as the root user using the default sh shell. If you've got administrator permissions, you can open up a Terminal window and switch users using the sudo command, then provide your password.

[email protected]:~% sudo su
Password:
sh-3.2#

Install MySQL

Use port to install MySQL:

/opt/local/bin/port install mysql5-server

You'll need to create the databases:

/opt/local/bin/mysql_install_db5 --user=mysql

Let launchd know it should start MySQL at startup.

/opt/local/bin/port load mysql5-server

Secure the server and set a new admin password:

/opt/local/bin/mysql_secure_installation5

Create a configuration file:

cp /opt/local/share/mysql5/mysql/my-large.cnf /etc/my.cnf

Edit /etc/my.cnf using your editor of choice and make the following changes to the [mysqld] section:

  • Change the maximum packet size to 16M:

    max_allowed_packet = 16M

  • Enable network access by ensuring the first line is commented out, and add the second line to limit access to localhost:

    #skip-networking
    bind-address = 127.0.0.1

Restart MySQL to have the settings changes take effect:

/opt/local/etc/LaunchDaemons/org.macports.mysql5/mysql5.wrapper restart

A last, optional, step is to create a symlink for the mysql5 executable so it can be invoked as mysql, and for mysqldump5 as mysqldump:

ln -s /opt/local/bin/mysql5 /opt/local/bin/mysql
ln -s /opt/local/bin/mysqldump5 /opt/local/bin/mysqldump
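
At this point it's worth a quick sanity check that you can connect with the admin password you set above (assuming /opt/local/bin is on your PATH):

mysql -u root -p -e 'SHOW DATABASES;'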

PHP

You need to create a php.ini file:

if ( ! test -e /private/etc/php.ini ) ; then cp /private/etc/php.ini.default /private/etc/php.ini; fi

Now open /private/etc/php.ini and set the correct location for MySQL's socket by finding:

mysqli.default_socket = /var/mysql/mysql.sock

And changing it to:

mysqli.default_socket = /opt/local/var/run/mysql5/mysqld.sock

Repeat for both mysql.default_socket and pdo_mysql.default_socket.

While you're editing php.ini you might as well set the timezone to avoid warnings. Locate the date.timezone setting, uncomment it (by removing the semicolon at the beginning of the line), and fill in the appropriate timezone:

date.timezone = America/New_York

Enable PHP by opening /private/etc/apache2/httpd.conf in the editor of your choice and making the following changes.

  • Uncomment this line:

    #LoadModule php5_module        libexec/apache2/libphp5.so

  • Find and change this one:

        DirectoryIndex index.html

    To this:

        DirectoryIndex index.php index.html

Then restart Apache:

apachectl graceful

Install PEAR / PECL

I scratched my head for a while on this one before finding this setup guide.

php /usr/lib/php/install-pear-nozlib.phar

Then add this line to your php.ini:

include_path = ".:/usr/lib/php/pear"

Now you can update the channels and upgrade the packages:

pear channel-update pear.php.net
pecl channel-update pecl.php.net
pear upgrade-all

Memcache

You don't need this to run Drupal but I use it on production servers and I want to try to match the setup.

Use port to install and start memcached:

/opt/local/bin/port install memcached
/opt/local/bin/port load memcached

Since pecl won't let us pass --with-libmemcached-dir=/opt/local to the configure script, a simple workaround is to add some symlinks:

ln -s /opt/local/include/libmemcached /usr/include/
ln -s /opt/local/include/libmemcached-1.0 /usr/include/
ln -s /opt/local/include/libhashkit /usr/include/
ln -s /opt/local/include/libhashkit-1.0 /usr/include/
ln -s /opt/local/lib/libmemcached.dylib /usr/lib/
ln -s /opt/local/lib/libhashkit.dylib /usr/lib/

Then we can install the module:

pecl install memcached

You'll need to edit your /etc/php.ini and add the following line:

extension=memcached.so

If you want to clean up the symlinks (which will prevent pecl upgrade from being able to upgrade the module) here's how you do it:

unlink /usr/include/libmemcached
unlink /usr/include/libmemcached-1.0
unlink /usr/include/libhashkit
unlink /usr/include/libhashkit-1.0
unlink /usr/lib/libmemcached.dylib
unlink /usr/lib/libhashkit.dylib

XDebug

This is also optional, but I find it's very handy to use with MacGDBp to debug those tricky issues.

Use pecl to install XDebug:

pecl install xdebug

You'll need to edit your /etc/php.ini and add the following lines:

zend_extension="/usr/lib/php/extensions/no-debug-non-zts-20090626/xdebug.so"
xdebug.profiler_enable_trigger = 1

My VirtualHost Setup

I like being able to have multiple Drupal sites a few keystrokes away, so I create virtual hosts (d6 and d7 in this example) using the following procedure.

Edit /etc/apache2/users/amorton.conf and add a VirtualHost to the Apache config:

# This should really be in httpd.conf but i'm keeping it simple by doing it here:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName d6
    DocumentRoot /Users/amorton/Sites/d6
    <Directory /Users/amorton/Sites/d6>
        AllowOverride All
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerName d7
    DocumentRoot /Users/amorton/Sites/d7
    <Directory /Users/amorton/Sites/d7>
        AllowOverride All
    </Directory>
</VirtualHost>

Obviously you'd want to replace amorton with your username.

Add entries to the /private/etc/hosts file:

127.0.0.1       d6
127.0.0.1       d7

Feb 02 2012

Well, it's been a few weeks since the 1st of the year, so of course most of us are either dreading or hoping for the question: "How are you doing with your New Year's resolution?" Today, I'm excited to report on one of mine!

At Phase2, we made a New Year's resolution for community management around our products: to increase the amount of documentation we have for OpenPublic, OpenPublish, and Open Atrium. In November and December, we added documentation on section fronts and boxes for OpenPublish and provided information on our Omega-based responsive design themes in both OpenPublic and OpenPublish.

While I'm pleased that we have increased the amount of information available about our products, we want to work hard this year to add more to what we already have. I have taken the suggestions I received from the community, combined them with my knowledge of what is in each product, and created a plan for what documentation we should add. And thanks to members in the community, some of this documentation has already been written!

We will add the following documentation for OpenPublic:

  • Introduction to OpenPublic
  • Documenting the inclusion of security standards (functionality coming soon!)
  • How to add a media gallery node in OpenPublic content
  • Detailed overview of apps that are included in OpenPublic and how to configure them
  • List of Features specific to OpenPublic
  • Accessibility in OpenPublic
  • Upgrade path

We will add the following documentation for OpenPublish:

  • Product tour
  • Explanation of users and pre-defined roles
  • How to configure your site using OpenPublish
  • More details about content
    • Content types 
    • Images
    • Multimedia Content 
    • Relating Content
    • Features specific to OpenPublish
  • Detailing site structure and sections
    • Themes and Regions
    • Taxonomy Basics
  • Translation
  • FAQs
  • Detailed overview of apps that are included in OpenPublish and how to configure them
  • Documenting our new magazine theme (theme coming soon!)

We will add the following documentation for Open Atrium:

  • How to set up content for anonymous users 
  • Limiting the number of items in the Recent Activity feed
  • How to hide a default page in the notebook
  • Creating a new content type for a calendar 
  • Preventing iCal imports from duplicating
  • Theming
  • Translation
  • Adding icons
  • Creating minisites using Atrium

While it looks like a lot, I know we can get it done! We have had several community members contribute to our documentation already and, as you know, I always encourage you to get involved! We have a documentation wiki for OpenPublish and a documentation wiki for OpenPublic. All you have to do is visit the wiki, sign up for an account if you don't have one already, and you can start creating and editing pages. We also have our community.openatrium.com site that allows you to create and edit pages once you have a login. Additionally, if you have other ideas, please feel free to share them in the OpenPublic Drupal.org group, the OpenPublish Drupal.org group, or on community.openatrium.com.

I look forward to a year when I can say that I have met my New Year's resolution! I can't wait to work with all of you to make it possible!

As our Community Manager, Danielle is responsible for the communities around our products, which include OpenPublic, OpenPublish, Open Atrium, Managing News, and Tattler, as well as the Drupal modules we maintain.  She is also ...

May 11 2011

I was trying to find some docs on how to use Drupal's JavaScript behaviors system to send to some people at work and realized that two years after D6 was released it was still poorly documented. The JavaScript and jQuery page had good examples of how to get JavaScript onto the page from a module or theme but didn't really discuss what to do from that point. I spent some time adding some documentation to the page on drupal.org but wanted to put a copy here for Google's benefit.

After announcing the change on Twitter, Tim Plunkett pointed out that there were already some D7 docs, so I incorporated those.

JavaScript closures

It's best practice to wrap your code in a closure. A closure is nothing more than a function that helps limit the scope of variables so you don't accidentally overwrite global variables.

// Define a new function.
(function () {
  // Variables defined in here will not affect the global scope.
  var window = "Whoops, at least I only broke my code.";
  console.log(window);
// The extra set of parentheses here says run the function we just defined.
}());
// Our wacky code inside the closure doesn't affect everyone else.
console.log(window);

A closure can have one other benefit: if we pass jQuery in as a parameter, we can map it to the $ shortcut, allowing us to use $() without worrying whether jQuery.noConflict() has been called.

// We define a function that takes one parameter named $.
(function ($) {
  // Use jQuery with the shortcut:
  console.log($.browser);
// Here we immediately call the function with jQuery as the parameter.
}(jQuery));

In Drupal 7 jQuery.noConflict() is called to make it easier to use other JS libraries, so you'll either have to type out jQuery() or have the closure rename it for you.

JavaScript behaviors

Drupal uses a "behaviors" system to provide a single mechanism for attaching JavaScript functionality to elements on a page. The benefit of having a single place for the behaviors is that they can be applied consistently when the page is first loaded and then when new content is added during AHAH/AJAX requests. In Drupal 7 behaviors have two functions, one called when content is added to the page and the other called when it is removed.

Behaviors are registered by setting them as properties of Drupal.behaviors. Drupal will call each one and pass in a DOM element as the first parameter (in Drupal 7 a settings object will be passed as the second parameter). For the sake of efficiency, the behavior function should do two things:

  • Limit the scope of searches to the context element and its children. This is done by passing the context parameter along to jQuery:
    jQuery('.foo', context);
  • Assign a marker class to the element and use that class to restrict selectors to avoid processing the same element multiple times:
    jQuery('.foo:not(.foo-processed)', context).addClass('foo-processed');

As a simple example, let's look at how you'd go about finding all the https links on a page and adding some additional text marking them as secure, turning <a href="https://example.com">Example</a> into <a href="https://example.com">Example (Secure!)</a>. Hopefully you can see another important reason for using the marker class: if our code ran twice, the link would end up reading "Example (Secure!) (Secure!)".

In Drupal 6 it would be done like this:

// Using the closure to map jQuery to $.
(function ($) {
  // Store our function as a property of Drupal.behaviors.
  Drupal.behaviors.myModuleSecureLink = function (context) {
    // Find all the secure links inside context that do not have our processed
    // class.
    $('a[href^="https://"]:not(.secureLink-processed)', context)
      // Add the class to any matched elements so we avoid them in the future.
      .addClass('secureLink-processed')
      // Then stick some text into the link denoting it as secure.
      .append(' (Secure!)');
  };

  // You could add additional behaviors here.
  Drupal.behaviors.myModuleMagic = function(context) {};
}(jQuery));

In Drupal 7 it's a little different because behaviors can be attached when content is added to the page and detached when it is removed:

// Using the closure to map jQuery to $.
(function ($) {
  // Store our function as a property of Drupal.behaviors.
  Drupal.behaviors.myModuleSecureLink = {
    attach: function (context, settings) {
      // Find all the secure links inside context that do not have our processed
      // class.
      $('a[href^="https://"]:not(.secureLink-processed)', context)
        // Add the class to any matched elements so we avoid them in the future.
        .addClass('secureLink-processed')
        // Then stick some text into the link denoting it as secure.
        .append(' (Secure!)');
    }
  };

  // You could add additional behaviors here.
  Drupal.behaviors.myModuleMagic = {
    attach: function (context, settings) { },
    detach: function (context, settings) { }
  };
}(jQuery));
Feb 01 2011

This weekend at the Drupal Developer Days in Brussels, I'm giving a presentation about documentation systems for/in Drupal. The main focus of the presentation will be different new technologies that we've been working on to improve Drupal's documentation technology stack. We believe that these could revolutionize the way we do documentation for Drupal projects and at the same time open up a new market for our favorite CMS.

To test the relevancy of my recommendations for the community, I've put together a questionnaire whose results should help us better understand your needs and priorities. To make it even more worth your while, we are giving away Mr T-shirts to people who submit a filled-out questionnaire:

  • t-shirts will be handed out to the first 3 people to complete the questionnaire
  • 3 t-shirts will go to people we'll select at random from participants that filled out the questionnaire and tweeted about it
  • 3 more t-shirts will be given to people we'll select at random from the remaining participants

We won't be shipping the t-shirts; if you win, you or a friend will have to come pick them up at BDDD or at DrupalCon Chicago.


Jan 18 2011

GVS has had a busy December and a great start to 2011!

NYC and Denver Drupal 7 release parties

The NYC Drupal 7 Launch Party, hosted by Growing Venture Solutions, Treehouse Agency, and ThinkDrop took place on January 7, 2011.

We were delighted at the turnout and enthusiasm for Drupal 7!

Crowd of attendees
Happy crowd!

Group photo of organizers and speakers
The speakers were Rob Purdie (Economist), Todd Ricker (New York Stock Exchange) and Thomas Turnbull (Zagat). The organizers were Lisa Rex, Claudina Sarahe, Ezra Gildesgame, Amy Cham and Ben Jeavons.

Photo of Dries on Skype
We did a video link up with the Boston party. Oh look, it's Dries Buytaert!

Photo of Greg Knaddison and Kevin Bridges on Skype
And then we did a video link up with the Denver and Austin parties. Everyone say hi to Greggles and Kevin Bridges!

Photo of Jacob
And Jacob Redding wore his Drupal suit and tie!

Photo of giving prizes away
We gave away raffle prizes from Lullabot, Packt Publishing and Mastering Drupal, and there was some additional sponsorship support from CivicActions and Bluestocking Collective.

The official photographer for the NYC Drupal 7 party was Matt Cham. Additional d7rp_nyc party photos are on Flickr.

GVS also co-hosted the Denver party, with Greg Knaddison in attendance. The next three photos are by Keith Stansell.

Danomanion, Andrea, Aaronnott, Greggles, Justin Christoffersen and friend
Dano Manion, Andrea, Aaron Nott, Greg Knaddison and Justin Christoffersen. I'm holding up a nice Certified to Rock sticker ;)

Matthew Saunders and Andy of Examiner.com
Andy Packard and Matthew Saunders from the team at Examiner.com.

GVS <3 sprints

GVS and Treehouse sponsored a D7CX code sprint at the offices of GVS neighbor HUGE in Brooklyn, NY.
Progress made at the sprint included working automated tests for the Nodequeue module and a stable Drupal 7 release of the Security Review module, amongst other work.

Ezra, Vikram Yelanadu, Thomas Turnbull, Andrew Morton, Steven Merrill

Meanwhile, Lisa participated in the Drupal Docs sprint weekend in Vancouver where the core documentation team hashed out directions and problems the Docs team face (see issues tagged docsyvr2010 for things that were raised during the meeting). On Saturday, there was a Docs team sprint with special guests chx, dmitrig01, and webchick!

Dec 03 2010

The station's website will be built using Drupal, an extremely powerful, open source content management system written in PHP.

Drupal uses some PHP functions that require the installation of additional ports. You'll need:

  • devel/php5-pcre - Perl regular expressions.
  • textproc/php5-xml - XML parsing.
  • textproc/php5-simplexml - Simple XML.
  • databases/php5-mysqli - MySQL support for PHP.
  • www/php5-session - Session support.
  • ftp/php5-curl - cURL support.
  • graphics/php5-gd - Image handling. Optional, some modules need it.
  • converters/php5-mbstring - Unicode support. Optional, but Drupal prefers that it be installed.
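
Each of these can be built from the ports tree in the usual way; for example (run as root):

cd /usr/ports/textproc/php5-xml && make install clean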

Create the database

Create Drupal's database:

$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 5.1.11-beta

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> CREATE DATABASE drupal;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON drupal.* TO [email protected] IDENTIFIED BY 'a secret password';
Query OK, 0 rows affected (0.00 sec)

mysql> quit;
Bye

Download

Check out Drupal with Drush and give the web server permission to read it:

$ cd /usr/local/www
$ drush dl drupal-6.x --drupal-project-rename
$ chown -R root:www drupal

Now that we've created the webroot directory, we need to (re)start Apache:

$ /usr/local/etc/rc.d/apache22 restart
Performing sanity check on apache22 configuration:
Syntax OK
apache22 not running? (check /var/run/httpd.pid).
Performing sanity check on apache22 configuration:
Syntax OK
Starting apache22.

Create Drupal's files directory:

$ cd /usr/local/www/drupal/sites/default
$ mkdir files
$ chmod g+w files/

Since Drupal 6 does the rest of the setup via the web, we need to create a configuration file and temporarily allow the web user to modify it:

$ cp default.settings.php settings.php
$ chmod g+w settings.php

Setup

Open up your browser and point it to your webserver. You should be presented with a wizard that will walk you through the setup. One step will ask for your database name, user and password:

You will also be prompted for credentials for an admin user.

Once the installation is complete, remove the write access from Drupal's configuration file with the following command:

$ chmod g-w /usr/local/www/drupal/sites/default/settings.php

Cron jobs

Drupal has different tasks that need to be run periodically. To do so, it provides a script named cron.php that is run via the web server. We can use FreeBSD's cron program to automate the process of running the script.

Add a cron task to the web server's user account:

$ crontab -e -u www

And add the following line to the file to run the script once an hour. Remember to change the URL:
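
A typical entry (hypothetical hostname; this sketch uses FreeBSD's built-in fetch to request cron.php at the top of every hour) would look something like:

0 * * * * /usr/bin/fetch -o /dev/null -q http://www.example.com/cron.php
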
Nov 28 2010

In February LeeHunter posted his wish list of features for "an awesome technical communication CMS". I've copied his list, processed the comments in the discussion along with some of my own, and added our current status implementing these features, plus ideas for how any missing features could be implemented in the near future.

I'm very much aware that in the coming months most documentation efforts will probably go into getting the documentation updated for Drupal 7, and it's strategically a bad idea to at the same time change from a freeform format to a structured documentation format, rebuild the infrastructure and do a major content update.

Nothing prevents us however from dreaming up the ideal infrastructure, and if I know what the doc team is dreaming about, I might be able to steer our (Pronovix') work on the DITA documentation system into that direction ;)

There might also be certain technologies that could make the documentation upgrade less painful (e.g. using the versioning system mentioned below).

*************************

VERSIONS - Instead of having separate files for every version of Drupal core, it would be way faster and easier to manage if we could have 1 document, with specially marked up sections for the different versions of Drupal. You could still split the display for the documentation in different tabs. This would significantly reduce the work on future core upgrades.

  • Ready: this is part of the DITA standard, displaying it would be a matter of using different XSLT's. We (Pronovix) have built a UI for this in Spezzle before using RDFa markup in the semantic filters and layers.

IMPORT TOOL - Ability to import XML-based content in DITA, DocBook, or custom formats. Able to monitor file folders or external databases/repositories for changed content and synch automatically or on demand.

  • Ready: We can now import DITA content as an individual file upload and by monitoring a file folder using our feeds extension; this could be used to import from a folder on the Drupal site into which you check out a Git repository.
  • Future: Using CMIS it should be possible to connect to Sharepoint, Alfresco and other document repositories. For Drupal.org that would however not be immediately useful.

AUTHORING TOOL - WYSIWYG interface for creating and editing schema-compliant content. The writer can easily apply tags to content (but only the permitted tags for a given context!). The author can be given suggestions and prompts. There is a mechanism for conditional text (i.e. the writer can tag text and images so that different versions can appear in different publications). All this would require a lot of JQuery love to be usable.

  • Ready: With poorman's DITA users can create valid DITA without getting lost in the markup options.
  • Selective enrichment WYSIWYG: We would probably want to have some basic WYSIWYG editor that lets users add a subset of the attributes and sub-elements of the main elements inside the CCK fields. A full fledged WYSIWYG editor that is capable of implementing the full DITA spec and that is still easy to use for beginners would be an epic project.
  • full WYSIWYG: My colleague Yorirou has an idea for a technically achievable editor that would be able to deal with any type of XML specifications.

PUBLISHING/EXPORT TOOL - Provides drag-and-drop interface for organizing content into single sourced publications (i.e. the same chunk can appear in many places). The publications can be web-based, PDF, XML, online Help etc. Different versions can be built using tags and conditional text.

  • Ready: We've made a system that transforms the .mm format mindmaps from Graphmind into DITA maps (XML), that are in DITA the basis for making documentation from the "single source" topics. We've also implemented an export functionality that uses the DITA Open Toolkit to export to PDF, XHTML, archive, etc. With this system it's possible for users to create their own "Complete guide to building an xyz site" outlines
  • Future: Right now the mindmaps are not validated in the UI against their DITA map convertability (it's possible to create mindmaps that will fail conversion). It would be nice to have input validation in the UI. There are some other modifications to Graphmind that would make it easier to manage documentation (e.g. create new documentation topics from Graphmind, being able to edit topics in the map, etc.)

DOCUMENTATION CLIENT & SERVER - Lets you use and update documentation from inside a Drupal installation context. This feature would make it easier to bridge the gap to end-user documentation in individual projects. This could do for the documentation team what the localization server and client did for the translations teams worldwide.

  • Proof of concept: We've got a first proof of concept for a documentation server that lets you connect specific form elements in your installation with help topics on the documentation server.
  • Future: Advanced help and Help Inject make it fairly easy to package and reference help topics with html files that can be shipped with an installation. In the future it would be great to export all the help topics for a specific project into a help feature that than makes these topics available in their respective contexts.
  • Ready: Technically this is solved; we would need to decide what content we would want to aggregate, from where, and how.

REUSING EXISTING DRUPAL CONTENT - As Andyoram mentions in his comment, there are a lot of documentation snippets out there spread across all the Drupal blogs that are currently only searchable with Google. We should be able to map these snippets from the documentation site and possibly reuse them in more extensive documentation topics.

  • Ready: with prepopulate it's easy to make a bookmarklet that would let users submit these snippets to their documentation trail on the documentation server.
  • Future: We've previously developed faceted insert, a proof of concept for a search plugin that lets you search for nodes on a site using faceted search and then insert the found text into a text area. Right now that works with faceted search, but it should not be too complex to make this pluggable and use snippets from Solr.

INTEGRATING WITH API.DRUPAL.ORG - Currently the API documentation and the docs on d.o. are not really integrated (except for individual in-content references). It would be nice to more closely integrate both systems.

  • Future: Ideally API documentation would be available as reference DITA topics, maybe there is a way to automatically map the docs to DITA references?

WORKFLOW - We would need to figure out what workflow/permission set will get us the best quality/number of contributions trade-off.

  • Ready: Technologically this should not be an issue; Drupal contrib has everything we need for this. This is more an organizational issue.

SCHEMA - We could make a DITA specialization for Drupal e.g. have a topic type for modules, themes, features and installation profiles.

  • Ready: Technically this is not hard, we can use CCK to build a couple of extra poorman's DITA topic types specifically for these purposes and extend the module to transform the form into valid DITA. We however would need to figure out the types of information we would like to include in these forms.

TAXONOMY - With RDFa in Drupal 7, keywords will be marked up in RDFa and sites will become queryable with SPARQL. Now would be a great time to make a central Drupal vocabulary for blogs that write about Drupal, so that the Drupal blogosphere becomes queryable as a whole.

  • Ready: Neologism which was developed by DERI is built for this purpose. If we can get this tested and documented, we could have several Drupal shops use the vocabulary once they switch to D7.

FLAGGING - As jhodgdon mentions it would be great if users could flag content in the documentation with flags like "to look at later" or "daily reference".

  • Ready: Easy win with the flag module
Nov 23 2010

Technological leadership

Active in product innovation and open source software services, we build lasting value for our customers and the Drupal community.

Empowering relationships

Caring about clients and the outcome of their projects, we get involved and make meaningful relationships that give our customers maximum independence.

Community engagement

Dedicated to fostering the Drupal open source community, we catalyze event and development initiatives, resulting in real change and growth.

Nov 16 2010

Big news in Affinity Bridge offices today! The announcement went out on Drupal.org this morning that I (arianek) and Jennifer Hodgdon (jhodgdon) from Seattle have been appointed by Dries Buytaert as the new Drupal Documentation Co-Leads!

This has been a long time in the making. It's really exciting to have the work I've been doing recognized, and also have the ability to leverage that work to improve documentation and help build a stronger Docs Team. On top of that, my community spotlight was also posted today, which is incredibly heartwarming.

As someone who loves both working with Drupal and giving back to the Drupal community, participating as a member of the Docs Team has been an incredible experience for me. I've learned so much through this work, and been privileged to learn from some of the brightest Drupal contributors, and to call them friends after many hours spent on IRC and at various conferences and camps working on docs.

Why do I love working on docs?

  • I like editing. Style guides, consistent and clear language, and correct formatting and grammar make me happy.
  • I like writing. I have always enjoyed it, and this has been a great opportunity for me to apply that towards something that will be useful to many people.
  • I like learning! I have learned an immense amount about Drupal's functionality and the rationales behind much of how it works by working on docs. Helping out with docs is one of the best ways to learn more about Drupal, because it always gets you testing and tinkering. It's also a great opportunity to learn from the community's developers and themers, who are always happy to answer questions for Docs Team members.
  • I like mentoring. I'm still cultivating this skill, but I really enjoy helping others learn how to contribute, and trying to get other people enthused about working on docs.
  • I like giving back to the Drupal project. Open source projects depend heavily on the work of passionate and dedicated contributors, and Drupal has given so much to me, it's the least I can do.

Big thank-you!

I'd like to say a huuuuuge thanks to Mack. His ongoing support for my work in the community has been incredibly generous and motivating for me. As of about six months ago, he's increased Affinity Bridge's sponsorship to cover all of my work on Drupal docs/core, community event organization, and running docs sprints. Without his ongoing support I'm sure I wouldn't have been able to dedicate as much time as I do to this work. He's certainly walking the walk when it comes to being a nurturing business owner within the Drupal community, and deserves some major mackh++'s.

Want to work on docs too?

Sep 23 2010

In an earlier post I explained that Graphmind could be used as a tool for building ditamaps. In this post I'll explain how we could do a 1 to 1 translation of native .mm features to essential ditamap features.

Graphmind is a Flex frontend that uses the Freemind .mm mindmap XML format. In its current implementation it requires a Drupal backend (it uses Drupal services and expects node objects), but it is not inconceivable to generalize the API so that Graphmind could be used on top of other backends.

Ditamaps are the backbone of a DITA documentation set and can be used to repackage part of a DITA documentation repository into a derivative documentation document (e.g. PDF, Windows help files, etc.).

There is already an XSL document that transforms .mm into .ditamap, implementing the basic hierarchy in a ditamap. It's a very basic transformation, but it demonstrates with 25 lines of code that it should be fairly straightforward to get this feature working (e.g. it doesn't handle attributes properly and it doesn't add a topichead). The output looks like this:

With 2 additional conventions we could have a fully functional ditamap:

  • Root level mindmap nodes could become topicheads for a branch
  • Local hyperlinks could be used to indicate relationships that in DITA can be used to build the relationship table

One way to automatically add those local hyperlinks would be a little auto-link function in the Graphmind interface that lets you connect mindmap nodes that share specific attributes. This way you could tag the DITA topics once with relevant keywords in Drupal and then automatically generate the relationship table in the ditamap that links topics that share keywords.

With those additions we could in a fairly short time frame (a couple of weeks of development) make a tool that would be very useful for the DITA community: just import the DITA topics that were created with an external tool into Drupal nodes and use the Drupal views services to create DITA maps in the Graphmind interface. This first version could then be used as a bridgehead to get feedback and contributions from the DITA community to build the actual editor...

Besides these plans, we are also thinking about the following features for Graphmind:

  • a really nice feature for Graphmind would be concurrent multi-user editing.
  • We've also been playing with the idea of developing an HTML5 implementation of Graphmind that would run on iPhones and iPads
  • and we have plans to make Graphmind an RDF browser that queries RDF stores and builds dynamic mindmaps.

The last 2 plans are only plans, but we might have found a sponsor for the multi-user editing.

Sep 19 2010

Video: 

In this video I explore a couple of tools and design decisions that we are making for the DITA documentation distribution we are building in Drupal.

I talk about:
-Translating DITA into a field structure
-XML WYSIWYG editor
-Bookmarklet for submission of related content
-Building DITA maps in a drag & drop UI


Sep 14 2010

A few weeks ago, I embarked on my first overseas trip to go to Copenhagen for this year's European DrupalCon. It was my 4th DrupalCon to date, but I've been wanting to attend one of the European ones for a while, as they have a reputation for having a different vibe than the North American ones (and of course so I could finally see some of Europe!)

The Core Dev Summit (+ Code Sprint Day)

Like the last conference in San Francisco, it was prefaced with the Core Developer Summit, which is a full day of presentations, discussions, and code sprinting on the core Drupal platform. The Core Dev Summit is the single day (twice a year at this point), where a good number of the people who work on Drupal core come together to take a step back and discuss in-depth any ideas or concerns. This often leads into some dedicated sprinting on core related issues (as well as some of the most crucial contributed modules).

I attended mainly for two purposes: to keep on top of what all the core developers are up to and get some face time with them (since I usually only talk to them online), and to make sure there was some representation from the Drupal Docs team there.

I've been working on the online Drupal documentation a lot lately, helping to prepare it for the Drupal 7 launch, and ended up leading an impromptu docs sprint when several people volunteered to work on the handbook for the second half of the day. It was great to get some help both from people who were new to docs and from a couple of fairly hardcore long-time developers. Big thanks go to Djun Kim (aka. puregin) for working on the handbook page for the new-to-Drupal-7 File module, and to Ken Rickard (aka. agentrickard) for working on the new-to-Drupal-core Field and Field UI handbook pages. It was fantastic having help from some great developers writing these, and Ken actually found a pretty big permissions bug while writing the page.

...when you write documentation, you are forced to take a bit of code and really understand it. You [read] through it, make sure it does what you're saying it does, and test it. Guess what happens when you dig into code that deeply? You find bugs!

And because it's so encouraging (and true), I have to add this other bit he posted:

If you are interested in getting involved in core, working the docs queue is the single best way to do it. You find bugs other people miss, the patches are generally easy to get committed, you get used to the issue queue and creating patches, and best of all the patches are enormously valuable. Get to it!

(Off-but-on-topic, Angie Byron, aka. webchick, just put up a great post on contributing documentation on the Lullabot blog today, go read!)

Neil Drumm (aka. drumm) who works on the API docs and is currently helping manage the Drupal.org redesign was there as well, so I got to review some of the docs.drupal.org in-progress redesign with him. The redesign team has been doing a fantastic job, and I'm really looking forward to the relaunch and some of the freedom that will be afforded by having a separate subdomain for documentation.

I was also really pleased to get the opportunity to participate in a discussion about the CVS application process, which has been a hot topic recently. Sam Boyer (sdboyer), who is working on the Drupal git migration, led a discussion to get feedback from many long time core contributors. Mainly, we talked about what is still broken in the process, what needs to change, and what small but effective changes could be made during the git migration to help improve matters. The main suggestions focused on how to manage namespace and the number of modules, how to mentor new applicants, and the need to recruit more reviewers.

The post-conference Code and Docs Sprint Day was also extremely productive even though I was feeling a bit off and had to lead the docs sprint from back at the apartment! We did a kickoff over Skype then worked over IRC the rest of the day, and powered through a TON more of the core module handbook docs and some work on the install and upgrade guides. I really missed being able to work in person with everyone, but still want to thank all who turned up and cranked out some awesome docs work, namely: Steve Kessler (DenverDataMan), Alex Pott (alexpott), Barry Madore (bmadore), Marika Lundqvist (marikalu), Miro Scarfiotti (smiro2000), Paul Krischer (SqyD), Carolyn Kaminski (Carolyn), Khalid Jebbari (DjebbZ), and last but not least Boris Doesborg (batigolix), who I am really sad not to have met in person, as he worked a bunch with me on the D7 Help initiative over the winter. Next time! You all rock, hope to see you around the docs queue and IRC till the next con.

The rest of DrupalCon...

I had to agree with what I'd heard about the European cons, as I did feel a lot more of a community vibe (probably due to the smaller size; it was about the same size as my first DrupalCon in Boston in 2008), and did not see a lot of the corporate aspects that have become part of the North American cons of late. Those are, of course, part of Drupal's growth, but they do change the atmosphere.

The sessions I went to were all really fantastic. I think my favourites had to be:

  1. The Managing a Drupal Consulting Firm panel (video) - Todd Nienkerk and Aaron Stanush (Four Kitchens), Thomas Barregren (NodeOne), Vesa Palmu (Mearra), Matt Cheney (Chapter Three), Liza Kindred (Lullabot), Eric Gundersen (Development Seed), and Tiffany Farriss (Palantir) sharing stories and tips for how to be a successful and happy Drupal consulting firm. Great ideas, and bonus high comedic value!
  2. Jeff Miccolis' (jmiccolis) For Every Site a .make File - great review of .make files and associated development practices (couldn't find the video; if anyone knows where it is, please comment!)

And though I didn't attend it, Amitai Burstein's session on Group, which is the Drupal 7 iteration of Organic Groups (video) was the crowd favourite, and highly recommended as one to watch online.

What else can I say? It was a fantastic week with a bunch of fantastic people. As @timbertrand put it:

"Dear Proprietary Social SW Vendor -
this is only a taste of our development team"

See you next time!

Aug 19 2010
Aug 19

The following is part of a first proposal for the specification of the documentation system we want to build as part of the modulecraft project. It is by no means complete, and it strongly needs your feedback. This is our first encounter with DITA, and our ideas should really be checked by technical writers who have extensive experience using DITA. The actual specification is being built as a wiki at groups.drupal.org. Comments can be added here, or you can edit the wiki directly; any feedback will be incorporated into the wiki.

Why DITA

Several people from the community have indicated DITA as the ideal architecture for a new redesigned Drupal documentation.

The Darwin Information Typing Architecture is an open XML standard curated by the OASIS consortium and originally developed by IBM. DITA was built to enable single-source documentation: you make one central set of documentation topics that can then be reorganized into new so-called DITA maps to serve different documentation purposes.

Drupal needs single-sourced documentation: it would solve our current problem of having one documentation structure that must serve all purposes and include all topics. We could build DITA maps for different user types, distributions, projects, etc.

There are existing tools that can then convert DITA output into a number of formats.

Store as XML or XHTML

DITA is currently mostly managed in dedicated tools that edit the XML directly. To display the documentation on a website afterwards, it needs to be converted to XHTML. Drupal has some tools to work with XML, but since Drupal is mostly used to publish HTML, these tools (e.g. the XML filter) are few and little tested.

Storing everything in one XML-formatted text blob would make it harder to edit the documentation and would require either editing in XML or periodically converting the changed XHTML back to DITA (IMHO this defeats the whole purpose of single sourcing). Since 90% of the time we'll be working with web content, it's better to store XHTML and convert to DITA only when needed.

Using fields to simplify the interface

Adding a fixed set of fields to a form makes it easier to enforce and simplify the input of structured data. Using the Drupal (CCK) fields system to build those parts of the DITA structure that are the main sections of a topic (e.g. title, description) will make the format easier to use for people with less experience working with DITA-formatted information. Inside these fields we can then use a markup editor (could be WYSIWYG or something else, see a later article) to add the 'freestyle' markup that can appear inside those main containers (only offering valid markup options).

Such a system, with CCK and an interface that allows for the creation of new tags, could make it really easy to create specializations (this is what the D (Darwin) in DITA stands for: evolutionary extensibility of the basic topic types). If there is budget and sufficient demand, we could potentially build a system that automatically derives the definition/specification of custom DITA topics from the CCK/markup settings.
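To make this a bit more concrete, here is a rough, hypothetical sketch of how one of those main topic sections could be created as a locked CCK field from code. Everything in it (the dita_concept content type, the field name, the chosen widget) is invented for illustration, and a real field definition would carry many more settings than the handful shown here.

<?php
// Hypothetical sketch: give a 'dita_concept' content type a structured
// 'shortdesc' field, so the topic's short description is a real field
// rather than free-form markup. Names and settings are placeholders.
module_load_include('inc', 'content', 'includes/content.crud');

$shortdesc_field = array(
  'field_name' => 'field_dita_shortdesc',
  'type_name' => 'dita_concept',
  'type' => 'text',
  'widget_type' => 'text_textarea',
  'label' => 'Short description',
  'required' => 1,
  'locked' => 1,
  // A full definition (e.g. one exported with var_export()) would include
  // many more widget and display settings; defaults are assumed here.
);
content_field_instance_create($shortdesc_field);
?>

The same approach would apply to conbody, related-links and the other primary branches; the free-form markup inside each field is where the editor described in the next section comes in.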

Using RDFa to add not always present XML elements

DITA (specialized) topics have elements that are primary branches in the XML tree and that, if present, are used once and mostly in the same place. Concepts, for example, have a title, a shortdesc, a conbody and related-links. The task element, for example, has a taskbody that in turn contains the prereq (prerequisites) and steps elements.

The step element, however, has several useful child elements that are not always required and that are more free-form in their use, e.g. info (additional information about the step), stepxmp (an example that illustrates a step), substeps, choices (the user needs to choose one of several actions) and stepresult (the expected outcome of a step).

These in-field elements could be added using an RDFa vocabulary that we derive from the DITA markup. That way we would primarily create valid XHTML that can be converted to DITA when it needs to be exported. The RDFa markup could be applied using a specialized WYSIWYG editor that is context aware (so that only valid child elements are added).

Do you agree with the above? Have any remarks? Have another solution? Let us know in the comments!


Jul 30 2010
Jul 30

This week we launched modulecraft.com, a fundraising tool that we want to use to raise interest, involvement and money for the development of a series of tools for Drupal professionals. Pure donation systems like chip-in have a pretty bad track record, but a donation/reward system has, to our knowledge, not yet been tried in the Drupal community. When you donate you will be contributing to the community AND getting something valuable in return.

We launched the platform with Documentation+, our first fundraising effort, whose primary aim is the development of a documentation distro for Drupal.

For a couple of years now, people in the documentation team have been wanting to implement a DITA architecture for the documentation. DITA is an open standard that was initially developed by IBM and is now managed by OASIS. It is fairly young, but has gained a lot of momentum in the documentation industry.

We want to build a documentation distribution that uses a similar approach to the localization server and that enables a distributed/federated documentation architecture for the Drupal project. As a Drupal user you'll be able to get a set of documentation from the drupal.org docs server imported into your own site. You will then be able to edit it and build subsets of the documentation for your own projects. You'll also be able to submit topics that you edited or created on your own infrastructure and add them as suggestions to the Drupal documentation server. DITA has a modular format that makes it possible to reuse the same documentation topics in different maps, so you'll be able to create dedicated documentation sets for your projects.

You'll then be able to export the documentation in the DITA format, which can in turn be transformed, with the help of the DITA Open Toolkit, into XHTML, Microsoft Compiled HTML Help (aka Windows Help or .chm), Eclipse Help, Java Help, Oracle Help, Rich Text Format and PDF (through XSL-FO).

Using a newer version of the semantic editor/filter/layer system that we developed before, you'll be able to add DITA attributes inline, for example audience type (technical, management, end user) or platform (Windows, Linux, Mac).

The DITA distribution, just like the localization server, will be useful for projects outside of the Drupal community. A lot of people are looking for a wiki-style web interface for building DITA documentation; the documentation distribution will give them a platform on which they can build Drupal sites that fit their individual needs.

With this fundraising website we want to involve and commit as many people in the process as possible. If you want to help: submit or vote on user stories, follow us on twitter, blog or tweet about us (check out our banners) and, if you can, consider donating.


Jun 04 2010
Jun 04

Twice today I've had to deal with writing a SQL query that needed data in a CCK field. The naive approach is to just look at the table and field names and plug them into your query:

<?php
$result = db_query("SELECT COUNT(*) AS count FROM {node} n
  INNER JOIN {term_node} tn ON n.vid = tn.vid
  INNER JOIN {content_type_date} ctd ON n.vid = ctd.vid
  WHERE tn.tid = 25 AND ctd.field_date_value > NOW() AND n.changed > %d", $newtime);
?>


Often this will work just fine but since CCK can dynamically alter the database schema (when you add a field to a second content type or change the number of values) the query may break.

Fortunately CCK provides functions for finding a field's table and column names so it's simple to do it correctly:

<?php
$field = content_fields('field_date');
$db_info = content_database_info($field);
?>

A var_dump($db_info) gives:

array(2) {
  ["table"]=>
  string(17) "content_type_date"
  ["columns"]=>
  array(2) {
    ["value"]=>
    array(6) {
      ["type"]=>
      string(7) "varchar"
      ["length"]=>
      int(20)
      ["not null"]=>
      bool(false)
      ["sortable"]=>
      bool(true)
      ["views"]=>
      bool(true)
      ["column"]=>
      string(16) "field_date_value"
    }
    ["value2"]=>
    array(6) {
      ["type"]=>
      string(7) "varchar"
      ["length"]=>
      int(20)
      ["not null"]=>
      bool(false)
      ["sortable"]=>
      bool(true)
      ["views"]=>
      bool(false)
      ["column"]=>
      string(17) "field_date_value2"
    }
  }
}

After noting that the field has two columns and making our choice, we've got the pieces to plug into the query:

<?php
$field = content_fields('field_date');
$db_info = content_database_info($field);
$result = db_query("SELECT COUNT(*) AS count FROM {node} n
  INNER JOIN {term_node} tn ON n.vid = tn.vid
  INNER JOIN {" . $db_info['table'] . "} ctd ON n.vid = ctd.vid
  WHERE tn.tid = 25 AND ctd." . $db_info['columns']['value']['column'] . " > NOW() AND n.changed > %d", $newtime);
?>

The query is a bit harder to read, but you've future-proofed your code so you won't be back fixing it six months from now when you reuse that date field on another content type.
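If you end up writing this pattern in more than one place, it can be worth hiding the lookup behind a tiny helper so the queries stay readable. A possible sketch (the helper name is made up) built on the same two CCK functions:

<?php
/**
 * Hypothetical helper: look up the table and one column for a CCK field so
 * queries don't hard-code schema details that CCK may change later.
 */
function example_cck_field_sql($field_name, $column = 'value') {
  $field = content_fields($field_name);
  $db_info = content_database_info($field);
  return array(
    'table' => $db_info['table'],
    'column' => $db_info['columns'][$column]['column'],
  );
}

// Usage, mirroring the query above:
$date = example_cck_field_sql('field_date');
$result = db_query("SELECT COUNT(*) AS count FROM {node} n
  INNER JOIN {term_node} tn ON n.vid = tn.vid
  INNER JOIN {" . $date['table'] . "} ctd ON n.vid = ctd.vid
  WHERE tn.tid = 25 AND ctd." . $date['column'] . " > NOW() AND n.changed > %d", $newtime);
?>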

Mar 07 2010
Mar 07

If you've got Drush installed—and you really should—you can use the following recipe to set up a backup system that will maintain a rolling week of daily backups. Most of the logrotate configuration is based on a Wikibooks book that I found.

Find the pieces

Make sure logrotate is installed:

whereis logrotate

Which should print something like:

logrotate: /usr/sbin/logrotate /etc/logrotate.conf /etc/logrotate.d /usr/share/man/man8/logrotate.8.gz

So for this site we'll use the full path /usr/sbin/logrotate to run the program.

If you don't know where drush is installed you'll probably want to repeat the process to determine its location. The site I'm working on right now is hosted by May First, a very Drupal friendly ISP (and an amazing progressive group), so they've installed drush at /usr/bin/drush.

drush needs to be able to find the correct settings.php file to connect to your database. Specify the root of your Drupal site using the -r switch. You can test that it's able to locate your settings using the following command:

/usr/bin/drush -r ~/dev.rudemechanicalorchestra.org/web sql conf

If it works you'll see an array with your database connection information.

Hook 'em up

Create the state and configuration files:

touch ~/.logrotate.state ~/.logrotate.config

Edit ~/.logrotate.config and insert the following text:

~/backup/dev.sql.gz {
        rotate 7
        daily
        nocompress
        nocopytruncate
        postrotate
          /usr/bin/drush -r ~/dev.rudemechanicalorchestra.org/web/ sql dump | gzip > ~/backup/dev.sql.gz
        endscript
}

logrotate expects that the file will already exist so we need to use drush to create the first one:

/usr/bin/drush -r ~/dev.rudemechanicalorchestra.org/web/ sql dump | gzip > ~/backup/dev.sql.gz

Test that logrotate will work correctly:

/usr/sbin/logrotate --state=~/.logrotate.state ~/.logrotate.config --debug

If everything is working correctly you'll see something like:

reading config file /home/members/rmo/sites/dev.rudemechanicalorchestra.org/users/rmodev/.logrotate.config
reading config info for "/home/members/rmo/sites/dev.rudemechanicalorchestra.org/users/rmodev/backup/dev.sql.gz"

Handling 1 logs

rotating pattern: "/home/members/rmo/sites/dev.rudemechanicalorchestra.org/users/rmodev/backup/dev.sql.gz"  after 1 days (7 rotations)
empty log files are rotated, old logs are removed
considering log /home/members/rmo/sites/dev.rudemechanicalorchestra.org/users/rmodev/backup/dev.sql.gz
  log does not need rotating

Schedule it

Edit your crontab:

crontab -e

And add the following line which will run logrotate at midnight:

0 0 * * *       /usr/sbin/logrotate --state=~/.logrotate.state ~/.logrotate.config

Sleep a little better

That's it: you should now have a week's worth of daily backups. You'll want to check back on it tomorrow and make sure that the backups are actually occurring and that the old ones are being renamed to .sql.gz.1, .sql.gz.2, etc.

Sep 12 2009
Sep 12

Running Drupal on OS X 10.5 was a pretty huge pain in the ass. It's much easier in 10.6 since it includes PHP 5.3 with GD and PDO out of the box. And Drupal 6.14 resolves the PHP 5.3 incompatibilities.

In this guide I'll walk through the process I used for reinstalling OS X, then installing MacPorts and using it to install MySQL.

Note: I've shortened this up a bunch since it was first posted (originally it was using PHP 5.2 from MacPorts). I also want to make it clear that I am familiar with MAMP but would rather punch myself in the face than use it. If you'd like to, go right ahead since it's probably easier—and as evidenced by the commenters below—you're in good company. But I'm going to continue to compile my own so I know where everything ends up.

A Note for Those Upgrading From 10.5

One thing to note before we start. These instructions assume a clean installation. Apple doesn't come right out and say it but the $29 10.6 disc can be used for new installations or upgrades.

If you followed my previous guides for compiling PHP and Apache I'd recommend the following upgrade procedure. I want to be very clear that this worked fine for me on two computers but I won't take any responsibility if it doesn't work as well for you. Consider yourself warned.

  1. Use something like SuperDuper to make a bootable back up of your system to an external drive.
  2. Boot off the external drive (holding down the option key will let you choose the boot volume) and ensure that everything works correctly.
  3. Unplug your backup drive.
  4. Insert the OS X DVD and boot into the installer.
  5. Use the Disk Utility to erase your computer's hard drive.
  6. Install OS X
  7. After the reboot, re-attach your backup drive and use the Migration Assistant to restore your Users, Applications and Settings, but uncheck the Other files and folders option.
  8. Manually move any other files you still need, which may include MySQL databases in /opt/local/var/db/mysql5/.

At this point you should have a clean installation with the majority of your data migrated. I'd suggest keeping the backup drive around for a while in case you find that you've missed something.

Install XCode

Install the XCode package from the Optional Installs directory on the install DVD.

Install MacPorts

Follow the directions to install Mac Ports. As of early November 2010, due to dependency issues, you'll need to install the Java for Mac OS X 10.6 Update 3 Developer Package before you can install most ports.

Become root

To follow these instructions you need to be running as the root user using the default sh shell. If you've got administrator permissions you can open up a Terminal window and switch users using the sudo command then provide your password.

[email protected]:~% sudo su
Password:
sh-3.2#

Install MySQL

Use port to install MySQL:

/opt/local/bin/port install mysql5-server

You'll need to create the databases:

/opt/local/bin/mysql_install_db5 --user=mysql

Let launchd know it should start MySQL at startup.

/opt/local/bin/port load mysql5-server

Secure the server and set a new admin password:

/opt/local/bin/mysql_secure_installation5

Create a configuration file:

cp /opt/local/share/mysql5/mysql/my-large.cnf /etc/my.cnf

Edit /etc/my.cnf using your editor of choice and make the following changes to the [mysqld] section:

  • Change the maximum packet size to 16M:

    max_allowed_packet = 16M

  • Enable network access by ensuring the first line is commented out, and add the second line to limit access to the localhost:

    #skip-networking
    bind-address = 127.0.0.1

Restart MySQL to have the settings changes take effect:

/opt/local/etc/LaunchDaemons/org.macports.mysql5/mysql5.wrapper restart

A last, optional, step is to create symlinks so the mysql5 executable can be invoked as mysql and mysqldump5 as mysqldump:

ln -s /opt/local/bin/mysql5 /opt/local/bin/mysql
ln -s /opt/local/bin/mysqldump5 /opt/local/bin/mysqldump

PHP

You need to create a php.ini file:

if ( ! test -e /private/etc/php.ini ) ; then cp /private/etc/php.ini.default /private/etc/php.ini; fi

Now open /private/etc/php.ini and set the correct location for MySQL's socket by finding:

mysqli.default_socket = /var/mysql/mysql.sock

And changing it to:

mysqli.default_socket = /opt/local/var/run/mysql5/mysqld.sock

Repeat for both mysql.default_socket and pdo_mysql.default_socket.
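To confirm PHP is actually picking up the new socket paths, a quick throwaway script like this can help; the database name and credentials below are placeholders, so substitute your own:

<?php
// Hypothetical sanity check: with host=localhost the MySQL driver uses the
// socket, so a wrong pdo_mysql.default_socket typically shows up as a
// "No such file or directory" connection error.
print 'Socket: ' . ini_get('pdo_mysql.default_socket') . "\n";
try {
  $dbh = new PDO('mysql:host=localhost;dbname=test', 'root', 'your-password');
  print "Connected.\n";
}
catch (PDOException $e) {
  print 'Connection failed: ' . $e->getMessage() . "\n";
}
?>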

While you're editing php.ini you might as well set the timezone to avoid warnings. Locate the date.timezone setting, uncomment it (by removing the semicolon at the beginning of the line) and fill in the appropriate timezone:

date.timezone = America/New_York

Enable PHP by opening /private/etc/apache2/httpd.conf in the editor of your choice and making the following changes.

  • Uncomment this line:

    #LoadModule php5_module        libexec/apache2/libphp5.so

  • Find and change this one:

        DirectoryIndex index.html

    To this:

        DirectoryIndex index.php index.html

Then restart Apache:

apachectl graceful

XDebug

Totally optional steps here.

Use pecl to install XDebug.

pecl install xdebug

Edit your /etc/php.ini (copying one of the sample .ini files if you haven't already created it) and add the following lines:

zend_extension="/usr/lib/php/extensions/no-debug-non-zts-20090626/xdebug.so"
xdebug.profiler_enable_trigger = 1

My VirtualHost Setup

I like being able to have multiple Drupal sites a few keystrokes away so I create virtual hosts for d5, d6 and d7 using the following procedure.

Edit /etc/apache2/users/amorton.conf and add a VirtualHost to the Apache config:

# This should really be in httpd.conf but i'm keeping it simple by doing it here:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName d6
    DocumentRoot /Users/amorton/Sites/d6
    <Directory /Users/amorton/Sites/d6>
        AllowOverride All
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerName d7
    DocumentRoot /Users/amorton/Sites/d7
    <Directory /Users/amorton/Sites/d7>
        AllowOverride All
    </Directory>
</VirtualHost>

Obviously you'd want to replace amorton with your username.

Add entries to the /private/etc/hosts file:

127.0.0.1       d6
127.0.0.1       d7

Aug 25 2009
Aug 25

There was a great question on the Drupal developers mailing list the other day—one I've "rediscovered" the solution to a few times—so I wanted to make sure that everyone was aware of it.

The basic question is:

When a node is being saved, how can you see what values have changed?

The short answer is:

Use the 'presave' operation to load a copy of the node before it's saved, stick it back into the node object, and in your 'update' operation code compare the "before" and "after" versions:

<?php
/**
 * Implementation of hook_nodeapi().
 */
function example_nodeapi(&$node, $op, $a3, $a4) {
  // We want to compare nodes with their previous versions. Ignore new
  // nodes with no nid since there's no previous version to load.
  if ($op == 'presave' && !empty($node->nid)) {
    // We don't want to collide with values set by other modules so we'll
    // use the module name as a prefix and a long name to be safe.
    $node->example_presave_node = node_load($node->nid);
  }
  elseif ($op == 'update') {
    // On update we pull the previous version out of the node and compare
    // it to the newly saved one.
    $presave = $node->example_presave_node;
    // Pretend we're comparing a single value CCK number field here.
    $field_name = 'field_example';
    if ($node->$field_name != $presave->$field_name) {
      drupal_set_message(
        t("The node's value changed from %previous to %current.", array(
          '%previous' => $presave->{$field_name}[0]['value'],
          '%current' => $node->{$field_name}[0]['value'],
        ))
      );
    }
  }
}
?>

Aug 21 2009
Aug 21

For some reason I find myself rewriting this little bit of code every time I need to update a bunch of nodes on a site. Going to post it here to save myself some time. Be aware that this might time out if you've got a large number of nodes; it's designed for up to a couple hundred nodes:

<?php
// TODO: Set your basic criteria here:
$result = db_query("SELECT n.nid FROM {node} n WHERE n.type = '%s'", array('task'));
while ($row = db_fetch_array($result)) {
  $node = node_load($row);
  if ($node->nid) {
    $node->date = $node->created;
    // TODO: Test and set your own value here:
    if (empty($node->field_task_status[0]['value'])) {
      $node->field_task_status[0]['value'] = 'active';
      $node = node_submit($node);
      node_save($node);
      drupal_set_message(t('Updated <a href="!url">%title</a>.', array('!url' => url('node/'. $node->nid), '%title' => $node->title)));
    }
  }
}
?>

Addison Berry on Herding Cats in the Drupal Documentation Community

Jun 14 2009
Jun 14

Addison Berry, aka @add1sun, presented about her experience as documentation lead for the Drupal content management system project the other day at the Writing Open Source conference in Owen Sound. In her role as chief cat-herder, she found that the most difficult people aren't poisonous. Instead they just don't know how to communicate with the community, and they need to translate where they're coming from to the way the community operates. It's hard work, she reports, to turn them into a contributor. She referred the audience to the "Poisonous People" presentation by the Subversion people, as yet unwatched by yours truly.

Addison talked about religious wars that occasionally break out. That is, the crux of the issue is more important than the resolution, and often leads to inaction. She also discussed the differences between recruiting in the corporate world and recruiting in the open source world. Private companies hire a skillset that they can filter for by listing the job requirements, either explicitly or implied. In open source, she says, you have the skillset first and you work with it. Many cats scratch their own itch, hence the herding to get them to scratch the community's itches too. The people you get working on a project have a rich background, both in terms of skills and life history. Skillsets include a lot of non-technical backgrounds in open source (Addison has an anthropology degree, for example, and my education is in political science).

Drupal has a large mass of documentation, and Addison is trying to whip up energy in managing the base of existing documentation for Drupal 5 and 6 while gearing up for writing the documentation for the upcoming Drupal 7.

Open source has a natural passion that brings people together. Showing the example of a rowing team on her slide illustrated the need to hire a coach to tell them when to row. Herding involves keeping lines of communication open and opening up new ones, as well as banging on pots about documentation. Instead of telling people what they can do, empower them by including them in the conversation. Addison, as leader, knows what she won't do and has so far been able to find people who will. Tracking metrics around the documentation—answering a question I had before I had the chance to ask it—Addison is not interested in, but she found someone who is. Many "soft skills", such as facilitation, have come in handy even if the person with the skill does not claim membership in the software community. Also, universities and their students have found time and energy to contribute usability testing as part of course credit or as part of their graduate studies.

Letting go and getting out of the way: Addison wanted the vision to be perfect, but quickly understood that she can't lead the charge or drag it out all the time; instead she recognized the need to let people run with things and support them. Getting people to trust you that that's the right direction.

Attending Writing Open Source June 12th to 14th

Jun 04 2009
Jun 04

In a week, I will attend the Writing Open Source conference in Owen Sound, Ontario. I'm excited to meet some of my colleagues in the field of open source documentation, having written the bulk of the support materials for Bryght, the Drupal-powered hosted service. I'm particularly interested in meeting those working to document open source tools other than Drupal, to gain some perspective on what's out there and what's needed.

Writing documentation was my first task at Bryght back in 2004. I recall spending part of that Christmas break furiously jotting down the important steps to creating dynamic and community websites. This included checklists, instructions and descriptions of module settings and how people could take advantage of them. The initial push of documentation made the subsequent job of supporting customers easy: instead of having to explain how to do something each time, I quickly pointed to the documentation, either through a link or a copy & paste. Along the way I even heard from non-customers thanking me for the handy references. After the second time someone asked, we documented the answer. (We even wrote documentation after the first time someone asked a question.) Sometimes it didn't work, and sometimes the documentation wasn't all that great or was hard to find. We allowed comments, opened the forums, and listened to feedback when what we wrote didn't make a whole lot of sense. That's the experience I'd like to share with the conference, and I'd like to hear of others' experiences in making complex software more understandable.

After the weekend conference, I'll spend a couple of full days in Toronto proper, getting some much needed distance from Vancouver. I'd like to meet with some of the Toronto Drupal heads, and others I know (but haven't met) from other online communities I'm part of. Sadly, my favourite baseball team, the Toronto Blue Jays, play on the road in late June. Surely a local pub will have the games in HD?

The themes at Writing Open Source have a lot in common with two sessions I attended at FSOSS (the Free and Open Source Symposium) in 2006. I wrote two well-received pieces about the symposium, both notes on sessions at the conference:

(Audio and video for both presentations are available at http://fsoss.senecac.on.ca/2006/recordings/)

I'm looking forward to the sessions in Owen Sound next week, and to sharing what I learn there!

Jun 02 2009
Jun 02

My rule of thumb for deciding what to post on this blog has been to document anything I've spent more than an hour trying to figure out. Today I've got a good one for anyone trying to create CCK fields as part of a module's installation process.

Back in Drupal 5 the Station module was made up of a lot of custom code to track various values like a playlist's date or a program's genre and DJs. During the upgrade to Drupal 6 I migrated that data into locked CCK fields that were created when the module was installed. As people started to install the 6.x version of the module I began getting strange bug reports about the Station Schedule that I couldn't seem to replicate on my machine.

Eventually, after trying it on a fresh installation, I discovered the problem was that its fields weren't being created correctly by the hook_install() implementation when CCK and/or the field modules were installed at the same time as the Station modules. That meant a user who set up a new Drupal site, downloaded all the modules, checked the Station Schedule check box on the module list and let Drupal figure out the dependencies from the .info files would think the modules had installed correctly, but they'd actually be missing several required field instances, which would cause errors down the line. My first response to this problem was to add a hook_requirements() implementation that prevented the Schedule from being installed at the same time as the other modules:

<?php
/**
 * Implementation of hook_requirements().
 */
function station_schedule_requirements($phase) {
  $requirements = array();
  $t = get_t();
  if ($phase == 'install' && !module_exists('userreference')) {
    $requirements['station_schedule_userreference'] = array(
      'description' => $t('Sadly the Station Schedule cannot be installed until the User Reference module has been fully installed. User Reference should now be installed, so please try installing Station Schedule again.'),
      'severity' => REQUIREMENT_ERROR,
    );
  }
  return $requirements;
}
?>

This at least removed the "Surprise, you've got a broken site!" element, but it was annoying to have to reinstall the module. When I realized that it wasn't just the Schedule that was suffering from this problem—but also the Program and Playlist modules—I decided to look for a better solution.

After six hours of debugging via print statement—technically the Devel module's dsm() function (yes, I know the time would have been better spent figuring out how to get a proper PHP debugger running on OS X)—I found it boiled down to two issues:

  1. The field's columns weren't being populated because the fields' .module files weren't being included.
  2. CCK uses drupal_write_record() to record the field information but it was failing because content_schema() wasn't being called.

The first was simple enough to correct: I could manually include the module files. The second was much trickier: drupal_get_schema() calls module_implements() so that it only returns schema information for enabled modules, but drupal_install_modules() installs the group of modules and then enables the group. I was expecting that when hook_install() was called the required modules would be both installed and enabled. So in order to create my fields in station_schedule_install() I'd need to get CCK and the fields enabled first. Feeling close to one of those head-slapping moments, I started studying module_enable() and realized it seemed safe to call from within a hook_install() implementation. It had the added bonus of including the module, which solved the first problem.

I love it when you figure out the right way to do something and it turns out to also be the short way. It's really this simple:

<?php
/**
 * Implementation of hook_install().
 */
function station_schedule_install() {
  drupal_install_schema('station_schedule');

  // To deal with the possibility that we're being installed at the same time
  // as CCK and the field modules we depend on, we need to manually enable
  // the modules to ensure they're available before we create our fields.
  module_enable(array('content', 'userreference'));

  $dj_field = array(
    // FIELD DEFINITION OMITTED.
  );

  // Create the fields.
  module_load_include('inc', 'content', 'includes/content.crud');
  content_field_instance_create($dj_field);
}
?>

As always, I hope this saves someone else some trouble.

Mar 19 2009
Mar 19

For some work projects we've started making all the configuration changes via update functions. These get checked into version control and from there deployed to the staging site for testing, and then eventually deployed on the production site. The nice thing about update functions is that you can test them on staging and be sure that exactly the same changes will occur on the production site.

Here are a few examples; I'll continue to update this post as I collect more good ones.

Installing a module

Simple one liner to enable several modules:

<?php
function foo_update_6000(&$sandbox) {
  $ret = array();
  drupal_install_modules(array('devel', 'devel_node_access'));
  return $ret;
}
?>

Batch based update to regenerate PathAuto aliases

More elaborate update that uses the BatchAPI to avoid timeouts while regenerating the path aliases for two node types:

<?php
function foo_update_6000(&$sandbox) {
  $ret = array();

  if (!isset($sandbox['progress'])) {
    // Set the patterns
    variable_set('pathauto_node_foo_pattern', 'foo/view/[nid]');
    variable_set('pathauto_node_bar_pattern', 'bar/view/[nid]');

    // Initialize batch update information.
    $sandbox['progress'] = 0;
    $sandbox['last_processed'] = -1;
    $sandbox['max'] = db_result(db_query("SELECT COUNT(*) FROM {node} n WHERE n.type IN ('foo', 'bar')"));
  }

  // Fetch a group of node ids to update.
  $nids = array();
  $result = db_query_range("SELECT n.nid FROM {node} n WHERE n.type IN ('foo', 'bar') AND n.nid > %d ORDER BY n.nid", array($sandbox['last_processed']), 0, 50);
  while ($node = db_fetch_object($result)) {
    $nids[] = $node->nid;
  }

  if ($nids) {
    // Regenerate the aliases for the nodes.
    pathauto_node_operations_update($nids);

    // Update our progress information for the batch update.
    $sandbox['progress'] += count($nids);
    $sandbox['last_processed'] = end($nids);
  }

  // Indicate our current progress to the batch update system. If there's no
  // max value then there's nothing to update and we're finished.
  $ret['#finished'] = empty($sandbox['max']) ? 1 : ($sandbox['progress'] / $sandbox['max']);

  return $ret;
}
?>

Change node settings

Make a few changes to the node type settings:

<?php
function foo_update_6001() {
  $ret = array();

  // Change the teaser label to 'teaser text'.
  $ret[] = update_sql("UPDATE content_node_field_instance SET label = 'Teaser text' WHERE field_name = 'field_teaser'");

  // Change the 'description' and 'biography' labels to 'body text'.
  $ret[] = update_sql("UPDATE content_node_field_instance SET label = 'Body text' WHERE field_name IN ('field_description', 'field_bio')");

  // Rename the front node type 'Front Page' to 'Front Page Configuration'.
  $ret[] = update_sql("UPDATE node_type SET name = 'Front Page Configuration' WHERE type = 'front'");

  return $ret;
}
?>

Delete a bunch of views

I exported the site's views into a default views and needed to remove the existing ones from the database.

<?php
function foo_update_6001(&$sandbox) {
  $ret = array();

  // Since we're shipping default views, delete the versions from the database.
  if (!isset($sandbox['progress'])) {
    // Initialize batch update information.
    $sandbox['progress'] = 0;
    $sandbox['views'] = array(
      'season',
      // ...
      'nodequeue_1',
    );
    $sandbox['max'] = count($sandbox['views']);
  }

  module_load_include('module', 'views');
  $view_id = $sandbox['views'][$sandbox['progress']];
  if ($view = views_get_view($view_id)) {
    $view->delete();
    $view->destroy();
  }
  $sandbox['progress']++;

  // Indicate our current progress to the batch update system. If there's no
  // max value then there's nothing to update and we're finished.
  $ret['#finished'] = empty($sandbox['max']) ? 1 : ($sandbox['progress'] / $sandbox['max']);

  return $ret;
}
?>

Feb 06 2009
Feb 06

I wasted more time than I want to admit trying to figure this out. I was trying to theme a specific CCK field named field_images on all the nodes where it appears. The devel_themer module was listing content-field-field_images.tpl.php as a candidate:

But after copying CCK's content-field.tpl.php into my theme and renaming it, I couldn't seem to get the theme to pick it up. Roger López gave me the frustratingly simple answer on IRC: "i think you need to have both templates in place"... duh. I copied content-field.tpl.php into my theme alongside the renamed one and everything worked great.
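For reference, the field-specific template can start life as an almost verbatim copy of CCK's default and then be trimmed down. Something along these lines is enough to, say, wrap each image in its own class; the variable names ($items, $item['view'], $label, $label_display) are from memory of CCK's content-field.tpl.php, so double-check them against the copy that ships with your version:

<?php // content-field-field_images.tpl.php -- a trimmed-down sketch, not the shipped template. ?>
<div class="field field-field-images">
  <?php if ($label_display == 'above'): ?>
    <div class="field-label"><?php print t($label); ?>:</div>
  <?php endif; ?>
  <div class="field-items">
    <?php foreach ($items as $delta => $item): ?>
      <div class="field-item my-image-wrapper"><?php print $item['view']; ?></div>
    <?php endforeach; ?>
  </div>
</div>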

Dec 24 2008
Dec 24

Finding myself in need of a PostgreSQL server to test some patches for Drupal core, I've decided to do a follow up to my guide to getting PHP + GD + MySQL installed on OS X.

Fortunately for me John VanDyk wrote up Beginning with Drupal 6 and PostgreSQL on OS X 10.5 Leopard which covers the nitty gritty of getting PostgreSQL server installed. He doesn't address recompiling PHP so I'll pick up the story there.

Last Updated: June 1, 2009

First, a couple of notes on the formatting of this guide. The blocks of shell commands typically include the shell prompt (sh-3.2#); when you're copying and pasting a command, make sure you don't grab that part. Since this is long enough, I've omitted large blocks of the compiler output and used [...] to indicate the omission.

Switch to the root account

To follow these instructions you need to be running as the root user using the default sh shell. If you've got the correct administrator permissions you can switch users using the sudo command and providing your password.

[email protected]:~% sudo su
Password:
sh-3.2#

Install MacPorts

Since you have already followed John VanDyk's guide you'll have Mac Ports installed and only need to install a couple of extra packages.

Use port to grab a copy of wget and the GD dependency, jpeg, as well as freetype and t1lib for rendering fonts. You might see some other sub-dependencies installed during the process:

sh-3.2# /opt/local/bin/port install wget +ssl freetype t1lib jpeg
--->  Fetching expat
--->  Attempting to fetch expat-2.0.1.tar.gz from http://downloads.sourceforge.net/expat

[...]

--->  Fetching jpeg
--->  Verifying checksum(s) for jpeg
--->  Extracting jpeg

--->  Applying patches to jpeg
--->  Configuring jpeg
--->  Building jpeg with target all
--->  Staging jpeg into destroot
--->  Installing jpeg 6b_2
--->  Activating jpeg 6b_2
--->  Cleaning jpeg

Recompile Apache

Rather than just installing Apache from MacPorts, I want to rebuild it in the native OS X locations so I can use the built-in support. I tried to skip over this step but after wasting a bunch of time finally realized it was important. Grab the latest version of Apache 2.2:

sh-3.2# cd /tmp

sh-3.2# wget http://ftp.wayne.edu/apache/httpd/httpd-2.2.11.tar.bz2
--2008-12-15 18:04:55--  http://ftp.wayne.edu/apache/httpd/httpd-2.2.11.tar.bz2
Resolving ftp.wayne.edu... 141.217.1.55
Connecting to ftp.wayne.edu|141.217.1.55|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5230130 (5.0M) [application/x-tar]
Saving to: `httpd-2.2.11.tar.bz2'

100%[======================================>] 5,230,130    453K/s   in 9.1s   

2008-12-15 18:05:04 (561 KB/s) - `httpd-2.2.11.tar.bz2' saved [5230130/5230130]

Extract it:

sh-3.2# bunzip2 httpd-2.2.11.tar.bz2

sh-3.2# tar xf httpd-2.2.11.tar

Compile it:

sh-3.2# cd httpd-2.2.11

sh-3.2# ./configure --enable-layout=Darwin --enable-mods-shared=all

[...]

sh-3.2# make install

Recompile PHP

Download the latest PHP source:

sh-3.2# cd /tmp

sh-3.2# wget http://us3.php.net/get/php-5.2.9.tar.bz2/from/this/mirror
--2009-05-19 12:40:42--  http://us3.php.net/get/php-5.2.9.tar.bz2/from/this/mirror
Resolving us3.php.net... 209.41.74.194
Connecting to us3.php.net|209.41.74.194|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://us3.php.net/distributions/php-5.2.9.tar.bz2 [following]
--2009-05-19 12:40:43--  http://us3.php.net/distributions/php-5.2.9.tar.bz2
Connecting to us3.php.net|209.41.74.194|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10203122 (9.7M) [application/octet-stream]
Saving to: `php-5.2.9.tar.bz2'

100%[======================================>] 10,203,122   308K/s   in 34s    

2009-05-19 12:41:16 (296 KB/s) - `php-5.2.9.tar.bz2' saved [10203122/10203122]

Extract it:

sh-3.2# bunzip2 php-5.2.9.tar.bz2

sh-3.2# tar xf php-5.2.9.tar

Compile it:

sh-3.2# cd php-5.2.9

sh-3.2# MACOSX_DEPLOYMENT_TARGET=10.5 CFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -g -Os -pipe -no-cpp-precomp" CCFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -g -Os -pipe" CXXFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -g -Os -pipe" LDFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -bind_at_load"

sh-3.2# ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-apxs2=/usr/sbin/apxs --with-config-file-path=/private/etc --sysconfdir=/private/etc --enable-cli --with-curl=/opt/local --enable-ftp --enable-mbstring --enable-mbregex --enable-sockets --with-ldap=/usr --with-ldap-sasl --with-kerberos=/usr --with-mime-magic=/etc/apache2/magic --with-zlib-dir=/usr --with-xmlrpc --with-xsl=/usr --without-iconv \
--with-gd --with-png-dir=/usr/X11R6 --with-xpm-dir=/usr/X11R6 --with-jpeg-dir=/opt/local --enable-exif \
--with-freetype-dir=/opt/local --with-t1lib=/opt/local \
--enable-pdo --with-pgsql=/opt/local/lib/postgresql83 --with-pdo-pgsql=/opt/local/lib/postgresql83

[...]

Thank you for using PHP.

sh-3.2# make install

Test that PHP has the GD and PostgreSQL modules installed:

sh-3.2# php -m |grep "gd\|pgsql"
gd
pdo_pgsql
pgsql

If you don't already have a php.ini file you'll need to create one by copying the default:

sh-3.2# if ( ! test -e /private/etc/php.ini ) ; then cp /private/etc/php.ini.default /private/etc/php.ini; fi

Restart Apache:

sh-3.2# apachectl restart

Clean up

Once you've got everything working correctly you can remove the source code:

sh-3.2# rm -r /tmp/php-5.2.* /tmp/httpd-2.2.*


This is actually pretty optional; when you reboot, OS X cleans out the temp directory.
Dec 19 2008
Dec 19

After getting sick of closing issues in various modules' issue queues that boiled down to people not knowing how to use Views 2's relationship feature, I decided to make a screencast explaining it:

I think I need to get a microphone, and figure out all the features of the tool I was using, but I'm excited to do more of these.

Update: The GotDrupal folks have a much more detailed—and more understandable—screencast on this topic: http://gotdrupal.com/videos/drupal-views-relationships

Aug 14 2008
Aug 14

I've never blogged about Drupal Docs.

It's a project to give Drupal documentation a new home and expand its power. The most important things here are:

  1. Get a nice place for documentation and make it easier for people to find what they want.
  2. Have a more organized distinction between different types of documentation content - handbooks, tutorials, tips and videocasts. If we have more space, why not get things organized? It will allow more concise handbooks and many many more tutorials and tips. Not to mention the videocast library, of course!
  3. Enable translation. As a non-native English speaker I know how much this matters for people who don't speak English. After all, Drupal is for everybody and meant to empower people all over the world.

This project does not involve writing or rewriting documentation, or restructuring handbooks. I'm not part of the Drupal documentation team and the content is not my concern at all. The documentation team does an amazing job, and Drupal Docs is only a bigger, nicer home for them.

I'm developing a prototype at http://dev.chuva-inc.com/projetos/docs/ and everyone is welcome to see and give their opinion.

Most exciting things already implemented in this prototype:

  • Translation translation translation!
  • Different content types: handbooks, tips, tutorials, videocasts

Let's chat about it in Szeged?

Jul 13 2008
Jul 13

So, building on my last post about creating CCK fields, here's some code I whipped up to migrate from D6's core upload.module to filefield.module. This isn't general purpose code but might help someone else out. The catch is that I'd built a video node type and was using the upload module to attach exactly two files, an image and a video. The new node will have separate thumbnail and video fields. If you'll be moving to a multi-value field this code won't work for you.

The gist is the same as before: set up your field for the video and your field for the images, then export using:

<?php
var_export(content_fields('field_web_video', 'video'), 1);
?>

and

<?php
var_export(content_fields('field_video_thumb', 'video'), 1);
?>


Then roll that into an update function that also moves the file data around in the database. Code is after the jump.

<?php
/**
 * Add filefields to the video nodes and migrate the files.
 */
function foo_video_update_6000() {
  // Make sure the filefield* modules are installed correctly.
  drupal_install_modules(array('filefield', 'filefield_image', 'filefield_imagecache'));
  drupal_flush_all_caches();

  module_load_include('inc', 'content', 'includes/content.admin');
  content_alter_db_cleanup();

  // Need to load the CCK include file where content_field_instance_create() is defined.
  module_load_include('inc', 'content', 'includes/content.crud');

  $thumb_field = array(
    //
    // DROPPED THE CCK FIELD DEFINITION FROM HERE
    //
  );
  content_field_instance_create($thumb_field);

  $video_field = array(
    //
    // DROPPED THE CCK FIELD DEFINITION FROM HERE
    //
  );
  content_field_instance_create($video_field);

  // Migrate the videos
  $fids = array();
  $result = db_query("SELECT n.nid, n.vid, f.fid, u.description, u.list FROM {files} f INNER JOIN {upload} u ON f.fid = u.fid INNER JOIN {node} n ON u.vid = n.vid WHERE n.type = 'video' AND filemime LIKE 'video/%'");
  while ($file = db_fetch_object($result)) {
    $fids[] = $file->fid;
    // Check for a record... it adds a bunch more queries but it's simple and we only run this once.
    if (db_result(db_query("SELECT COUNT(*) FROM {content_type_video} WHERE vid = %d", $file->vid))) {
      db_query("UPDATE {content_type_video} SET field_web_video_fid = %d, field_web_video_description = '%s', field_web_video_list = %d WHERE vid = %d",
        $file->fid, $file->description, $file->list, $file->vid);
    }
    else {
      db_query("INSERT INTO {content_type_video} (nid, vid, field_web_video_fid, field_web_video_description, field_web_video_list) VALUES (%d, %d, %d, '%s', %d)",
        $file->nid, $file->vid, $file->fid, $file->description, $file->list);
    }
  }
  db_query("DELETE FROM {upload} WHERE fid IN (". db_placeholders($fids, 'int') .")", $fids);

  // Migrate the images
  $fids = array();
  $result = db_query("SELECT n.nid, n.vid, f.fid, u.description, u.list FROM {files} f INNER JOIN {upload} u ON f.fid = u.fid INNER JOIN {node} n ON u.vid = n.vid WHERE n.type = 'video' AND filemime LIKE 'image/%'");
  while ($file = db_fetch_object($result)) {
    $fids[] = $file->fid;
    // Check for a record... it adds a bunch more queries but it's simple and we only run this once.
    if (db_result(db_query("SELECT COUNT(*) FROM {content_type_video} WHERE vid = %d", $file->vid))) {
      db_query("UPDATE {content_type_video} SET field_video_thumb_fid = %d, field_video_thumb_description = '%s', field_video_thumb_list = %d WHERE vid = %d",
        $file->fid, $file->description, $file->list, $file->vid);
    }
    else {
      db_query("INSERT INTO {content_type_video} (nid, vid, field_video_thumb_fid, field_video_thumb_description, field_video_thumb_list) VALUES (%d, %d, %d, '%s', %d)",
        $file->nid, $file->vid, $file->fid, $file->description, $file->list);
    }
  }
  db_query("DELETE FROM {upload} WHERE fid IN (". db_placeholders($fids, 'int') .")", $fids);

  // No more uploads on video nodes!
  variable_set('upload_video', 0);

  return array();
}
?>

Update: I posted some additional info on this topic over on: http://drupal.org/node/292904


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
