Mar 02 2020

As of Drupal 8.7, the Media and Media Library modules can be enabled and used out-of-box. Below, you'll find a quick tutorial on enabling and using these features.

Out-of-box before Media and Media Library

In the past there were two different ways to add an image to a page.

  1. An image could be added via a field, with the developer given control over its size and placement:

    Image field before media library
  2. An image could be added via the WYSIWYG editor, with the editor given some control over its size and placement:

    Image field upload choices screen

A very straightforward process, but these images could not be reused, as they were not part of a reusable media library.

Reusing uploaded media before Drupal 8.7

Overcoming image placement limitations in prior versions of Drupal required the use of several modules, a lot of configuration, and time. Sites could be set up to reference a media library that allowed editors to select and reuse images that had previously been uploaded, which we explained here.

This was a great time to be alive.

What is available with Media Library

Enabling the Media and Media Library modules extends a site's image functionality. First, ensure that the Media and Media Library core modules are enabled. 

Enable media library in drupal

A media entity reference field must be used with the Media Library. It will not work with a regular image field out-of-box.

Image field on manage display page

On the Manage form display page, select the "Media library" widget.

Media library widget on manage display page

On the "Node Add" and "Node Edit" forms, you'll see the following difference between a regular image field and a field connected to the media library.

Media library field on node edit

Click on “Add media” and you’ll see a popup with the ability to add a new image to the library or to select an image that is already in the library.

Media field grid

If the field is configured to allow multiple media types, you'll see vertical tabs for each media type.

Media grid with multiple media types

WYSIWYG configuration

Configuring the media library for a specific text format's WYSIWYG editor requires a few steps. First, a new icon appears: a musical note overlapping the image icon. This should be added to the active toolbar, and the regular image icon should be moved to the available buttons.

wysiwyg toolbar configuration

Under "Enabled filters," enable "Embed media." Under the filter settings, the allowed media types and view modes can be chosen in vertical tabs. Once that configuration is saved, the WYSIWYG editor will offer the same popup dialog for adding a new image to the media library or selecting an already-uploaded image.

wysiwyg media configuration

Once you are on a "Node Add" or "Node Edit" page with a WYSIWYG element, you'll see the media button (image icon plus musical note).

Media button on wysiwyg editor

Clicking on the media button brings up the same, familiar popup that we saw earlier from the image field:

media library grid

This article is an update to a previous explainer from last year. 

Aug 22 2016

As you'd be aware by now, Drupal 8 features lots of refactoring of procedural form code to object-oriented code.

One such refactoring was the way forms are built, validated and executed.

One cool side-effect of this is that you can now build and test a form with a single class.

Yep that's right, the form and the test are one and the same - read on to find out more.


Firstly kudos here to Tim Plunkett who pointed this out to me, and to all of those who championed much of the refactoring that made this even possible.

Testing forms and elements in Drupal 7

In Drupal 7, to test a form or an element you need the following:

  • A test module with:
    • A hook_menu entry
    • A form callback
    • (optional) A validate callback
    • (optional) A submit callback
  • A web-test (a test that extends from DrupalWebTestCase)

Drupal 8 forms

As you're probably aware from all the example code and posts out there, forms in Drupal 8 are objects that implement Drupal\Core\Form\FormInterface.

Luckily, you can write a test that both extends KernelTestBase (Drupal\KernelTests\KernelTestBase) and implements FormInterface. This means you don't need all of the additional routing plumbing you needed in Drupal 7.

Let's look at an example in Drupal - PathElementFormTest (\Drupal\KernelTests\Core\Element\PathElementFormTest). This test covers core's PathElement (\Drupal\Core\Render\Element\PathElement) - a plugin that provides a form element where users can enter a path that can be optionally validated and stored as either a \Drupal\Core\Url value object or an array containing a route name and route parameters pair. So in terms of testing, the bulk of the logic in the element plugin is contained in the #element_validate callback - PathElement::validateMatchedPath.

There are several different combinations of configuration for the path element as follows:

  • Required and validated with no conversion
  • Required and non validated with no conversion
  • Optional and validated with no conversion
  • Optional, validated and converted into a route name/parameter pair
  • Required, validated and converted into a route name/parameter pair
  • Required, validated and converted into a Url object

So we need to set up several instances of the element on a test form.

So because our test extends from KernelTestBase, but also implements FormInterface, we just build a normal form array with all of these configurations in our implementation of FormInterface's ::buildForm method - see \Drupal\KernelTests\Core\Element\PathElementFormTest::buildForm to see how this is done. We're not interested in doing any additional validation or submission, so our implementation of FormInterface's ::submitForm and ::validateForm can be blank.
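For reference, here is an abridged sketch of what such a test class looks like. Only two of the six element configurations are shown, and this is a simplified reconstruction of the pattern rather than the exact core code:

```php
<?php

namespace Drupal\KernelTests\Core\Element;

use Drupal\Core\Form\FormInterface;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Render\Element\PathElement;
use Drupal\KernelTests\KernelTestBase;

class PathElementFormTest extends KernelTestBase implements FormInterface {

  public function getFormId() {
    return 'test_path_element';
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    // One path element per configuration under test.
    $form['required_validate'] = array(
      '#type' => 'path',
      '#required' => TRUE,
      '#title' => 'required_validate',
      '#convert_path' => PathElement::CONVERT_NONE,
    );
    $form['required_validate_route'] = array(
      '#type' => 'path',
      '#required' => TRUE,
      '#title' => 'required_validate_route',
      '#convert_path' => PathElement::CONVERT_ROUTE,
    );
    $form['submit'] = array('#type' => 'submit', '#value' => 'Submit');
    return $form;
  }

  // No extra validation or submission logic is needed, so these are blank.
  public function validateForm(array &$form, FormStateInterface $form_state) {}

  public function submitForm(array &$form, FormStateInterface $form_state) {}

}
```

The test methods then live on the same class, alongside the form callbacks.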

Testing the element behaviour

So to test that the element validation works as expected for each of the fields, we need to trigger submission of the form. In a web-test, we'd use the internal test browser to visit the form on a route and then use a method like BrowserTestBase::submitForm to actually submit the form. But as we're using a kernel test here, there is no internal browser - so instead we can submit directly through the form_builder service (\Drupal\Core\Form\FormBuilderInterface). The code in PathElementFormTest looks something like this:

$form_state = (new FormState())
  ->setValues(array(
    'required_validate' => 'user/' . $this->testUser->id(),
    'required_non_validate' => 'magic-ponies',
    'required_validate_route' => 'user/' . $this->testUser->id(),
    'required_validate_url' => 'user/' . $this->testUser->id(),
  ));
$form_builder = $this->container->get('form_builder');
$form_builder->submitForm($this, $form_state);

So firstly we're building a new FormState object and setting the submitted values on it - this is just a key-value pair of form values. Then we're getting the form builder service from the container and submitting the form.

From here, if there were any errors, they'll be present on the form state object. So we can do things like check for expected errors, or check for expected values.

For example, to check that there were no errors.

$this->assertEquals(count($form_state->getErrors()), 0);

Or to check that the conversion occurred (i.e. the input path was upcast to a route name/parameter pair or Url object).

$this->assertEquals($form_state->getValue('required_validate_route'), array(
  'route_name' => 'entity.user.canonical',
  'route_parameters' => array(
    'user' => $this->testUser->id(),
  ),
));

Or to check for a particular error.

$this->assertEquals($errors, array('required_validate' => t('@name field is required.', array('@name' => 'required_validate'))));

Summing up

So why would you want to use this approach?

Well for one, the test is damn fast. Kernel tests don't do a full site install, and because there is no HTTP to fetch and submit the form, you get fast feedback. And when you get fast feedback, you're more likely to practice good test driven development.

So if you're building an element plugin for a contrib or client project, I encourage you to start with a test, or rather a form, or rather both. Specify the various configurations of your element and test the expected behaviour.

I'm sure you agree, this is another clear case of Drupal 8 for the win.

Drupal 8 Drupal Development Testing
Jun 27 2016

The Workflow Initiative was announced just over a month ago and since then I have been working on the first phases full-time. Here’s what I’ve touched:

In Drupal 8.1.x the RevisionLogInterface was introduced, but not used. In 8.2.x the Node entity will use it, and so will the BlockContent entity.

When we make everything revisionable we’ll need all entities to get their base fields from the base class. So from 8.2.x all content entities inherit their base fields.

To help people make sensible decisions, Node revisions will now be enabled by default.

We got into a pretty annoying issue when trying to add revision fields to entities which already had data, because the revision field couldn't be set to NOT NULL. Andrei wrote an awesome patch to allow an initial value to be taken from another field. We therefore set the revision ID from the entity ID.
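A hedged sketch of what that looks like when defining such a base field. The field name and the exact method shown here illustrate the approach rather than reproduce the committed patch:

```php
use Drupal\Core\Field\BaseFieldDefinition;

// Sketch: adding a NOT NULL revision ID field to an entity type that
// already has rows in its tables.
$fields['revision_id'] = BaseFieldDefinition::create('integer')
  ->setLabel(t('Revision ID'))
  // Existing rows get their initial value copied from the 'id' field,
  // so the new column can be NOT NULL even on populated tables.
  ->setInitialValueFromField('id');
```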

When we make everything revisionable we’ll need an upgrade path, to test this the Shortcut entity is being upgraded to revisionable by a new service. This has still not been committed, so reviews are welcome.

We’re trying to get Workbench Moderation into core as Content Moderation. Still lots to do to make this happen, but the patch is there with passing tests.

Initial work has also started to get an archive field into all entities. This will allow us to have CRAP (create, read, archive, purge) workflow, create a trash system, and replicate entity deletion through environments.

Dec 21 2015

For the last 6 months I’ve been helping Dick and Andrei with a number of Drupal modules to enhance the management of content.


Multiversion

This module enhances the Drupal core Entity API by making all content entities revisionable. Revisions are enabled by default and not optional. This means that edits to users, comments, taxonomy terms etc. are created as new revisions.

Another groundbreaking advance is that deleting any of these entities now just archives it. Delete is a flag in the entity object, and just like any other update, it creates a new revision.

The concept of workspaces has also been added; this allows a new instance of the site, from a content perspective, to be created. An example use case for workspaces would be to have a dev, stage and production workspace, and move content between them as it gets promoted through the workflow.


Trash

Now that deleting content entities just means a new revision marked as deleted, we need a way to recover or purge them. The Trash module is a UI on top of Multiversion allowing users to do just this.

Relaxed Web Services

Drupal 8 has always been about getting off the island; Relaxed Web Services furthers this by getting content off the island. It uses Drupal core's REST API to expose CouchDB-compatible endpoints. This means that replicating content is just a case of using CouchDB's replicator. Creating a decoupled Drupal site is then as simple as using PouchDB.

This works really well with Multiversion’s Workspaces, where each workspace is exposed as a separate CouchDB database.

CouchDB Replicator

So that we don't need to depend on CouchDB for replication, the replicator has been rewritten in PHP. This will allow us to replicate content from within Drupal or even via Drush.


Deploy

There is a long history for Deploy in Drupal, but now in Drupal 8 it's little more than a UI for the PHP-based CouchDB replicator. It allows replication of content between workspaces, between Drupal sites, and between CouchDB databases.


Mango

Something we're currently working on is Mango, inspired by MongoDB and based on Cloudant's implementation for CouchDB. Mango will allow querying for content entities over the Relaxed Web Services API. This is going to be very interesting to those creating decoupled sites because PouchDB supports the same querying API.

Dec 18 2015

Drupal Content Hub

Creating a Content Hub using CouchDB and Drupal via the Deploy module.

[embedded content]
Dec 18 2015

Deploy suite for Drupal 8

I just uploaded a screencast of how to use Deploy module in Drupal 8. It covers a few of the different possible use cases.

[embedded content]
Oct 02 2015

This was a question I got from a client. So I set to work on finding a solution to alert the team who needed to know when a page has changed.

TL;DR: CacheTags.

Drupal has an awesome caching layer. It makes use of cache tags, and these tags can be invalidated when something changes. For example, a view called "frontpage" would output the cache tag "view:frontpage". This view lists a bunch of nodes, each with their own cache tag: "node:1", "node:2" etc. When node 2 is edited, its cache tag and the frontpage view's cache tag would be invalidated.
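Any module can trigger the same mechanism through core's cache API. A minimal sketch using the Cache utility class:

```php
use Drupal\Core\Cache\Cache;

// Saving node 2 does this internally: everything cached with the
// 'node:2' tag (the node page, the frontpage view, etc.) is
// invalidated and will be rebuilt on the next request.
Cache::invalidateTags(array('node:2'));
```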

Taking this into account, if we want to know which pages have changed, we need to know what cache tags each page has, then which cache tags have been invalidated.

The client already has a crawler to check their sites, this can look at the X-Drupal-Cache-Tags header and capture the cache tags the page has. So all we need now is a way of telling the crawler what cache tags have been invalidated.

Welcome to the CacheTag Notify module. This has a simple settings page where an endpoint URL can be added. Then every time a cache tag is invalidated it gets POSTed to the endpoint as a JSON string. The crawler then needs to look up which pages use the invalidated cache tags, so it knows which have changed.

The CacheTag Notify module works by adding a CacheTagsInvalidator service. The invalidateTags method is passed an array of invalidated tags, which is then POSTed to the endpoint URL using Guzzle. Overall a very, very simple but effective solution.
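A minimal sketch of such an invalidator; the class and property names here are illustrative, not the module's exact code:

```php
namespace Drupal\cachetag_notify;

use Drupal\Core\Cache\CacheTagsInvalidatorInterface;
use GuzzleHttp\ClientInterface;

class CacheTagNotifyInvalidator implements CacheTagsInvalidatorInterface {

  protected $httpClient;
  protected $endpoint;

  public function __construct(ClientInterface $http_client, $endpoint) {
    $this->httpClient = $http_client;
    $this->endpoint = $endpoint;
  }

  public function invalidateTags(array $tags) {
    // POST the invalidated tags to the configured endpoint as JSON.
    $this->httpClient->request('POST', $this->endpoint, array(
      'json' => $tags,
    ));
  }

}
```

Tagging the class with cache_tags_invalidator in the module's services file is what makes core call invalidateTags() on every invalidation.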

Sep 23 2015
[embedded content]

At DrupalCon Barcelona this year I presented with Dick Olsson outlining a plan for CRAP (Create Read Archive Purge) and revisions (on all content entities) in core.

Phase 0

For Drupal 8.0.0
Enable revisions by default (https://www.drupal.org/node/2490136) on content types in the standard install profile and when creating new content types.

Phase 1

For Drupal 8.1.0

  • Improve the Revisions API performance, some of this will come from moving elements from the multiversion module into the entity API.
  • Enable revisions by default for all content entity types. So not just nodes anymore but blocks, comments, taxonomy terms etc.
  • Introduce a revision hash, parents and tree. Each revision needs to have a parent so you know where it’s come from, each parent can have multiple child revisions.
  • Data migration - Moving all 8.0.0 sites to 8.1.0 will mean moving their data to the new revision system.

Phase 2

For Drupal 8.2.0

  • Remove the ability to not have revisions. To simplify the API and the data stored it makes sense to remove the ability to disable revisions. This will allow us to remove all the conditional code around if an entity has a revision or not.
  • Delete is a new flagged revision. When deleting an entity a new revision will be created and this revision will be flagged as deleted. This is the archive element of the CRAP workflow.
  • Introduce purge functionality. There may be times when an entity needs to be completely deleted.
  • Commit trash module to core. Trash is just a UI for the delete flag. It displays all entities marked as deleted. It then allows these to be restored by creating a new revision not flagged deleted, or purged by removing the entity.

Simple right?

Sep 11 2015

At heart Drupal is an awesome CMS. The reason I would advise going for Drupal over any other framework is its ability to manage content entities at scale.

Earlier this week I discussed revisions in Drupal 8. Specifically the use of Multiversion module to allow Drupal to use a CRAP (Create Read Archive Purge) workflow for content management.

Since then I have been working on the archive and purge aspects of this workflow. The Multiversion module is pretty astonishing in how it revolutionises the way revisions are handled and content is managed, but so far there has been little focus on the archive and purge process. When entities are deleted they are simply flagged as "deleted"; they exist in the Drupal database but are gone from anywhere on the site's UI. Until now…

The all-new Trash module builds on the Multiversion module to provide a place where users can see all their deleted entities, listed by entity type, then restore them along with all their revisions. The first commit on the 8.x-1.x branch was only made 3 days ago on September 8th, so it's still early days, but the MVP is available as a dev release to have a play with.

Now, this is more or less the archive process working, next up is purge. So what do we purge? Well, ideally, nothing. It’d be great if we never need to truly delete any entities, but instead purge specific revisions. This is something that will be getting a lot of thought over the coming weeks.

Another step to look at is compaction. Multiversion has been following CouchDB's API, and coupled with the Relaxed web services module it allows entities to be replicated to CouchDB and other platforms following the same API. CouchDB has a notion of compaction via the "_compact" endpoint. How harshly should the content be compacted? Remove all revisions apart from the current one?

All of this will be discussed by Dick Olsson and me in our session Planning for CRAP and entity revisions everywhere in core at Drupalcon Barcelona.

Sep 07 2015

I’m sorry to say that on the surface little has changed with revisions in Drupal 8, but there is some work still being done.

The first step, for me, is to get revisions enabled by default. I was interested to hear from webchick that this came up indirectly during user testing. Most Drupal sites I have worked on have been for large enterprise organisations. These types of companies all want an audit trail and all want to effectively manage their content. This is easily achievable by using revisions. It looks as though many people new to Drupal are in a way scared by revisions. Enabling revisions by default will take away the reason to be scared and give them this awesome feature out of the box.

The current revisions system isn't perfect; it's pretty good, but not perfect. There's a lot of work being done to improve upon it. In Drupal 7 we had the Deploy module, which allowed users to move content from one site to another, such as from staging to production; this built upon the existing revisioning platform. In Drupal 8 we have the Multiversion and Relaxed web services modules. These were both demoed by dixon_ at Drupalcon Los Angeles.

Although Multiversion builds upon Drupal 8's core revision system, it's vastly different, with an awesome direction. One key feature is that it enables revisions for every entity: users, comments, everything. The way it displays revisions as a tree also makes it clear and understandable how the revisions work and how they relate.

The Relaxed web services module builds upon a similar ethos as the Deploy module, but it makes use of an API based on the replicatio.io protocol to handle bi-directional content replication. This means it can replicate content to any other replicatio.io protocol source such as CouchDB or PouchDB.

Step by step Drupal 8 is getting more amazing.

To find out more about all this head along to the Planning for CRAP and entity revisions everywhere in core session at Drupalcon Barcelona on Tuesday 22nd September at 11am in room 122-123 “Interoute”.

Sep 02 2015

In a post last week I discussed versioning in Drupal and briefly touched on version numbers in info files. A lot of my focus over the last few months has been around Composer and Drupal. One issue for Drupal when using Composer for contrib modules is that the info file is pulled from git, and therefore doesn't have the version number, project name or datestamp that the drupal.org packager adds to the info.yml file.

I am currently working on a patch for the update module which will get the project name from the module’s composer.json. This is just the first step because the project name is what’s needed for the module to show on the update status page.

The patch adds a composer parser service which, like the info parser service that parses a module's info.yml file, parses a module's composer.json file. From here we can get the package name, such as "drupal/devel"; exploding that gives us the project name from the second element of the array.
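The parsing step itself is tiny. Here is a standalone sketch, in plain PHP outside any Drupal service, of extracting the project name from a composer.json "name" value:

```php
<?php

// Standalone sketch: extract the project name from a module's
// composer.json. A package name like "drupal/devel" yields "devel".
$json = '{"name": "drupal/devel", "type": "drupal-module"}';
$composer = json_decode($json, TRUE);

// The project name is the second element of the exploded package name.
$parts = explode('/', $composer['name']);
$project = $parts[1];

echo $project; // prints "devel"
```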

Composer.json files sometimes include a version number, so we could use that too, but this is very rare. The only real option for the version number is the git branch or git tag; this is how the git deploy module works.

Jul 13 2014

The ingredient icons are term reference fields formatted to output as their respective image fields, rather than as a link or text

The example situation is a view that displays a list of nodes or fieldable entities (for our example, items on a menu), each of which has one or more taxonomy term references (in this example, the main ingredients). While it's simple to output the term references as plain text or a link, showing an image or other field attached to the referenced term instead presents problems.

Using views relationships

The obvious solution is to create a relationship to the taxonomy in the view set up, and add the image field via the relationship. However, this currently presents issues with duplicate rows being output. If an item in the view has more than one term reference, it is displayed once for each term reference. Because of how views works, setting “distinct” and “pure distinct” in the query settings does nothing as they are technically distinct results (each has a different term reference).

The views_distinct module should offer a solution to this kind of problem, but currently it does not work in a way that can aggregate the required fields while filtering duplicates in this situation.

Creating the custom field template

In our example view, no relationship is used and the relevant term reference field is included in the field list

If you have never made a views template before, click the link “Information” in the Other section of the view:


This displays a list of possible templates to use in customising your view for each field in the view. The template names shown are ordered from least specific to most specific – the filename of the template determines in which situations it is used. The bolded template is the one currently being used. To make a new custom template, create a file in the theme's templates directory with that name. Click the link next to it to get the default code which should go into the template. In this case we wish to control output in all situations the field appears, so the first custom template option (highlighted) is the one used.


From the helpful comment at the top of the file, it can be seen that the contents of the view item can be found in the $row object. By debugging this object the location of the ingredients term references and their respective image fields can be found.

In this case the term reference field data is at:


and the image field at:


Where INDEX is the array index for multiple items.

The field_view_field() function is useful here to display the image field without needing to worry about URLs and allows control of formatting, e.g. image style presets. We also need to use an isset() condition to prevent warnings being thrown where rows don’t have any term references.

Putting this all together gives the example code:

if (isset($row->field_field_ingredients)) {
  $term = $row->field_field_ingredients;

  foreach ($term as $ingredient) {
    print render(field_view_field('taxonomy_term', $ingredient['raw']['taxonomy_term'], 'field_image'));
  }
}

This outputs the image, but at its original size and with an ugly label that says "Image:". To fix this, we need to use the optional fourth parameter of the field_view_field() function to control display and formatting of the field. The line inside the foreach() loop becomes:

print render(field_view_field('taxonomy_term', $ingredient['raw']['taxonomy_term'], 'field_image',
  array('label' => 'hidden', 'settings' => array('image_style' => 'thumbnail'))));

This hides the label and sets the image style preset for the output to ‘thumbnail’.

Final code:

if (isset($row->field_field_ingredients)) {
  $term = $row->field_field_ingredients;

  foreach ($term as $ingredient) {
    print render(field_view_field('taxonomy_term', $ingredient['raw']['taxonomy_term'], 'field_image',
      array('label' => 'hidden', 'settings' => array('image_style' => 'thumbnail'))));
  }
}

It will be necessary to set the views Query settings to distinct (pure distinct shouldn’t be needed) to avoid multiple rows being output, but in this case it will filter the view as expected.

Nov 06 2013

While there are a couple of modules that can be used to easily set up entity reference field relationships, in particular the Corresponding Entity References module, it's also possible to create these kinds of relationships with Rules. While initially more complicated to set up (though not requiring a dedicated module), the result is a more powerful setup that can be easily cloned and extended with additional actions and conditions – for example, sending e-mails, displaying messages on the site, or changing the value of a third field.

Entities and Entity References

Drupal 7 is built around entities and entity bundles. To understand this concept, it’s necessary to look back to how things were in Drupal 6. D6 used a system of nodes, expanded by the CCK module to provide configurable content types. Content types have to be types of node by definition, and come with comments, revisions etc that may not be relevant in many use cases and cause an overhead.

With entities, there is no longer anything special about nodes. In Drupal 7, nodes, comments and users are all examples of entities, configured from the same base entity system. It’s now possible to create new entity types and many Drupal 7 modules take advantage of this to provide enhanced functionality and performance – for example, the Commerce module with its line item entity type and profile2 module’s profile types.

Entity references replace the role of the user and node reference fields from CCK. These field types allow the definition of relationships between two things (e.g. a user and a node). There are two modules commonly used to enable entity references in Drupal 7:


References

References is a simple module which emulates the node and user reference fields from Drupal 6. It is easy to configure and use.

Entity reference

Entity reference is a powerful module which is not restricted to any predefined entities like the References module. It is possible to create references to any entity type, including custom entities. The module is a little more complicated to set up and use, particularly for anyone previously unfamiliar with reference fields. There is a module to migrate from references to this module should referencing other entity types become a requirement.

Both modules work with rules in the same way, but if you wish to work with more than just user, node and taxonomy term entities you will need entity_reference.

Creating a synced client / manager relationship in rules

In this example, the user entity can have an attached customer profile or a manager profile, containing different fields, using the profile2 module.

The client has a user reference field called “manager”, which can only contain one item, and the manager has a user reference field called “clients” which may contain multiple values.

When a manager is assigned to a client, using the manager field on the client’s profile, the objective is to update the “clients” field on the manager’s profile to match.

We are also going to set up a second rule that fires only when a currently assigned manager is removed or changed, and to make things more fun we have a “past clients” reference field for the managers which former clients should be prepended to.

You will need the Rules and Token modules installed and enabled.

The basic rule: Add client to the manager’s ‘clients’ entity reference field when a manager is assigned to a client.

Event and conditions

Event: After updating an existing profile

To have access to the relevant fields later on and to be sure the rule only fires if the field is not empty:

  • Condition: Entity has field
  • Parameter: Entity: [profile2]
  • Field: field_current_manager

Actions

  1. Add an action “Fetch entity by ID”
    We need to load the entity that will be altered by the other actions. The type of this entity is User – the user who is being assigned as the client’s manager.
    • Entity type = User
    • Data selector= profile2:user:profile-client:field-current-manager:uid
      • Provides variable “manager”
      • note that this is actually accessing data through the entity reference field to get the new manager’s UID – pretty cool!
  2. Add an action “Add an item to a list” – this is needed rather than ‘set value’ as the clients field can hold multiple values. We use the ‘entity fetched’ variable provided by the previous step to achieve this:
    • List to be added to: manager:profile-manager:field-clients
    • Item to be added: profile2:user (the user of the client profile being updated)
    • Check the box “enforce uniqueness” and select “prepend to the list” if you wish to have newest clients at the top.

You could choose to add other actions, such as to display a message on the site such as “Assigned manager is [token value]” or send an e-mail. These may require the addition of a condition that makes the rule only fire if the client did not have a manager before (use a data comparison) – as currently the rule fires every time a profile is updated – but this does not cause any issues if “enforce uniqueness” is checked.

You can also have this rule react to profiles being created if the manager may be delegated when data is first entered into the profile. Otherwise profiles would have to be saved twice in this situation. To do this, add a second event: After saving a new profile. You can also clone this rule and change the event for greater power.

Rule Two: Adding to a manager’s past clients list when a manager is changed or removed

With a second rule we can react in two situations. When a manager is changed or removed from the manager field (the new rule), and when a manager is assigned and the manager field was empty (the first rule – which will need an extra condition)

  1. Clone the first rule. Call it “Sync fields when manager is changed and update old manager’s past clients field”
  2. New condition: NOT Data comparison: profile2-unchanged:field-current-manager

    This makes sure that the rule only fires if the new manager is different from the old one (preventing the rule from firing when the profile is updated but this field hasn’t changed)

  3. New condition: NOT Data value is empty:
    So the rule only fires when there was a previous manager assigned; it would otherwise also fire in the same situations as the first rule.
  4. New action “Fetch entity by ID”
    Load the old manager’s user entity
    • Entity type = User
    • Data selector = profile2-unchanged:user:profile-client:field-current-manager:uid
    • Provides variable “old-manager”
  5. New action: Add an item to a list:
    • List to be added to: old-manager:profile-manager:field-past-clients
    • Item to be added: profile2:user
  6. New action: Remove item from a list
    • List: Selected list: old-manager:profile-manager:field-clients
    • Item to remove: profile2:user

Again, there are plenty of avenues for extending this with new actions. For example, “Display a message on the site” – a message helpful for admins might be:

[profile2:user]‘s manager set to [profile2:field-manager] – Old manager was [old-manager:name]

Adjusting the first rule

To prevent the first rule from also firing in situations where the second does, add a new condition to the first rule:

  • Condition: Data value is empty
  • Data to check: profile2-unchanged:field-current-manager

Two-way relationships should be set up using another rule with inverse data selectors and, if applicable, a different event to react to. Hopefully this article helps illustrate what can be achieved with Rules and entity references.

Apr 05 2013

Listen online: 

Git is often touted as, among other things, being extremely flexible. It's a big selling point for the software: you're not throwing all your eggs in one basket and assuming that there is one singular workflow to rule them all. This flexibility can also be a challenge, though. In this podcast we talk about the various ways that we at Lullabot use Git when working with teams of people, both small and large, and the organizational tricks we've learned along the way to help make sure our projects keep moving forward and don't get too messy.

Some of the things discussed include designating someone as a branch manager, working in feature/task branches to keep your code organized and easy to review, and using pull requests. Spoiler alert! We end up boiling it all down to this: there is no one perfect way, but whatever way you decide to organize your team, make sure you document it and that everyone on the team knows what you decided.

Podcast notes

Ask away!

If you want to suggest your own ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Contact us page

Release Date: April 5, 2013 - 10:00am


Length: 42:35 minutes (24.85 MB)

Format: mono 44kHz 81Kbps (vbr)

Feb 17 2013

The power of the Webform module allows sophisticated registration forms to be put together far more quickly than would be possible by modifying the standard system. Webforms and Rules can also easily be cloned, making it simple to set up a multi-faceted registration system, as site-wide settings for user registration no longer apply – everything (sending e-mails, assigning roles, etc.) is handled by Rules.

Restrictions of Drupal’s built-in user registration system

Drupal sites allow one of these three options for user registration:

  • Registration Disabled
  • Registration Approved (with admin approval)
  • Registration Approved (no approval needed)

There is also a checkbox to select whether e-mail verification is required or not.

In my previous post on Drupal Commerce’s powerful integration with Rules, I explained a way to improve the anonymous user experience on completing an order, but it required the user registration settings to be set to the most lenient option “Registration approved, no approval needed, no e-mail verification needed”.

In the past when I have used this setting, sites were quickly inundated with bot-created accounts. To prevent this, I added a RewriteRule from /user/registration back to /user and hid the “register” tab on the user page with CSS.

The side effect of this was to effectively lock registration to one of only two methods – via placing an order in the store, or via manual admin creation of accounts.

A need arose on a site configured like this for user registration functionality for a new ‘members only’ section. There are a number of ways I could have tackled this (the most obvious being a ‘subsite’), but I decided to see what was possible using Rules and Webform. I was very pleased with how it turned out and believe it can be useful for any Drupal site looking to make a departure from the standard user registration system.

Adding user registration to your Drupal Webform

You must install the Webform Rules module, which provides relevant events, conditions and data selectors to Rules. You must also be using version 4+ of Webform (though still an alpha release, it is worth upgrading to for improved built-in error-handling presentation on existing forms and for built-in ‘conditions’, which are similar to a stripped-down version of Rules conditions used internally by the Webform module). This guide also assumes you have Drupal Commerce installed, though you only need it in order to clone one of its rules.

Update November 2013
It turns out that the commerce_checkout module must be installed in order for the “Entity exists by property” condition used below to be available in Rules. This is an extremely useful condition and a patch has been made to add it to the Rules module, though at the time of writing it has not been committed to dev. If you wish to use this on a site without commerce_checkout, you’ll need to apply the patch and edit it to change the type to text (I will make a new patch with this done and add it to the issue).

  1. If you don’t have an existing form you wish to use, create your registration form. A password field is not required or accounted for in this guide; the password is set by the user on first logging into their account, via a link in the e-mail sent to them. The only necessary field is ‘e-mail’. You can set up the Webform with whatever other fields and functionality you like.
  2. Make a clone of the rule from commerce, “Create a new account for an anonymous order”, calling it “Create a new account on Webform submission“.
  3. The first step is to delete the event “Completing the checkout process” (this will throw a bunch of errors on reloading the page), replacing it with a new event “After a Webform has been submitted“.
  4. Delete all the conditions except “NOT entity exists by property“.
  5. Add a new condition “Webform has name” and select your registration form.
  6. Move “send account e-mail” out of the loop. Select the type of account registration e-mail relevant. If you have already customised the relevant one, as in my case, just use one of the other options and alter the template to fit (in the configuration > people > account settings page).
  7. Delete the other actions in the loop to fetch the created account, and the loop itself (unless you wish to manipulate areas outside the user entity based on its data too).
  8. There will still be errors shown for most of the rules. This is because they are still using tokens from Commerce. These need to be replaced with tokens from the Webform submission, which take the form [data:email-value] or [data:email-value-raw], where ‘email’ is the key of your Webform component.
  9. (Optional) Create a new role for the users registering via this form.
  10. Add an action below “Create a new Entity” and above “Save entity”. Set a data value [account-created:roles] and select from the list the role to assign to user accounts created via this form.
  11. Finally, add an action “Show a message on the site”, last in the actions list. The Webform module does not know whether the rule was successful or not, so it goes to the confirmation page even if there was a problem in the rules evaluation; we therefore show a success message as part of the rule instead. I have not fully looked into the conditions aspect added to Webform 4 to see if there is any provision for integration in this area.
  12. Test your new Webform-based registration form.

Update November 2013

Stay tuned for a follow-up article on advanced entity manipulation based on submitted Webform data using the rules_data_transforms module – including addressfields, times and dates, integer values and multiple-value fields (e.g. checkboxes) – both input and output.

Feb 07 2013

Lullabot Re-Launches a Responsive GRAMMY.com for Music’s Biggest Night

Lullabot is excited to announce its fourth annual redesign and relaunch of GRAMMY.com! Just in time for the 55th Grammy Awards this Sunday, February 10th, this year's site features a complete redesign, as well as an upgrade to Drupal 7 leveraging Panels and Views 3.x, some cool, fullscreen, swipeable photo galleries and other mobile-web bells and whistles. The site is also fully responsive, allowing the 40 million+ expected viewers to stream, share, and interact across any web or mobile device. Special congratulations to Acquia for hosting GRAMMY.com for the first time, as well as to our good friends over at Ooyala for once again delivering the site’s archive video content, and to AEG for delivering the live stream video via YouTube.

Simultaneously tune in to every aspect of music’s biggest night so you won’t miss a thing: awards news, video, pictures, artist info, Liveblog posts, thousands of tweets-per-minute and behind-the-scenes action.

The GRAMMY Live stream starts tomorrow, Friday February 8th at 5pm ET / 2pm PT, so you can spend all weekend with the website as you gear up for the CBS telecast on Sunday night. Don't miss any of the GRAMMY action this weekend!

We'll be watching and tweeting along from @lullabot. Follow us on Twitter and let us know what you think.

Feb 01 2013

Listen online: 

Jeff Eaton and Drupal initiative lead Greg Dunlap discuss the history of digital transformation at the Seattle Times, the difficulties of cross-site content sharing, and the importance of cross-discipline communication.

Release Date: February 1, 2013 - 2:30pm


Length: 39:20 minutes (16.37 MB)

Format: mono 44kHz 58Kbps (vbr)

Jan 31 2013

Replicating module_invoke_all and drupal_alter in Javascript

If you've ever written a Drupal module before, you're likely familiar with Drupal's hook system. I'm not going to go into detail about how the hook system works, or why this particular pattern was chosen by Drupal's developers. What's important here is what this system allows module developers to accomplish.

At its most basic, the hook system is what allows me to write a module that enhances or extends Drupal -- without ever having to modify a line of someone else's code. I can, for example, modify the list of blocks that are available on a given page by simply implementing a "hook" function in PHP that modifies the information that was already set up. This approach is one of the things that makes Drupal incredibly flexible!

When you're writing your own custom modules, it is customary to expose these types of hooks for other modules, too. That way, other developers can come along and make minor modifications or feature enhancements to your module by "piggybacking" on your module's functionality, rather than hacking your code. It also means that you don't have to anticipate every possible use case for your code: by providing these extension points, you allow future developers to extend it.

Drupal makes it really easy for module developers to do this, and it provides a set of helper functions that allow you to easily broadcast these "I have a hook! Who wants to tie into it?" announcements to the world.

The case for APIs

Now, that's all well and good, but what if the functionality I want people to be able to alter, or the events I want people to be able to react to, are encapsulated in Javascript? This is where Drupal breaks down a bit and we're left to our own devices. Drupal provides a simple mechanism for modules to essentially register a bit of code that they would like to be executed whenever Drupal.attachBehaviors is called. This happens when the DOM is fully loaded and Drupal's Javascript code has been properly initialized, and any time new elements have been added to the DOM via AJAX. And that's about it.

For most cases where Javascript needs to interact with Drupal this works just fine. What you're likely really after is some element in the DOM anyway so you can do your sweet web 2.0 fadeIn().

Sometimes, though, your Javascript needs are more complex than adding visual pizzaz. Consider this: you've been asked to write a module that integrates a video player from a third-party site into Drupal. The video service offers a straightforward Javascript-based embed option. All you have to do is include their Javascript file on the page and call the player.setup() method, passing in an embed code so that the player knows which video to play. Easy enough, and a common pattern.

Let's say the setup() method takes not only an embed code but also an array of additional parameters to configure how the player appears and behaves. Some of those parameters are callback functions -- the name of an additional Javascript function that should be called when certain things happen. Some examples of this might be 'onCreate' when the player is embedded and ready to start playback, 'onPause' when someone clicks the player's play/pause button, and so on. For our example we'll assume that we're implementing an 'onCreate' callback. It should be triggered by the video player after it's been embedded, and is ready for playback to start. (Another common example of something like this is jQuery.ajax(), which can take 'success' and 'error' callbacks; which one gets called depends on the result of the Ajax request.)

This should be simple, right? Just set the callback to 'Drupal.myModule.onCreate' and write the corresponding function in your mymodule.js file!

Except... later on in the project, Kyle comes along and is told to implement an unrelated piece of functionality that also fades a DOM element in on the page after the video player has been embedded. Now two different functions both need to fire when the video player has been created. You can't just pass in a second 'onCreate' callback function to the player.setup() method -- it only allows one value! So now Kyle is stuck trying to jam his unrelated Javascript into your Drupal.myModule.onCreate function. Blam! You've got a mess of unrelated, hard-to-maintain code!

A better way of handling this would be for your module to re-broadcast the 'onCreate' callback to give other code a chance to respond to it as well. You could take it one step farther and implement a system that sends out a notification when the 'onCreate' event occurs, and subscribe to it with any functions that need it. That approach would be a lot like the module_invoke_all() function in Drupal's PHP API.

Lucky for you, there are all kinds of ways to do this in Javascript! I'll outline two of them below.

The Drupal Way

One way of solving the problem is to replicate the Drupal.behaviors system provided by core. That's actually pretty straightforward. You need to:

  • Create a well known place for someone to register their objects or functions.
  • Write a short snippet of Javascript that will loop through and execute these registered functions.
  • Call this Javascript at the appropriate time.
  • Ensure that your module's Javascript is loaded before that of other modules.

In your Javascript code, you'll need to create a standard object that other modules can go to when they register their functions. In core, this is Drupal.behaviors. We'll create our own new object for this example.

var MyModule = MyModule || {};
MyModule.callbacks = {};

Then you'll need an easy way to call and execute any registered callbacks.

MyModule.executeCallbacks = function(data) {
  $.each(MyModule.callbacks, function(key, callback) {
    if ($.isFunction(callback)) {
      callback(data);
    }
  });
};
What this code does is loop over all the functions collected in MyModule.callbacks and execute them. Pretty simple, really! It works well for notifying any code of some "event" as long as you remember to call the MyModule.executeCallbacks() method at the appropriate times.

Now, any other module can register callback functions that will be called by the MyModule.executeCallbacks() method:

MyModule.callbacks.theirModuleOnCreate = function() {
  // Do some sweet Javascript stuff here ...
};

Put it all together by implementing your onCreate callback (the code we wanted to write at the very beginning of this exercise!) and calling the new code.

MyModule.onCreate = function() {
  // Give all modules that have registered a callback a chance to respond.
  MyModule.executeCallbacks();
};

Pretty painless. Just make sure your module's Javascript file is loaded before any others: in Drupal, you can do that by changing the weight of your module to -10, or something similar. If you don't do that, you'll end up with warnings about "MyModule.callbacks being undefined" when someone else's Javascript is loaded first, and tries to register a callback with your object.

This approach is easy to implement, but it still has some problems.

  • It's a major "Drupalism." For anyone familiar with Javascript but not with Drupal's way of doing things, it's a conceptual hurdle that needs to be overcome before understanding how to add a new behavior.
  • If one behavior fails, the execution stops: anything that hasn't been executed will not get called, and you're dependent on others to write code that doesn't fail.
  • There is no easy way to remove a behavior added by someone else's code, or to overwrite the way that Drupal core does something. Don't like the table drag javascript? The only way around it is Monkey Patching.
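The second problem above (one failing behavior halting the rest) can be mitigated by wrapping each callback in a try/catch when the registry executes them. Here is a minimal sketch in plain Javascript; the SafeCallbacks name is purely illustrative and not part of Drupal or jQuery:

```javascript
// A callback registry that keeps going even when one callback throws.
// Failures are recorded instead of halting the remaining callbacks.
var SafeCallbacks = {
  callbacks: {},
  add: function (key, fn) {
    this.callbacks[key] = fn;
  },
  execute: function (data) {
    var failed = [];
    for (var key in this.callbacks) {
      if (typeof this.callbacks[key] === 'function') {
        try {
          this.callbacks[key](data);
        } catch (e) {
          // Note the failure, but let the remaining callbacks run.
          failed.push(key);
        }
      }
    }
    return failed;
  }
};
```

A caller could log the returned list of failed keys, making it easier to track down a misbehaving module without breaking everyone else's behaviors.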

An alternative way

Another approach that's a bit more "Javascripty" is to use the jQuery.trigger() and jQuery.bind() methods. With them, you can create custom events that other modules can listen for and react to. It's a lot like using jQuery to intercept the 'click' event on a link, perform some custom action, then allow the link to continue with its processing. In this case, though, we'll be triggering our own custom event on a DOM element. To do this you need to:

  • Call jQuery.trigger on an object or DOM element in order to broadcast an event.
  • Use jQuery.bind on an object or DOM element to register a listener for an event.
  • Wash, rinse & repeat ...

As usual, the code samples below would go inside of your module's mymodule.js file and be included on the page when necessary via the drupal_add_js() PHP function.

Inside of our module's .onCreate callback, we use the jQuery.trigger() method to trigger our custom event and alert all listeners that they should go ahead and do their thing. It's not necessary to prefix our event names with 'myModule.' but it does lead to cleaner code. (It also makes it easier to unbind all of the events associated with a particular module in one step.) This approach is functionally equivalent to calling the MyModule.executeCallbacks() method from the previous example. We're telling anyone that wants to participate that now is the time to do it!

MyModule.onCreate = function() {
  // Trigger an event on the document object.
  $(document).trigger('myModule.onCreate');
};

The second piece of this puzzle is using the jQuery.bind() method to add an event listener that will be fired any time our custom event is triggered. Each event listener receives the jQuery.Event object as its first argument. The code below is equivalent to the bit above where we registered our callback with MyModule.callbacks.theirModuleOnCreate.

$(document).bind('myModule.onCreate', function(event) {
  // Do my fancy sliding effect here ...
});

Any number of modules can bind to the custom event, and respond to the onCreate callback event, without ever having to modify your module's Javascript.

Another technique that I've used in the past is to create drupal_alter()-style functionality in Javascript. This allows others to modify the parameters that my code passes to a third party's API. It's easy to do, since you can pass an array of additional arguments to the jQuery.trigger() method. They'll be passed along to any listeners added with jQuery.bind(). And, since complex data types in Javascript are inherently passed by reference, a listener can make changes to the incoming parameters and they'll be reflected upstream. Something like the following does the trick.

MyModule.createWidget = function() {
  var parameters = {width: 250, height: 100, onCallback: 'MyModule.onCreate'};
  // Allow other modules to alter the parameters.
  $(document).trigger('myModule.alterParameters', [parameters]);
  // Now hand the (possibly altered) parameters to the third-party API.
  player.setup(parameters);
};

Then anyone else could bind to the new 'myModule.alterParameters' event and receive the parameters object as an additional argument. The first argument for any function using jQuery.bind() to listen to an event is always the jQuery.event object.

$(document).bind('myModule.alterParameters', function(e, parameters) {
  // Here I can change parameters and it will be reflected in the function that triggered this event.
  parameters.width = 350;
});

While this method isn't perfect either, I like that it's closer to the Javascript programming patterns used in the broader world outside of Drupal. This means it's easier for someone not familiar with Drupal to understand my code and to quickly figure out how to work with it.

It does, however, still exhibit some of the same problems as the Drupal.behaviors method: notably, if any one listener has code that fails, the whole system breaks down. In addition, you have to trigger and bind to events on either a DOM element or another Javascript object.


Drupal itself doesn't come with a Javascript equivalent to the module_invoke_all() function, but there are a lot of ways we can implement a similar system ourselves. When you run into this problem in your development, I encourage you to use the second approach outlined: it has all the capabilities of the Drupal.behaviors approach, with less code and a shallower learning curve.

These are by no means the only methods for accomplishing this sort of task in Javascript. Another example is the popular publish/subscribe pattern, but we'll explore that in another article! Whichever approach you choose, it's important to build for future flexibility, just as you would with your PHP code.
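As a taste of that pattern, a publish/subscribe channel can be sketched in a few lines of plain Javascript, with no jQuery or DOM element involved. The PubSub name and API here are illustrative and not taken from any particular library:

```javascript
// A minimal publish/subscribe hub: subscribers register per topic,
// and publish() fans its extra arguments out to every subscriber of that topic.
var PubSub = {
  topics: {},
  subscribe: function (topic, fn) {
    (this.topics[topic] = this.topics[topic] || []).push(fn);
  },
  publish: function (topic) {
    var args = Array.prototype.slice.call(arguments, 1);
    (this.topics[topic] || []).forEach(function (fn) {
      fn.apply(null, args);
    });
  }
};
```

Any number of modules can subscribe to the same topic and all of them run when it is published, which is exactly the many-listeners-per-event behaviour a single onCreate callback slot lacks.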

Jan 30 2013

About Drupal Commerce

Out of the box, Drupal Commerce is not, as yet, very streamlined in terms of purchaser experience or non-technical store administration. In part this is due to its relative youth in development, and especially in wide deployment, but a significant factor is its modularity: a lot of functionality that is provided as part of the default system in Ubercart must be installed as an optional companion module with Commerce – for example, there is no shipping or payment without installing the relevant modules. Even with these modules there is still a lot for the web developer to do beyond a very basic setup, and Commerce may seem clumsy and limited in what it can do. What may not be realised is that the excellent integration with the Rules module gives Commerce a significant amount of power and flexibility under the hood. In this guide I am going to use this module extensively and will share some ideas on improving the customer experience and the ease of non-technical admin on a typical Commerce site.

Update November 2013

Commerce has matured into a stable and well-supported platform, but its basic out-of-the-box nature and modularity remain.
A significant proportion of Commerce developers now use the Commerce Kickstart platform, a commercial but open-source project, for building typical use-case sites.
Although nearly all the development work needed to create a typical online store is already done, configuration of products and the last parts of development have a steep learning curve (and work differently from vanilla Commerce) – to the point that some developers use the ‘example store’, with products, taxonomies etc. already set up, as a base to build from rather than using the platform installation. It should be stressed that there are many ways to build a feature-rich store with Drupal Commerce; the Kickstart platform is not the only way to go. For other, non-typical use cases the Kickstart platform carries a large overhead. Everything in this tutorial is likely to still be of use if building from a basic Commerce installation.

Some ideas worth implementing are found here: Express checkout with Drupal Commerce. The main thrust of that article is setting up ‘buy now’ instead of ‘add to cart’ functionality, but it walks through some aspects of the setup process which are worth doing on all Drupal Commerce sites. This guide assumes you already have a basic Drupal Commerce site set up. In this guide I am going to use the Rules, Views and Views Bulk Operations modules to tackle several specific points where the system has issues or lacks built-in functionality.

Covered in this tutorial

  • Fixing the default order success page for anonymous orders – account creation on purchase, auto log in, redirect to order page
  • Improving the information displayed to purchasers
  • Setting up a decent workflow for handling orders
  • Improved handling of delayed payment options (e.g. cheque, bank transfer)
  • A store admin page showing just orders waiting to be sent out, rather than all orders
  • Bulk operations on order tables
  • Printing Address Labels with VBO

Improved Handling of Anonymous/First Time Orders

new anonymous drupal commerce order page

An improved order success page for anonymous users/first-time customers, made with Rules

By default in Drupal Commerce, when an anonymous user checks out a purchase they are presented on order completion with a page containing very minimal info: only the order number and the message “Thanks for your order – please click here to view your order when logged in”. Of course, anonymous users by definition aren’t logged in, and clicking the link gives them a generic “access denied” error with no further info, which is very unfriendly:

The link does work for authenticated users, but even then it would be desirable to display the order details on the confirmation page rather than requiring a click. An easy fix would be to edit the message to remove that link – but the page doesn’t show details of the order apart from the order number, and some things (e.g. the selected postal method) are, as far as I know, impossible to output via tokens to this page. Others have tackled the issue by forcing customers to create or log in to an account before ordering, but requiring registration is not going to be optimal for all e-commerce sites. The page that a logged-in user is taken to on clicking the link mentioned above would, to some extent, be the perfect order confirmation page, apart from not having a success message. So how do we get past the access denied error? First, we must make sure that accounts are fully and instantly registered with the site and do not require confirmation or admin approval to be accessed. Second, we set up an automatic redirect to the previously mentioned, previously inaccessible new order page with Rules:

System Setup

Go to the configuration > users page, change the setting to “anyone can create an account” and untick “e-mail verification needed” – this enables instant account creation and verification. Now a user can see their own order details page on checkout without any errors. Note: this will enable spam signups to your site, so depending on your needs you may wish to make your standard user registration page inaccessible. On the site I developed this store for, I disabled the registration page, but a need later arose for a functional registration system alongside this, for which I found a fairly simple solution using Webform – stay tuned for a follow-up article.

Rules Setup

Here we set up a rule that sends a customer straight to their order details page. Add a new rule in workflow > rules and call it “Redirect to order page”.

  • Event: Completing the checkout process
  • Action: Page redirect to [commerce-order:customer-url]

The order details page doesn’t have any success message – but we can display our own with the Rules action “display a message on the site”, and even improve on the basic setup with different messages for logged-in (repeat) and anonymous (first-time) purchasers.

Setting new order success messages

To set an order success message for anonymous users:

To the rule “create a new account for an anonymous user”, add:

  • Action: Display a message on the site (add your message for first-time purchasers)

You can use HTML. The default “status” styling with a green tick is fairly good for a success message.

Set order success message for authenticated users and repeat purchasers

Create a new rule and call it “Show a success message for authenticated purchasers”.

  • Event: Completing the checkout process
  • Condition: NOT entity is new [site:current-user]
  • Action: Display a message on the site (add your message for repeat/logged-in purchasers)

Give both of these rules a high weight (large number) so they only run after successful evaluation of the rest of the “on completing checkout” rules. I haven’t yet got this to work properly for payments involving an off-site redirect (e.g. PayPal WPS), so to handle these I would recommend customising the default success page to omit the link to the order details and simply instruct purchasers to log in via their e-mail.

Setting up a good order handling scheme

This is sometimes a stumbling block. The Commerce system has several stages an order can be put through, but how and when these are used is up to the developer. They are:

  • Pending
  • Processing
  • Complete

(There are multiple stages before an order is successfully placed, but those are not relevant here.) The most sensible way to use these for a good workflow is:

  • Pending – order received, but payment not authenticated
  • Processing – Payment authenticated, items to be readied for dispatch
  • Complete – Order has left on its way to customer.

If further stages are necessary for your enterprise, there is an optional module to achieve this.

Update Order status on payment completion automatically

Building on this idea, a great rule to add that saves work in the store back end, especially with non-instant payments, is:

  • Event: When an order is paid in full
  • Condition: None
  • Action: Update the order status: Processing

With payments where authentication comes through some time after the order is placed, this helps ensure your store is up to date without any work.

Improved Delayed Payment Handling

If using modules like cheque or bank transfer, there is currently no built-in way to show the relevant details (the bank account number to pay to / the address to post the cheque to) except on the ‘payment’ page of the checkout process (with an instruction to write them down before placing the order…). The bank_transfer module does not recommend use on production sites because of this issue. Again, this can be solved by adding more rules that display a message on the site when certain conditions are met. Once set up, the rule shows the payment details to the user on their order page until the payment is marked as cleared in the system, at which point the message automatically stops showing.

Rule Setup

Event: On viewing order
Condition: payment-method comparison = your payment method (e.g. cheque)
Condition: order-balance > 0 (this will stop showing the message once the payment is cleared in the system)
Condition: data comparison: site:current-page:url = [commerce-order:customer-url] (without this condition, the message will show wherever orders are displayed — multiple times on the admin 'orders' page, for example — instead of just on the customer's order page)
Action: Display a message on the site

Set a high weight (higher number) for the rule so that its message is displayed below any success message from checkout completion, and, if you wish, change the message type from "status" (suitable for a success message) to "warning" (suitable for information). This separates the two messages into different boxes and gives them a different visual context. You can always style the output of the messages just for this page, though I have found the default presentation acceptable (example of a first-time order):

A page for store admin to ship orders from

The orders page (store > orders) for order administration is quite lacking in useful information. It's simple enough to edit which fields are displayed in the View that creates it, but for someone whose job is to post out orders, the page is not easy to use at all. Create a new view of orders displayed as a table (you can also clone the default orders view and adjust it), calling it "Processing Orders". This view will handle just the orders that are paid for and ready to be shipped. The basic view includes these fields:

  • Name
  • Line Items
  • Shipping Address
  • Order Total
  • Order Balance

Use a filter on order status to show only processing orders; in the centre column of the Views admin you can set a URL and a menu entry through which admins can access this new page.

Using Bulk Operations to speed up order handling

If you don’t have it already, install the module Views Bulk Operations (VBO). This is a powerful module that lets administrators achieve tasks on multiple items in the database with a checkbox interface.


To mark orders as shipped/complete en masse

We need to create a component in Rules – this is like a segment of a rule that can be used by other rules or modules.

  • Component type: Rule
  • Add a parameter at the bottom (Commerce Order – the action operates on an order)
  • Set the action to "Update the order status" – Completed

Go back to your view and add a VBO field. Select the component you just created as one of the possible actions. Now you can mark orders as complete (shipped) en masse, instead of one at a time at three or more clicks each. It's also easy to set up an option to delete orders in bulk.
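A Rules component with a parameter can likewise be sketched in code: in Rules for Drupal 7, `rule()` with a variables array creates a component whose variables become parameters, which VBO can then feed with the selected orders. Machine names and the `mymodule` prefix are illustrative:

```php
/**
 * Implements hook_default_rules_configuration().
 */
function mymodule_default_rules_configuration() {
  // Rule component with a single Commerce Order parameter, usable
  // as a VBO action: mark the selected order as completed (shipped).
  $component = rule(array(
    'order' => array(
      'type' => 'commerce_order',
      'label' => 'Order',
    ),
  ));
  $component->label = 'Mark order as shipped';
  $component->action('commerce_order_update_status', array(
    'commerce_order:select' => 'order',
    'order_status' => 'completed',
  ));
  return array('mymodule_mark_order_shipped' => $component);
}
```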

To add a basic Address labels Printing page (for shipping out orders)

This allows a non-technical admin to tick the checkboxes on a batch of orders on the 'processing orders' page, then click a button to be taken to a page with those addresses formatted for printing.

  1. Add a new display to the previously created processing orders view. The only field needed is the shipping address. Give the display a URL.
  2. In the view's 'advanced' settings, add a contextual filter and select "Order ID".
  3. In the original processing orders display, edit the VBO field and create a "pass argument to page" operation. Call it "Print address labels", pass to the URL of the print labels page, and select the "multiple arguments" option.

Now whatever orders are selected with the VBO checkboxes can be displayed for address label printing. You will need to do some CSS styling with print styles.

Jan 25 2013
Jan 25

Listen online: 

About two months ago we got a comment from a listener, KeyboardCowboy, about questions they had around contributing code to Drupal. Join Addison Berry, Karen Stevenson, Andrew Berry, Kyle Hofmeyer, Joe Shindelar, and Juampy Novillo Requena as we discuss those questions, and chat about how we got involved with contributing code, the challenges we face, and list out things that can be confusing or trip people up as they begin learning how the Drupal community works on code together.

Here is the text of the original comment that this podcast is based on (along with some handy links we've added):

Technical Newbs Podcast?

(asked by KeyboardCowboy)
I've been developing in Drupal since 5.2 but only within the last couple of years have really gotten involved in contributing and trying to be more involved in the community. I know the docs and resources out there on this are plentiful, but I would love to hear some Drupal experts talk about some of the finer points of collaboration and contributing, such as how they got started and their current process now.

I don't have much free time, but I want to help with D8 and Drupal is the first collaborative system I've worked in, so removing the grey area around these points could be the push I need to dive in more quickly.

1. What's your process to create a patch? Submit a patch? Test a patch?

2. How (if at all) does this process differ between Contrib and Core?

3. How big can patches get and how do you handle the big ones?

4. Can more than one person work on the same patch? If so, how do you handle conflicts?

  • Interdiff: Show a diff between two patches so that you can see what's changed

5. What, exactly, do each of those statuses mean in the issue queue and who is responsible for changing the status of an issue?

6. What was/is your biggest challenge in collaborating on Drupal projects/issues/bugs/features?

7. How do you decide on a release state for a project (alpha, beta, rc)?

And I'm sure I could think of others. Just thought I would pose that as an eager developer with limited time. Thanks again for keeping these podcasts going.

Ask away!

If you want to suggest your own ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Contact us page

Release Date: January 25, 2013 - 9:17am


Length: 52:02 minutes (30.2 MB)

Format: mono 44kHz 81Kbps (vbr)

Dec 31 2012
Dec 31

Make 2013 safer with a backup plan for your site

Ah, New Years Eve. It's the time for parties, reminiscing, and the old tradition of New Years resolutions. We promise ourselves we'll exercise, read that important book, donate to charity, or -- depending on the number of web sites we're responsible for -- vow that this is the year to set up a backup plan. Few things, after all, are more terrifying than the realization that a server hiccup has wiped out a web site, or a hasty change deployed to the live site has nuked important content. Fortunately, there's a module that can help. Backup and Migrate offers site builders a host of options for manually and automatically backing up their sites' databases -- and integrates with third-party backup services, to boot!

Screenshot of Backup and Migrate settings

Backup and Migrate offers a number of options for preserving your Drupal site's database, but all of them revolve around three important choices. You can choose to backup just a select group of database tables, or the whole kit and kaboodle -- useful for ditching the unwieldy cache and watchdog tables that Drupal can easily recreate. You can choose the destination for your backup -- drop it into a private directory on your server, download it directly to your computer if you're performing a manual backup, or send it to several supported third-party servers. And finally, you can run backups manually or schedule them to occur automatically.

In its simplest form, you can use the module to manually pull down a snapshot of a site's database for safe keeping, or to do local development with the latest and greatest production data. For real safety, though, you can tie it to services like Amazon S3 storage, or the new NodeSquirrel offsite backup service. Set up scheduled daily or weekly backups, tell it how long to keep existing backups around, and rest assured that regular snapshots of your site's critical data will be tucked away for safe keeping when you need them.

When disaster strikes, you can use the Backup and Migrate module to upload one of those database backups, and restore your site to the state it was in when the backup was made.

Screenshot of the Restore Backup screen

It's important to remember that Backup and Migrate won't solve all of your problems. It requires a third-party add-on module (Backup and Migrate Files) to archive and restore the site's important file uploads directory, for example. In addition, the easy one-click backup and restore process can tempt developers to forgo safe deployment strategies for new features. Just because it's possible to download a database snapshot, do configuration work on your local computer, then re-upload the snapshot to the live server, doesn't mean it's a good idea.

That said, Backup and Migrate is an excellent tool that's proven its worth on countless sites. Its clean integration with third-party file storage services also means that it's a great transitional path towards a full-fledged backup strategy for business-critical data. If you aren't using it, check it out -- and enjoy the peace of mind that comes with keeping your site's data safe.

Dec 11 2012
Dec 11

Inline WYSIWYG editing can improve life for some content managers, but brings new problems for content-rich sites.

For several years, core Drupal contributors have been working on ways to improve the user experience for content editors. Since May of 2012, project lead Dries Buytaert and his company Acquia have been funding the Spark Project, an ambitious set of improvements to Drupal's core editing experience. One of the most eye-popping features they've demonstrated is Inline WYSIWYG editing, the ability to click on a page element, edit it in place, and persist the changes without visiting a separate page or opening a popup window.

Chances are good that inline editing functionality could make it into Drupal 8 -- specifically, an implementation that's powered by Create.js and the closely associated Aloha WYSIWYG editor. Fans of decoupled Drupal code definitely have something to cheer for! The work to modernize Drupal 8's codebase is making it much easier to reuse the great front-end and back-end work from open source projects like Symfony and Create.js.

With that good news, though, there's a potential raincloud on the horizon. Inline editing, as useful as it is, could easily be the next WYSIWYG markup: a tool that simplifies certain tasks but sabotages others in unexpected ways.

Direct manipulation: A leaky abstraction

Over a decade ago, software developer Joel Spolsky wrote a critically important blog post about user experience: The Law of Leaky Abstractions. He explained that many software APIs are convenient lies about more complex processes they hide to simplify day-to-day work. Often these abstractions work, but just as often the underlying complexity "leaks through."

One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to.

The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying "learn how to do it manually first, then use the wizzy tool to save time." Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don't save us time learning.

And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.

Those words were written about APIs and software development tools, but they're familiar to anyone who's tried to build a humane interface for a modern content management system.

At one extreme, a CMS can be treated as a tool for editing a relational database. The user interface exposed by a CMS in that sense is just a way of giving users access to every table and column that must be inserted or updated. Completeness is the name of the game, because users are directly manipulating the underlying storage model. Any data they don't see is probably unnecessary and should be exorcised from the data model. For those of us who come from a software development background this is a familiar approach, and it's dominated the UX decisions of many open source projects and business-focused proprietary systems.

At the other extreme, a CMS can be treated as an artifact of visual web design. We begin with a vision of the end product: a photography portfolio, an online magazine, a school's class schedule. We decide how visitors should interact with it, we extrapolate the kinds of tasks administrators will need to perform to keep it updated, and the CMS is used to fill those dynamic gaps. The underlying structure of its data is abstracted away as WYSIWYG editors, drag-and-drop editing, and other tools that allow users to feel they're directly manipulating the final product rather than markup codes.

The editing interfaces we offer to users send them important messages, whether we intend it or not. They are affordances, like knobs on doors and buttons on telephones. If the primary editing interface we present is also the visual design seen by site visitors, we are saying: "This page is what you manage! The things you see on it are the true form of your content." On certain sites, that message is true. But for many, it's a lie: what you're seeing is simply one view of a more complex content element, tailored for a particular page or channel.

In those situations, Inline WYSIWYG editing is one of Joel Spolsky's leaky abstractions. It simplifies a user's initial experience exploring the system, but breaks down when they push forward -- causing even more confusion and frustration than the initial learning would have.

A brief interlude, with semantics

With that provocative statement out of the way, I'll take a step back and define some terminology. Because Drupal's administrative interface, the improvements added by the Spark project, and the nature of web UX are all pretty complicated, there's a lot of potential for confusion when a term like "Inline Editing" gets thrown around. There are four kinds of editing behaviors that we'll touch on, and clarifying how they differ and overlap will (hopefully) prevent some confusion.

Contextual editing

When a content editor is on a particular portion of the web site or is viewing a particular kind of content, they should have access to options and tools that are contextually relevant. If an editor visits an article on their web site, give them access to an "Edit" link for that article. If it's unpublished, they should see a "Publish" link, and so on. Contextual editing also means hiding options from users when they're inappropriate. If you don't have permission to modify an article, you shouldn't see the "Edit" link.

Well-designed contextual editing is a great thing! It puts the right tools in users' hands when they're needed, and helps prevent "option overload".

API-based editing

Rather than rendering an HTML form, API-based editing means bundling up a copy of the content object itself -- usually in a format like XML or JSON -- and sending it to another program for editing. That "client" could be JavaScript code running in a user's browser, a native mobile app, or another CMS entirely. The client presents an editing interface to the user, makes changes to the object, and sends it back to the CMS when they're done.

API-based editing is cool, too! It's not a specific user-visible widget or workflow. In fact, it could be used to deliver the very same HTML forms users are used to -- but it provides a foundation for many other kinds of novel editing interfaces.

Inline editing

Inline editing takes contextual editing a step further. When you see data on the page, you don't just have an edit link at your beck and call: you can edit the data right there, without going to another page or popup window. One common scenario is tabular data: click in a cell, edit the cell; click outside the cell, and your changes are saved. A more complex example might include clicking on the headline of an article and editing it while viewing it on the front page, or clicking on the body text and adding a new paragraph then and there. The emphasis is on eliminating context switches and unnecessary steps for the editor.

Inline editing can dramatically simplify life for users by replacing cluttered forms, fields, and buttons with direct content manipulation. However, when direct manipulation is the primary means of editing content, it can easily hide critical information from those same users. We'll get to that later.

WYSIWYG editing

"What You See Is What You Get" editing is all about allowing users to manipulate things as they will appear in the finished product rather than using special codes, weird markup, or separate preview modes. Desktop publishing flourished on 1980s Macintosh computers because they let would-be Hearsts and Pulitzers lay out pages and set type visually. WYSIWYG HTML editors have been popular with web content editors for similar reasons: finessing the appearance of content via clicks and drags is easier than encoding semantic instructions for web browsers using raw HTML.

WYSIWYG editing tools can help reduce markup errors and streamline the work of content managers who don't know HTML. Without careful restrictions, though, they can easily sabotage attempts to reuse content effectively. If a restaurant's menu is posted as a giant HTML table in the "Menu" page's Body field, for example, there's no way to highlight the latest dishes or list gluten-free recipes. Similarly, if the key photo for a news story is dropped into that Body field with a WYSIWYG editor, reformatting it for display on a mobile phone is all but impossible.

Everything in-between

Often, these four different approaches overlap. Inline editing can be thought of as a particularly advanced form of contextual editing, and it's often built on top of API-based editing. In addition, when inline editing is enabled on the visitor-visible "presentation" layout of a web site, it functions as a sort of WYSIWYG editing for the entire page -- not just a particular article or field.

That combined approach -- using inline editing on a site's front end to edit content as it will appear to visitors -- is what I'll be focusing on. It's "Inline WYSIWYG."

Inline WYSIWYG! Can anything good come from there?

Of course! Over the past year or so, anything with the word 'WYSIWYG' in it has taken a bit of a beating in web circles, but none of the approaches to content editing listed above are inherently good or bad. Like all tools, there are situations they're well-suited for and others that make an awkward fit.

As I’m writing this, I see not just a WYSIWYG editor, I see the page I’m going to publish, which looks just like the version you’re reading. In fact, it is the version you’re reading. There’s no layer of abstraction. This is a simple (and old) concept… and it makes a big difference. Having to go back and forth between your creation tool and your creation is like sculpting by talking.

That's an incredibly compelling argument for the power of WYSIWYG and inline editing. I've seen it in action on Medium, and it really does feel different than the click-edit-save, click-edit-save cycle that most web based tools require. However, and this is a big however, it's also critical to remember the key restrictions Ev and his team have put in place to make that simplicity work.

One of the reasons it's possible to have this really WYSIWYG experience is because we've stripped out a lot of the power that other online editors give you. Here are things you can't do: change fonts, font color, font size. You can't insert tables or use strikethrough or even underline. Here's what you can do: bold, italics, subheads (two levels), blockquote, and links.

In addition, the underlying structure of an article on Medium is very simple. Each post can have a title, a single optional header image, and the body text of the article itself. No meta tags, no related links, no attached files or summary text for the front page. What you see is what you get here, too: when you are viewing an article, you are viewing the whole article and editing it inline on the page leaves nothing to the imagination.

This kind of relentless focus -- a single streamlined way of presenting each piece of content, a mercilessly stripped down list of formatting options, and a vigilant focus on the written word -- ensure that there really is no gap between what users are manipulating via inline editing and what everyone else sees.

That's an amazing, awesome thing, and other kinds of focused web sites can benefit from it, too. Many small-business brochureware sites, for example, have straightforward, easily-modeled content. Many of those sites' users would kill for the simplicity of a "click here to enter text" approach to content entry.

The other side(s) of the coin

Even the best tool, however, can't be right for every job. The inline WYSIWYG approach that's used by Create.js and the Spark Project can pose serious problems. The Decoupled CMS Project in particular proposes that Inline WYSIWYG could be a useful general editing paradigm for content-rich web sites, but that requires looking at the weaknesses clearly and honestly.

Invisible data is inaccessible

Inline editing, by definition, is tied to the page's visible design. Various cues can separate editable and non-editable portions of the page, but there's no place for content elements that aren't part of the visible page at all.

Metadata tags, relationships between content that drive other page elements, fields intended for display in other views of the content, and flags that control a content element's appearance but aren't inherently visible, are all awkward bystanders. This is particularly important in multichannel publishing environments: often, multiple versions of key fields are created for use in different device and display contexts.

It encourages visual hacks

Well-structured content models need the right data in the right fields. We've learned the hard way that WYSIWYG markup editors inevitably lead to ugly HTML hacks. Users naturally assume that "it looks right" means "everything is working correctly." Similarly, inline WYSIWYG emphasizes each field's visual appearance and placement on the page over its semantic meaning. That sets up another cycle of "I put it there because it looked right" editing snafus.

The problem is even more serious for Inline WYSIWYG. Markup editors can be configured to use a restricted set of tags, but no code is smart enough to know that a user misused an important text field to achieve a desired visual result.

It privileges the editor's device

In her book, author Karen McGrane explains the dangers of the web-based "preview" button.

…There's no way to show [desktop] content creators how their content might appear on a mobile website or in an app. The existence of the preview button reinforces the notion that the desktop website is the "real" website and [anything else] is an afterthought.

Inline WYSIWYG amplifies this problem, turning the entire editing experience into an extended preview of what the content will look like in the editor's current browser, platform, screen size, and user context. The danger lies in the hidden ripple effects on other devices, views, publishing channels, and even other pages where the same content is reused.

It complicates the creation of new items

Create.js and the Spark Project also allow editors to create new content items in place on any listing page. This is a valuable feature, especially for sites dominated by simple chronological lists or explicit content hierarchies.

On sites with more complex rule-based listing pages, however, the picture becomes fuzzier. If an editor inserts a new piece of content on another author's page, does that author become the content's owner? On listing pages governed by complex selection rules, will the newly-inserted item receive default values sufficient to ensure that it will appear on the page? If the editor inserts new content on a listing page, but alters its fields such that the content no longer matches the listing page's selection rules, does the content vanish and re-appear in a different, unknown part of the web site?

In addition, multi-step workflows accompany the creation of content on many sites. Translating a single piece of content into several legally mandated languages before publication is necessary in some countries, even for small web sites. Approval and scheduling workflows pose similar problems, moving documents through important but invisible states before they can be displayed accurately on the site.

Complexity quickly reasserts itself

Many of the problems described above can be worked around by adding additional visual cues, exposing normally hidden fields in floating toolbars, and providing other normally hidden information when editors have activated Inline WYSIWYG. Additional secondary editing interfaces can also be provided for "full access" to a content item's full list of fields, metadata, and workflow states.

However, the addition of these extra widgets, toolbars, hover-tips, popups, and so on compromises the radical simplicity that justified inline editing in the first place. On many sites, a sufficiently functional Inline WYSIWYG interface -- one that captures the important state, metadata, and relational information for a piece of content -- will be no simpler or faster than well-designed, task-focused modal editing forms. Members of the Plone team discovered this was often true after adding Inline WYSIWYG to their CMS: after maintaining the feature for several versions, they removed it from the core CMS product.

To reiterate Ev Williams' vision for Medium,

There’s no layer of abstraction. This is a simple (and old) concept… and it makes a big difference. Having to go back and forth between your creation tool and your creation is like sculpting by talking.

In situations where Inline WYSIWYG can't live up to that ideal, it paradoxically results in even more complexity for users.

In conclusion, Inline WYSIWYG is a land of contrasts

So, where does this leave us? Despite my complaints, both Inline and WYSIWYG editing are valuable tools for building an effective editorial experience. The problem of leaky abstractions isn't new to Drupal: Views, for example, is a click-and-drag listing page builder, but it requires its users to know SQL to understand what's happening when problems arise. As we consider how to apply the tools at our disposal, we have to examine their pros and cons honestly rather than fixating on one or the other.

The combined Inline WYSIWYG approach can radically improve sites that pair an extremely focused presentation with simple content. But despite the impressive splash it makes during demos, Inline WYSIWYG as a primary editing interface is difficult to scale beyond brochureware and blogs. On sites with more complex content and publishing workflows, those training wheels will have to come off eventually.

Is Inline WYSIWYG right for Drupal core? While it can be very useful, it's not a silver bullet for Drupal's UX werewolves. Worse, it can actively confuse users and mask critical information on the kinds of data-rich sites Drupal is best suited for. Enhanced content modeling tools and the much-loved Views module are both built into Drupal 8; even new developers and builders will easily assemble sites whose complexity confounds Inline WYSIWYG.

At the same time, the underlying architectural changes that make the approach possible are incredibly valuable. If Drupal 8 ships with client-side editing APIs as complete as its existing server-side edit forms, the foundation will be laid for many other innovative editing tools. Even if complex sites can't benefit from Inline WYSIWYG, they'll be able to implement their own appropriate, tailored interfaces with far less work because of it.

Like WYSIWYG markup editors, design-integrated Inline WYSIWYG editing is an idea that's here to stay. Deciding when to use it appropriately, and learning how to sidestep its pitfalls, will be an important task for site builders and UX professionals in the coming years. Our essential task is still the same: giving people tools to accomplish the tasks that matter to them!

Nov 16 2012
Nov 16

Listen online: 

In this episode Addison Berry talks with Bryan Hirsch (bryanhirsch), Kay VanValkenburgh (kay_v), and Brock Boland (BrockBoland) about the Drupal Ladder, a community project to get more people involved in Drupal core, while also learning valuable skills for their Drupal work. What is it, what's the goal, and the whys and hows about jumping in are covered.

Podcast notes

If you want to suggest ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Contact us page

Release Date: November 16, 2012 - 10:00am


Length: 36:39 minutes (21.21 MB)

Format: mono 44kHz 80Kbps (vbr)

Nov 07 2012
Nov 07

If you have ever worked with views and columns in Drupal, you may have had the unfortunate and rather difficult task of getting the columns to render out just the way you really want them to. After some research, I believe I've found a workable approach.

The most common approach is probably cutting the row widths in half and then floating them left. This gives the "illusion" that they are actually in "columns". This works great if you want the view's row results to read left to right. However, I recently came upon the need to have the view rows stack on top of each other and then overflow into a second column.

Many of you might be thinking, "Why didn't he just use a Grid style plugin in Views? That is, after all, what it's there for!"

Answer: Responsiveness!

Grids render in table markup. Anyone that has ever tried theming a table to be responsive knows it's a lot of work to override default table positioning behaviors.

There had to be a better approach to getting these columns to actually function the way I needed. A quick Google search turned up the Views Column Class module. Great! This looked very promising! I started to experiment with it and quickly realized that it adds classes via a custom view style. Yes, this could work. However, it's still the same technique we use for creating left-to-right "faux columns" – relying on classes to float left or right. What I really needed was to alter the markup that was generated: to separate the rows into columns that each have their own wrapping markup. This would allow for easier manipulation and styling of the actual columns. I started searching some more.

I finally came across this wonderful blog by Amanda Luker: Flowing a list view into two columns. Granted it's a little over a year old, but it definitely got me on the right path! The technique is rather solid, but I'm not really a fan of using preg_replace(). For performance reasons, it's better to generate the correct markup in the first place. No need to generate markup only to replace it later. This is Drupal after all!

Not only that, I also needed to accomplish the following:

  1. Easier way to manage which views get processed and converted into columns
  2. Standardized view classing for the columns (including zebra striping and first/last classes). This is very useful for theming purposes.
  3. Dynamic columns: the ability to produce any number of columns, not just two.

In the end, I felt some rewriting was in order; this approach helps with all the previously mentioned tasks. Below are the source code files (as Gists) needed to start preprocessing Views columns:

  1. template.php - Contains the preprocess hook needed for your theme. I figured preprocessing an unformatted list would probably be the easiest.
  2. views-view-unformatted.tpl.php - The template file needed for your theme.
  3. views-columns.less & views-columns.css - These are supplemental: base styling for two and three evenly sized columns at tablet sizes and up.
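As a rough sketch of the technique (not the exact Gist code; the theme and view names here are illustrative), the preprocess splits the view's rendered rows into chunks that the template can then wrap in per-column markup:

```php
/**
 * Implements template_preprocess_views_view_unformatted().
 *
 * Illustrative sketch: splits rendered rows into columns so the
 * template can wrap each chunk in its own column markup.
 */
function mytheme_preprocess_views_view_unformatted(&$vars) {
  // Map of view names to the number of columns each should get.
  // Managing this list covers task #1 above.
  $columnized = array('news_listing' => 2, 'staff_listing' => 3);
  $name = $vars['view']->name;
  if (!isset($columnized[$name]) || empty($vars['rows'])) {
    return;
  }
  // Split rows into N roughly equal chunks (task #3). The template
  // loops over $vars['columns'] instead of $vars['rows'], adding
  // first/last and zebra classes per column (task #2).
  $count = $columnized[$name];
  $vars['columns'] = array_chunk($vars['rows'], ceil(count($vars['rows']) / $count), TRUE);
}
```

Because the splitting happens before rendering, no markup is generated only to be replaced later with preg_replace().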

I have tried to document these files fairly well; however, if you find that additional documentation is needed, please feel free to comment below or fork the Gists!

Nov 05 2012
Nov 05

Frustration has motivated me to write this post. It covers a number of issues I have with Pantheon, but I will address the good things first.

The good bits (Some of which you may already know)

Innovative: I feel Pantheon is doing a great thing. They're helping Drupal developers simplify their workflow. They've created a service that stands above most other hosting services that currently exist. Additionally, Pantheon exudes the culture of listening to the Drupal community and continually trying to improve their product.

User Interface: For the most part, the user interface is pretty good. You're able to find things that you need without too much searching. It's not perfect and it could be improved.

Multi-tenant deployment is cheap: Deploying a multi-tenant structure is easy and takes little time and effort. It's just a couple of clicks away. You upload your database, then your files folder, and by the time you've fetched your favorite warm morning beverage you have a new Drupal installation.

Upstream updates: Being able to run updates for Drupal core is awesome. A single click lets you update code quickly without having to click through the interface, although drush is equally sufficient for this task.

Quick backup: With one click you're able to execute a complete backup. The backup is also conveniently compressed for quick download.

The bad bits

Pulling in changes: Moving from dev/staging/live requires clicking. I may be in the minority but I really enjoy using git purely through the command line interface. Having to pull in changes in your testing environment by clicking can get old quickly.

Drush / SSH-less access: While some innovative folks in the contributed module space have created a Drupal project for drush access, it still limits what you can do. I understand the limitations exist due to legitimate security concerns. Still, without drush or SSH access, things can often be a burden for Drupal developers. I would much prefer the ability to sync databases and files via a command-line interface. I know Pantheon does a great job of creating user interfaces to replace this, but being able to `drush cc all` is superior in my opinion.

Working the Pantheon way: Using Pantheon, you're tied to a specific workflow. This includes exporting the database, downloading it from the browser, and then importing it into your machine. This is OK the first 10 times; after a while it gets quite old. I would much rather use `drush sql-sync` for this.
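For comparison, the drush workflow I'd prefer looks roughly like this (the @live and @local site aliases are hypothetical and would need to be defined in an aliases.drushrc.php file):

```shell
# Pull the remote database straight into the local site.
drush sql-sync @live @local

# Pull user-uploaded files as well.
drush rsync @live:%files @local:%files

# Clear all caches on the local copy.
drush @local cc all
```

Two commands and a cache clear, versus export, browser download, and manual import.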

New Apollo Interface: The new Apollo interface has too many tabs and fancy dropdowns. Changing the environment now requires two clicks: open the dropdown, pick your environment, then pick the tab on the left side. Someone went a little crazy with Twitter Bootstrap. I would rather see a longer page; the tabs and dropdowns often obscure where you need to go and what you need to do. You also have to re-learn yet another workflow, which is a slight curveball.

503 Errors: This issue was the most problematic. On one of our website setups, an unhelpful 503 error appears every time you visit the Features page or try to clear the cache. This instantly became an impediment to our team's productivity. We've posted a ticket; however, the ticket process has been rather slow. Different techs have come in, passed the issue along, and escalated it each time, but we have yet to see a resolution. (We're on day 7 of this problem.) Being able to call, wait an hour or two, and get it resolved then would be a more efficient use of my time, especially when something like this is impeding our project.

In the end it's up to you

Overall, it all depends on the workflow and tastes of the Drupal developer. Pick and choose what works for you. For some people, Pantheon is the right service and tool for the job. For me, I would much prefer more granularity and control. I really want Pantheon to succeed; it's fulfilling a need that exists in the Drupal community. Hopefully they'll continue to improve their product, and I'll give them another shot later on. At the moment, it's not what I'm looking for.

Do you agree, disagree or have comments? Please let us know below.

Oct 23 2012
Oct 23

A case study in re-platforming latingrammy.com


The previous version of the site was built on Ruby on Rails in a custom content management system that allowed administrators of the site to have content types, categories, and even translations. This was all stored in a MySQL database with a simple schema that allowed for a pretty easy migration into Drupal.

The task was to re-create the entire site within Drupal, keeping the exact same content types, content, and existing design, with the exception of a new homepage, new header, and new footer. The new homepage design is based on the current Grammy.com homepage design.

Previous Homepage:

Newest Homepage:

The Recording Academy wanted to re-platform this on Drupal for a couple of reasons. First, since Grammy.com is on Drupal 6 and will be upgraded to Drupal 7 before the next annual awards show, we could leverage some of the work from building the Latin Grammys website with Drupal 7 for the upgrade of Grammy.com itself. Second, it brings the site more in line with Grammy.com in terms of content editing and advertising capabilities, so future enhancements made on Grammy.com can be leveraged here as well.


For the migration work on this project, we decided to create a custom drush command that could run the various migration scripts. The scripts are split out by content type, each migrating the pertinent information for a particular type. They consist of simple functions that run bootstrapped Drupal code to take data from the old Ruby database and transform it for the new Drupal database.

You can find the code for this custom drush command, $ drush lmi, in this gist.
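For orientation, the skeleton of such a Drush 5 command looks roughly like this (a simplified sketch, not the gist itself; the commandfile name lagra_migrate and per-type function names are hypothetical):

```php
/**
 * Implements hook_drush_command().
 */
function lagra_migrate_drush_command() {
  $items['lmi'] = array(
    'description' => 'Migrate one content type from the legacy Ruby database.',
    'arguments' => array(
      'type' => 'The content type to migrate, e.g. "nominee".',
    ),
    // A full Drupal bootstrap is needed to save nodes and terms.
    'bootstrap' => DRUSH_BOOTSTRAP_DRUPAL_FULL,
  );
  return $items;
}

/**
 * Command callback for "drush lmi": dispatch to a per-type migration.
 */
function drush_lagra_migrate_lmi($type) {
  $callback = 'lagra_migrate_' . $type;
  if (function_exists($callback)) {
    $callback();
  }
  else {
    drush_set_error('LAGRA_UNKNOWN_TYPE', dt('Unknown migration type: @type', array('@type' => $type)));
  }
}
```

Each per-type function then reads from the legacy database and writes Drupal nodes or terms.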

We then have one simple bash script that runs drush commands in succession, like so:

# Clean install using our installation profile written with profiler.
drush site-install lagra -y
drush upwd admin --password="admin"
drush fra --force -y
drush dis overlay -y
drush cc all

# Migrate taxonomy terms.
drush lmi tag
drush lmi genre
drush lmi award

# Migrate content.
drush lmi event
drush lmi nominee
drush lmi page
drush lmi photo
drush lmi podcast
drush lmi press_release
drush lmi sponsor
drush lmi video import

You may also notice that within that bash script, we are installing Drupal from scratch with a custom install profile called 'lagra'. This installation profile is using Profiler and does some simple enabling of core modules, contrib modules, custom features, and sets a few initial variables within the variables table. If you haven't looked into using Profiler on your projects, you should. It allows you to do some pretty complex operations with very simple syntax right in your profile's .info file.

You can find the code for this custom install profile in this gist.
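To illustrate the kind of syntax Profiler enables (a hypothetical excerpt, not the actual lagra profile), a profile's .info file can declare dependencies and set variables directly:

```ini
; Hypothetical Profiler-based .info excerpt.
name = Latin Grammys
core = 7.x

; Enable core and contrib modules.
dependencies[] = views
dependencies[] = features

; Profiler extension: set initial variables right in the .info file.
variables[site_name] = Latin Grammys
variables[clean_url] = 1
```

Profiler reads these extra keys at install time, so the profile's PHP code stays minimal.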

What this all leads to is a relatively simple and quite efficient way to completely reinstall the entire project from scratch at any given point. All of the features of the site are in custom Features modules, and the migration scripts run automatically, so a fresh install of the project is as easy as running $ ./docroot/scripts/reset.sh at any given time. We found this led to a very rewarding workflow during the entire duration of the project.


Another huge requirement for the site (in fact, one we underestimated) was the use of three different languages on the site: English, Spanish, and Portuguese. For this we had to add a whole slew of i18n_* modules. Here's the list:

  • Internationalization (i18n)
  • Block languages (i18n_block)
  • Menu translation (i18n_menu)
  • Multilingual content (i18n_node)
  • Path translation (i18n_path)
  • String translation (i18n_string)
  • Taxonomy translation (i18n_taxonomy)
  • Translation sets (i18n_translation)
  • Variable translation (i18n_variable)
  • Views translation (i18nviews)
  • I18n page views (i18n_page_views)


The old Ruby system had a very competent system for managing translations of content. However, not all content in the system was translated, and so there were some challenges in making sure that we could switch to a different language and not have the page show up empty simply because the content was not translated for that particular views listing. The listing being empty was not ideal from a user's standpoint, nor from a content editor's standpoint. So we opted to do a little extra work in the migration scripts for the particular content types that did not have translations. We simply created the English version of the nodes, and at the same time used the same copy for the translations. This resulted in populated views for each language, and an ideal way for content editors to browse the listings and quickly know what content needed to be translated, hit an edit button, and then translate.

Another challenge we didn't account for was all of the strings that are not translated automatically. The String translation module was easy enough to use for most cases, but for the times we found it lacking, we ended up using the String Overrides module to fill in the gaps.

We also found that keeping string translations in code was problematic. We opted for using the $conf array like so:

global $conf;
$conf['locale_custom_strings_es'][''] = array(
  "You must be logged in to access this page." => "Debes estar registrado para acceder a esta página.",
);

However, this also became an issue once we decided to start using the String Overrides module because the $conf array would simply override any translations we would put in the String Overrides UI. So, we opted to use this $conf array method to get the values into the database initially (hitting save in the String Overrides UI) and then just remove the $conf array. Then, we could use the UI to translate the few strings that remained.

This is still something we haven't solved, so if you have any suggestions on easily keeping string translations in code (à la Features), we would love to know about it in the comments!


The following were involved in this project at various stages and in various roles:

In order of code contributions.


Brock Boland: Developer at Lullabot


Jerad Bitner: Senior Developer & Technical Project Organizer at Lullabot


David Burns: Senior Developer at Lullabot


Angus Mak: Developer at Lullabot


Ben Chavet: Systems Administrator at Lullabot


Bao Truong: Software/Web Developer at The Recording Academy

Oct 19 2012
Oct 19

Listen online: 

Join us in this podcast with Angie Byron (webchick), co-maintainer for Drupal core and general cat-herder for the community. Addi and Angie discuss what exactly she does day to day, the cool Spark project and Drupal 8, in addition to a peek into her non-Drupal life. We also put a couple community questions to Angie:

  • What about D7 would you do differently in hindsight?
    - @kattekrab on Twitter
  • What is your best tip to handle time, having so many open fronts?
    - @juampy72 on Twitter

If you want to suggest ideas for podcasts, or have questions for us to answer on the podcast, let us know:
Contact us page

Podcast Notes

Release Date: October 19, 2012 - 9:00am


Length: 50:35 minutes (35.14 MB)

Format: mono 44kHz 97Kbps (vbr)

Oct 17 2012
Oct 17

After a little over 9 months, and with an impressive 1,290 sites reporting that they use it, Menu Views has undergone a little nip and tuck! Today I have finally released a new version, in hopes of squashing the ever-so-annoying bugs and fulfilling the wonderful feature requests! This module has been an invaluable tool in the mega-menu creation process. It has solved a problem for many people: how to insert a view into the Drupal menu system.

Many Drupal sites I've seen throughout the years (those with complex mega-menus) left me perplexed as to how they accomplished this task. I could never really imagine it happening effectively unless it was rendered using Views. When I finally saw how some of these sites actually accomplished this great feat, I was also a little baffled at the sheer complexity of making it happen.

Oftentimes, the theme is the unsuspecting and unfortunate victim: hacked, sliced, and injected with arbitrary code to render the desired result. Some prefer to do it this way, and to them I say "be my guest". However, when a more dynamic approach is needed, it is far better to utilize the awesome power of Drupal itself. Which leads me to reiterate what a CMS is for: letting the system actually manage the content (a novel idea).

Menu Overview Form

Eureka! Let Drupal's own menu system and administrative user interface handle this complex problem! When I first released Menu Views (1.x) that is what it solved: inserting a view into the menu. However, it also introduced quite a few other problems that were unforeseen and rather complicated to fix. Namely these involved other contributed modules and the biggest native one: breadcrumbs!

Over the past few months, I really started digging into the complexity that is the menu system in Drupal, trying to figure out what I could do to simplify how the replacement of links is intercepted and rendered. After poring over the core menu module and several contributed modules, I noticed a commonality in the best approaches: theme_menu_link().

In Menu Views 1.x I was intercepting theme_link() instead. In hindsight, I can't believe how incredibly shortsighted that was! Essentially, the old method intercepted every link on the site on the off chance it might be a menu link with Menu Views configuration in the link's options array. Whoa... a major performance hit and a big no-no! That alone is why I decided to do a full version bump on Menu Views. Part of this decision was to consider existing sites and how they may have already compensated for views popping up everywhere. An additional deciding factor involved refactoring the entire administrative user interface experience.
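The shape of the theme_menu_link() interception can be sketched like this (a simplified illustration, not the module's actual code; the menu_views options structure shown is an assumption):

```php
/**
 * Sketch of intercepting menu links at theme_menu_link() rather than
 * theme_link(), so only menu items are ever inspected.
 */
function menu_views_menu_link(array $variables) {
  $element = $variables['element'];
  $options = isset($element['#localized_options']) ? $element['#localized_options'] : array();
  // Only act when this menu item has been configured as a "view" item.
  if (!empty($options['menu_views']['type']) && $options['menu_views']['type'] === 'view') {
    $config = $options['menu_views']['view'];
    $view = views_get_view($config['name']);
    if ($view && $view->access($config['display'])) {
      // Render the attached view in place of the link.
      return '<li class="menu-item-view">' . $view->preview($config['display']) . '</li>';
    }
    return '';
  }
  // Anything else falls through to core's default menu link rendering.
  return theme_menu_link($variables);
}
```

Because only menu links pass through here, the per-link cost of the 1.x approach disappears.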

Menu Item Type

In Menu Views 1.x, there seemed to be a lot of confusion around "attaching" a view and optionally hiding the link using <view> in the link path. I thought about this for a while. Ultimately, I decided that the best way to solve this would be to separate the view form from the link form and give the administrator the option to choose what type of menu item it should be. This way, Menu Views can more accurately determine when to intercept the menu item and render either the link or a view.

There are now a couple of options to better manage what the breadcrumb link actually outputs, along with the ability to render a title outside of the view if desired. Both can use tokens! No longer are we left stuffing extraneous markup into the view's header. Last but not least, one feat of UI marvel: node add/edit forms can now control menu views, so you're no longer limited to the menu overview form!

Menu View Form

Per some user requests, I have also set up a demonstration site so you can Firebug to your heart's content: http://menuviews.leveltendesign.com. I've placed blocks on the left and right to show that it integrates well with Menu Block, Nice Menus, and Superfish.

Overall, I think Menu Views' new facelift will allow this module to reach a new level of maturity and stability that has been greatly needed. Thank you all for your wonderful suggestions in making this module a reality and truly a joy to code!

Stay tuned for next week: Theming Menu Views in Drupal 7

Oct 17 2012
Oct 17

Using a PHP debugger is a great way to effectively debug your code. A debugger gives you the ability to trace your code line by line, with the call stack and all variables available for inspection at run time. In previous versions of Mac OS X, installing Xdebug could be a hassle. The recently released Mountain Lion version makes it easy by shipping with many of the tools we need for PHP debugging.

Setting up Xdebug

OSX Mountain Lion conveniently ships with Xdebug. If you're using the earlier OSX Lion version, you can download and install Xdebug with PECL (PHP Extension Community Library) or Homebrew. Once you have Xdebug installed, we need to make it available to PHP by editing the php.ini file. To do that, open up /etc/php.ini with your text editor.

If /etc/php.ini doesn't exist, create a copy of the default file with the following command:

sudo cp /etc/php.ini.default /etc/php.ini

Then find and uncomment the following lines in /etc/php.ini
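The exact lines differ between PHP versions and builds, but they typically look like the following (the extension path is an example; verify yours in phpinfo() or with `php -i` before using it):

```ini
; Load the Xdebug extension (path varies by PHP version/build).
zend_extension="/usr/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so"

; Allow the IDE to connect to the debugging session.
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9000
```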


Restart Apache, then make sure Xdebug is loading by looking at phpinfo.




Finally, we have to tell the IDE how to communicate with Xdebug. In this example I'm using PhpStorm, but you can use any compatible IDE. Start by setting up a new project; once that's done, we'll configure the debugger.

Edit Configuration

First, add a new PHP Web Application

PHP Web Application

Add a new server; your Host will differ depending on your setup.


Your configuration should look something like this, where http://drupal7.local/ is the address of your Drupal instance:


Now we can run the Debugger


The browser should now open http://drupal7.local?XDEBUG_SESSION_START=xxxxx and the debugger should open in PhpStorm. To make sure the debugger is working properly, let's set a breakpoint in index.php.


Refresh the browser and the debugger should invoke and stop at the break point.


We're now ready for some PHP debugging!

Oct 15 2012
Oct 15

Almost a year ago, Module Monday featured the Views Datasource module, a tool for exposing custom XML and JSON feeds from a Drupal site. It's a great tool, but what if you need something a bit more... human-readable? A .doc file, or a spreadsheet for example? That's where the similarly-named but differently-focused Views Data Export module comes in.

Screenshot of administration screen

Views Data Export provides a custom View display type optimized for producing downloadable files in a variety of formats. Need a Microsoft Excel file listing the latest 100 posts on your site? Looking for a Microsoft Word .doc file that collects the text of the site's top-rated articles? No problem. The module supports XLS, DOC, TXT, CSV, and XML exports, with format-appropriate configuration options for each. In addition, it uses Drupal's batch processing system when asked to generate extremely large export files, rather than timing out or running out of memory.

Screenshot of resulting change to site

Whether you're putting together reports, giving users an easy way to download product data sheets, or conducting a quick, ad-hoc audit of your site's content, Views Data Export is a handy tool to have around. Its batch processing support makes it more reliable on large sites, and its support for a variety of "day to day" file formats means you'll spend less time juggling your data and more time making use of it.

Oct 05 2012
Oct 05

Listen online: 

Listen in as Addi talks to five Drupal 8 initiative owners about the amazing new things coming in Drupal 8, why they matter, and what you can do to start learning about them now, and give a helping hand. We are joined by Gábor Hojtsy (Gábor Hojtsy), Larry Garfield (Crell), Kris Vanderwater (EclipseGc), John Albin Wilkins (JohnAlbin), and Greg Dunlap (heyrocker), in a series of separate interviews.

Multilingual Initiative

Web Services and Context Core Initiative (WSCCI)

Blocks and Layouts Initiative (SCOTCH)

Mobile Initiative

Configuration Management Initiative

Release Date: October 5, 2012 - 9:00am


Length: 68:09 minutes (47.29 MB)

Format: mono 44kHz 97Kbps (vbr)

Sep 27 2012
Sep 27

Lullabot's Drupal training site turns 2

It's been almost 2 years since we launched Drupalize.Me and I'd like to take a moment to appreciate some of the site's recent accomplishments.

Over 600 Videos

A few weeks ago, Drupalize.Me product manager Addison Berry announced the 600th video posted to Drupalize.Me! Members now get access to over 233 hours of content on an immense range of Drupal-oriented topics from simple content creation to site building tricks and database server performance optimization. Drupalize.Me's most popular videos cover coding for and using Views, using the Calendar and Date modules, configuring WYSIWYG editors, Display Suite, Organic Groups, Drupal 7 module development, and more. The Drupalize.Me team has been really amazing – posting new videos every week and paying close attention to member requests for new topics.

Over 2,000 Subscribers

Word about Drupalize.Me has spread and I often see people on Twitter telling one another that for them, Drupalize.Me has become the way to learn and keep up with Drupal techniques. Drupalize.Me has iPhone/iPad, Android, and Roku apps so members can watch the videos on their mobile devices or televisions. Drupalize.Me has also partnered with Acquia to offer discounted memberships to Acquia Network subscribers.

As word has been getting around about Drupalize.Me, subscriber numbers have been growing and recently crossed 2,000 simultaneous subscribers. We're reaching more people on a monthly basis than most large-scale Drupal events. We couldn't be more excited about the response!

Drupalize.Me now has a staff of 3 full-time people creating videos, maintaining the site, adding features and handling customer support. This team is augmented by others at Lullabot who step in to help with expert video training, development, design and support. Drupalize.Me now represents more than 15% of Lullabot's budget and has become a great outlet for the Lullabot team to share the knowledge that we've gained building the high-profile websites that represent the majority of our work.

New Features

The Drupalize.Me team has been listening closely to subscriber feature requests, and we've added lots of new features over the past 2 years. Videos are arranged into "series" collections that users can watch consecutively, and curated "guides" collect videos into a linear curriculum for different types of members. The user dashboard pages have been greatly improved, allowing users to manage their queue, see a listing of recently watched videos, and even view a line graph of their progress within the videos they've watched. The team also added the ability to store pause points, so users can resume exactly where they left off, even on a different device such as their phone or connected television.

And speaking of connected televisions, we've got apps! We've got an iOS app for your iPhone/iTouch/iPad which can AirPlay to your AppleTV. We've also got an Android app and an app for the Roku Streaming Player box. You can pick up a Roku box for as little as $50, hook it up to your television, and watch all of the Drupalize.Me videos from your couch. It's a great way to learn.

Group Memberships

Drupalize.Me also offers group memberships. If you want Drupal training for your entire group, department, company, or institution, we offer that too. Group accounts greatly reduce the price of individual accounts while still allowing each group member to manage their own video queue, resume videos, and see their own history. We offer both managed group plans and IP-range plans to allow access to all devices at a physical location such as a library or campus.

Subtitles, Transcripts & Translations

Perhaps the greatest new feature at Drupalize.Me is the ability to offer subtitles, transcripts, and translations for the videos. We have many international subscribers who, while they speak and understand English, sometimes can't keep up with the rapid-fire technical information in the videos. They've been asking for English-language subtitles and transcripts so they can follow along better. We're proud to say that we've added this functionality, as well as functionality to provide complete translations with subtitles in other languages! 60 of our recent videos, as well as the complete Introduction to Drupal guide, already have transcripts and subtitles. And all new videos published on Drupalize.Me in the future will have transcripts and subtitles.

We're currently looking for volunteers to do foreign-language translation for Drupalize.Me. If you're a bilingual Drupalist and you'd like to help bring these Drupal training videos to the world, please contact us!

Drupalize.Me & Videola

One of the Drupalize.Me team's biggest accomplishments is building the Drupalize.Me site itself. The team has built a great Drupal-based platform which manages both the permissions and delivery of adaptive bitrate streaming video; recurring subscription billing and administration; video content categorization, listing, and organization; mobile app and IPTV delivery with seamless pause-on-one-device-resume-on-another functionality; and now even multi-language subtitles and transcripts.

As we've been building Drupalize.Me, we've been funneling this work and knowledge into Videola, a platform to provide this functionality to others wanting to build subscription-based or IPTV-oriented video sites. In short, Videola is a framework for building sites like Drupalize.Me... or like Netflix, Hulu, or Amazon video-on-demand. Videola can do everything that Drupalize.Me can do and more. If you'd like to build a site like this, please contact us and we can talk to you about getting you set up with a Videola site of your own.


Addi and the rest of the Drupalize.Me team have been doing a lot of training at DrupalCons, DrupalCamps, and other events. They've been very involved in the Drupal Ladder project and have posted a series of free videos to help new Drupalers get involved with core development.

Drupalize.Me recently started its own podcast (which picks up where the long-running Lullabot Drupal Podcast left off). Every other Friday, the Drupalize.Me team is posting a new podcast with interviews and discussions to help listeners keep up with the ever-changing world of Drupal. The team is also constantly upgrading and improving the site and they've got lots of great feature ideas for the future.

I couldn't be more proud of the work that's been done by Addi Berry, Joe Shindelar, Kyle Hofmeyer, and everyone who's helped with Drupalize.Me over the past 2 years. The site just keeps getting better and better. At this rate, I fully expect that 2 years from now I'll be bragging about their Drupal training neural implants and interstellar 3D streaming. But for now, I'm really happy with where we are – doing great, having fun, and sharing knowledge with people, empowering them to do great things.

Sep 22 2012
Sep 22

Yeah, you heard me: Windows! Some of you might be thinking, “What the heck is this guy talking about?”, but hear me out. One of the best ways to get more people using Drupal is to show them that it works on Windows and can be developed on Windows easily. Windows is used by many more people than Mac or Linux. Some of you may be thinking, “oh man, I have to set up Microsoft Internet Information Server (IIS) or Apache for a Drupal Windows installation”, but I have a better solution: let Acquia do the hard work for you and use Acquia Dev Desktop. They have developed the best local Drupal hosting solution for Windows that I have found. There are other “Drupal stacks” out there, but Acquia has made a very solid development and testing tool with the Acquia Dev Desktop.

Acquia Dev Desktop screenshot

Aside from being able to set up a Drupal website quickly and easily, you can also manage your Drupal databases through phpMyAdmin, which is accessible directly through the Acquia Dev Desktop interface. When creating a new site, you can set up a local hostname, which is useful when developing multiple client websites on your local machine. Each local Drupal installation can have its own file system location, database, and local hostname, so you can keep your projects organized.

Acquia Dev Desktop create site screenshot

Acquia Dev Desktop is an all-around great tool for developing sites on Windows, as the Acquia team has done an awesome job of giving you all the tools you need as a Drupal developer. You can create sites from scratch based on the default Acquia Drupal distribution, or you can import your own site, including the file system and database. There are three different ways to import a site: one allows you to upload a dump file, one allows you to create a new database, and the other allows you to import your database from an existing MySQL server. This is very useful for working on a live site locally to make changes, or for creating new client project platforms. You should be getting the picture that Acquia Dev Desktop is what you want to use to deploy Drupal websites quickly and efficiently on the Windows operating system.

Acquia Dev Desktop screenshot Import Site

The Acquia Dev Desktop is one part of the puzzle here, but the other parts aren't very hard to set up either, so pay attention and you will see just how easy it is to develop like a Drupal ninja on Windows. Here are some quick links and descriptions of what else you'll need before I start blabbing off again. The list may seem large, but all of the apps are quick installs from setup or archive.

  • GetGnuWin32 - This package lets you run all the commands that Linux and Mac users love about their OS. You can use them on Windows too with GetGnuWin32. You will want to grab the latest version, currently 0.6.30, updated 9-7-2011. This is a managed distribution of GnuWin32 packages, which would normally have to be installed manually.
  • Drush - Drush, if you don't know, is the Drupal shell. It is just as it sounds, if you know what a Linux shell is: you can manage much of your Drupal site through the command line with Drush. You'll want to grab the Drush for Windows 5.x branch or higher.
  • Putty - Putty is required to generate the SSH key you need to connect to Git repositories on Drupal.org, Github, and elsewhere. It’s also the best tool for remote Linux/Ubuntu systems administration from a Windows machine.
  • msysgit - Git for Windows is music to the ears of anyone who does serious software development and has a Windows computer they don't want to just collect dust. This package, available from Google Code, is just what you're looking for to install Git on Windows. A handy extra that my graphic designer pointed out: there's a Git Extensions package that lets you run a Git GUI in Windows, which is pretty sweet for those who are afraid of the command line.
  • Notepad++ - Notepad++ is a free (as in "free speech" and also as in "free beer") source code editor and Notepad replacement that supports several languages and has advanced search and replace capabilities. At the time of the writing of this article, it was on version 6.1.8.
  • Filezilla - Let’s face it, not every server supports Git or other version control systems, especially for client projects. Filezilla is a free, open source FTP solution providing both an FTP client and FTP server application for Windows. Their FTP client is available for Windows, Linux, and Mac.
  • Firebug - Firebug is a web developer’s best friend, as it has all the tools you need to inspect, modify in real time, and debug website code. It also has one of the best JavaScript debuggers available for any browser.
  • DiffMerge - DiffMerge is not specific to Windows, as it’s also available for Mac and Linux, but it is a very useful tool when working with diffs. It lets you easily copy and replace code between files within an easy-to-use interface, saving copies of your originals and a new merged file.

So that is it: the basic recipe for developing Drupal sites like a Drupal ninja on Windows. Once you have all those programs installed, you will have a fairly awesome development environment set up within Windows for building Drupal websites, modules, and themes. I used to use a MacBook Pro for all of my Drupal development work, as MAMP was the best solution I had found up to that point for a local Drupal installation. I loved my MBP for what it gave me then, but I don’t always like being on a laptop, and I definitely hate typing on one. I am a classic keyboard user, which goes back to my early days on the Commodore 64.

I have been trying to set up the perfect dev environment in Windows for a while, and the applications are finally all there to do it. It may seem like a massive setup, but if you are a gamer and mostly a Windows user like myself, you will love being able to fire up your Windows 7 desktop or laptop and start cranking out Drupal code and Drupal websites just as easily as someone on a Mac or Linux could.

Well, I gotta keep it moving. Now you have some knowledge and some tools to get you building and developing Drupal websites in Windows. Make of it what you will, Mac fans, but I don't like being limited to any one way, or operating system in this case, to develop websites for my clients. Enjoy.

Sep 19 2012

While we were working on one of our upcoming projects, a new website for Stichting tegen Kanker, we had to integrate the Apache Solr module. We needed Solr for its faceted search capabilities. In combination with the FacetAPI module, which allows you to easily configure a block or a pane with facet links, we created a page displaying search results containing contact type content and a facets block on the left hand side to narrow down those results.

One of the struggles with FacetAPI is the URLs of the individual facets. While Drupal turns the ugly GET 'q' parameter into clean URLs, FacetAPI just concatenates any extra query parameters, which leads to really ugly paths. The FacetAPI Pretty Paths module tries to change that by rewriting them into human-friendly URLs.

Our challenge involved altering the paths generated by the facets, but with a slight twist.

Due to the project's architecture, we were forced to replace the full view mode of a node of the bundle type "contact" with a single search result based on the nid of the visited node. This was a cheap way to avoid duplicating functionality and wasting precious time. We used the CTools page manager to take over the node/% page and added a variant triggered by a selection rule based on the bundle type. The variant itself doesn't use the Panels renderer but redirects the visitor to the Solr page, passing the nid as an extra argument in the URL. This resulted in a path like this: /contacts?contact=1234.

With this snippet, the contact query parameter is passed to Solr which yields the exact result we need.

  /**
   * Implements hook_apachesolr_query_alter().
   */
  function myproject_apachesolr_query_alter($query) {
    if (!empty($_GET['contact'])) {
      $query->addFilter('entity_id', $_GET['contact']);
    }
  }

The result page with our single search result still contains facets in a sidebar. Moreover, the URLs of those facets looked like this: /contacts?contact=1234&f[0]=im_field_myfield..... Now we faced a new problem: the ?contact=1234 part was conflicting with the rest of the search query. This resulted in an empty result page whenever our single search result, node 1234, didn't match the rest of the search query! So we had to alter the paths of the individual facets to make them look like this: /contacts?f[0]=im_field_myfield.

This is how I approached the problem.

If you look carefully in the API documentation, you won't find any hooks that allow you to directly alter the URLs of the facets. Gutting the FacetAPI module is quite daunting. I started looking for undocumented hooks, but quickly abandoned that approach. Then, I realized that FacetAPI Pretty Paths actually does what we wanted: alter the paths of the facets to make them look, well, pretty! I just had to figure out how it worked and emulate its behaviour in our own module.

Turns out that most of the facet generating functionality is contained in a set of adaptable, loosely coupled, extensible classes registered as CTools plugin handlers. Great! This means that I just had to find the relevant class and override those methods with our custom logic while extending.

Facet URLs are generated by classes extending the abstract FacetapiUrlProcessor class. The FacetapiUrlProcessorStandard extends and implements the base class and already does all of the heavy lifting, so I decided to take it from there. I just had to create a new class, implement the right methods and register it as a plugin. In the folder of my custom module, I created a new folder plugins/facetapi containing a new file called url_processor_myproject.inc. This is my class:

  <?php

  /**
   * @file
   * A custom URL processor for cancer.
   */

  /**
   * Extension of FacetapiUrlProcessor.
   */
  class FacetapiUrlProcessorMyProject extends FacetapiUrlProcessorStandard {

    /**
     * Overrides FacetapiUrlProcessorStandard::normalizeParams().
     *
     * Strips the "q" and "page" variables from the params array.
     * Custom: strips the "contact" variable from the params array too.
     */
    public function normalizeParams(array $params, $filter_key = 'f') {
      return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
    }

  }

I registered my new URL processor by implementing hook_facetapi_url_processors() in the myproject.module file.

  /**
   * Implements hook_facetapi_url_processors().
   */
  function myproject_facetapi_url_processors() {
    return array(
      'myproject' => array(
        'handler' => array(
          'label' => t('MyProject'),
          'class' => 'FacetapiUrlProcessorMyProject',
        ),
      ),
    );
  }

I also included the .inc file in the myproject.info file:

  files[] = plugins/facetapi/url_processor_myproject.inc

Now I had a newly registered URL processor handler, but I still needed to hook it up with the correct Solr searcher, on which FacetAPI relies to generate facets. hook_facetapi_searcher_info_alter() allows you to override the searcher definition and tell the searcher to use your new custom URL processor rather than the standard one. This is the implementation in myproject.module:

  /**
   * Implements hook_facetapi_searcher_info_alter().
   */
  function myproject_facetapi_searcher_info_alter(array &$searcher_info) {
    foreach ($searcher_info as &$info) {
      $info['url processor'] = 'myproject';
    }
  }

After clearing the cache, the correct path was generated for each facet. Great! Of course, the paths still don't look pretty and contain those way too visible, way too ugly query parameters. We could enable the FacetAPI Pretty Paths module, but since the searcher uses either one URL processor class or the other, not both, it would conflict with our own. One way to solve this would be to extend the FacetapiUrlProcessorPrettyPaths class, since it is derived from the same FacetapiUrlProcessorStandard base class, and override its normalizeParams() method.
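If you do go down that road, a minimal sketch might look like the class below. Note this is an untested illustration: the class name is made up, and it assumes FacetAPI Pretty Paths is enabled so that FacetapiUrlProcessorPrettyPaths is available to extend.

```php
/**
 * Extension of FacetapiUrlProcessorPrettyPaths (hypothetical sketch).
 *
 * Keeps the pretty path rewriting, but strips the custom "contact"
 * query parameter the same way our standard processor does.
 */
class FacetapiUrlProcessorMyProjectPretty extends FacetapiUrlProcessorPrettyPaths {

  /**
   * Overrides FacetapiUrlProcessorPrettyPaths::normalizeParams().
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    // Let the pretty paths processor do its normal cleanup first.
    $params = parent::normalizeParams($params, $filter_key);
    // Then strip the 'contact' variable from the params array too.
    unset($params['contact']);
    return $params;
  }

}
```

It would then be registered and hooked up through hook_facetapi_url_processors() and hook_facetapi_searcher_info_alter(), exactly like the standard processor.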

But that's another story.

Aug 01 2012

Here at Achieve Internet, we perform a lot of system reviews for our clients to help them improve the performance and scalability of their sites.  From these numerous system reviews, we've identified 5 common things that are missing from many hosting configurations.

  • APC
  • PHP realpath_cache_size
  • MySQL Query Cache
  • Memcached
  • Drupal optimizations


APC

APC, or one of its alternatives, should be installed and configured on every web server. APC caches the compiled opcode for your PHP application. This has two main benefits:

  • Reduces the amount of time that it takes the server to render each page (pages load faster)
  • Reduces the amount of RAM that the server must use to render each page

There are two main APC settings that will have the greatest impact on your site:

  • shm_size
  • stat

The "shm_size" setting determines how much memory APC will allocate to caching your Drupal site's PHP code. A good starting point for a single Drupal 7 site is 64MB, although you should monitor APC via the apc.php page to determine the optimal value for your server and sites.

The "stat" setting determines whether APC checks the original PHP source files for modifications before using the cached version. By setting stat=0, you tell APC that the source PHP files do not change, removing the overhead of the disk I/O needed to check the status of the PHP files. A word of caution for this setting, though: only disable stat on production servers where the source code rarely or never changes. Otherwise, you'll find yourself manually clearing the APC cache after every code update.
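As a rough starting point, the two settings above might look like this in php.ini; the values are only illustrative and should be tuned against apc.php on your own server:

```ini
; APC opcode cache: starting values, monitor via apc.php.
apc.shm_size = 64M
; Only disable stat checks on production servers where code rarely changes.
apc.stat = 0
```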

PHP realpath_cache_size

Drupal sites make extensive use of include files. By default, PHP caches the file system pointers for any included file, but the default cache size is too small for Drupal sites. Increasing this setting in your php.ini will reduce the number of disk I/O operations that PHP performs while rendering pages. A good starting point for this setting is 64KB, although, as with all optimizations, you should monitor actual usage to determine the optimal value for your particular situation. You can see how much of the realpath cache is being used by creating a PHP page that calls realpath_cache_size().
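In php.ini, that might look like the fragment below; the values are only starting points, and the TTL line is optional:

```ini
; Enlarge the realpath cache for include-heavy Drupal sites.
realpath_cache_size = 64K
; How long (in seconds) realpath information is cached.
realpath_cache_ttl = 120
```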

The impact of this setting will vary greatly from site to site.  On a large Drupal site you could see anywhere from a 1% to 10% improvement in page load times.

MySQL Query Cache size

The MySQL query cache caches the results of frequently run queries, though amazingly some default MySQL installations come with it disabled. A good starting point for the query_cache_size is 128MB. The optimal value should provide a cache hit ratio of at least 95% while not leaving an excessive amount of unused memory.

A related setting is the query_cache_limit. This determines the largest query result set that can be stored in the cache. Generally, we start with the default value of 1MB and adjust up or down based on the information being cached.
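In my.cnf, those starting points might look like this (illustrative values; watch your hit ratio with SHOW STATUS LIKE 'Qcache%'):

```ini
# Enable and size the MySQL query cache.
query_cache_type  = 1
query_cache_size  = 128M
query_cache_limit = 1M
```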


Memcached

Memcached is a memory-resident data cache that is used mostly to replace the cache_* tables in the MySQL database. Installing Memcached and configuring your site to use it can have a dramatic impact on the performance and scalability of your site. On a stock Drupal installation, a test of Memcached showed that cache reads were 2x faster, cache writes 40x faster, and the number of database queries was reduced by almost 50% per page load.

Memcached can sometimes be challenging to set up and get working correctly, since there are several pieces that have to be installed and configured to work together. The three main components you need to use Memcached are:

  • The memcached daemon itself, running on one or more servers
  • The PECL memcache (or memcached) PHP extension
  • The Drupal Memcache module (http://drupal.org/project/memcache)
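Wiring Drupal 7 to a running memcached daemon is then done in settings.php. A typical minimal configuration looks something like the sketch below; the module path and server address are examples and will vary per install:

```php
// settings.php: route Drupal's cache through the Memcache module.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database so in-progress forms survive restarts.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
// One local memcached instance on the default port.
$conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');
```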

Drupal optimizations

Drupal comes with some good performance optimization options out of the box when properly configured. Here are a few:

  • Enable Anonymous page caching.
  • Enable CSS, JS, and Page Aggregation/Compression.
  • Set the page cache expiration time reasonably large, say 60 minutes or more for sites with infrequently changing content.
  • Consider using the Boost module (http://drupal.org/project/boost) if your site is mostly anonymous traffic and you cannot setup a Varnish (https://www.varnish-cache.org/) server.
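The checkbox settings above are stored as Drupal variables, so they can also be pinned in settings.php. This is a sketch using the standard Drupal 7 variable names; the values shown are examples:

```php
// settings.php: pin the core performance settings.
$conf['cache'] = 1;                      // Anonymous page caching.
$conf['block_cache'] = 1;                // Block caching.
$conf['preprocess_css'] = 1;             // Aggregate and compress CSS.
$conf['preprocess_js'] = 1;              // Aggregate JS.
$conf['page_cache_maximum_age'] = 3600;  // Page cache expiration: 60 minutes.
$conf['page_compression'] = 1;           // Gzip cached pages.
```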

While most of the out-of-the-box options work well, do not use the Statistics module. Instead, use an external analytics service such as Google Analytics.


Building a high-performance, scalable website is all about enabling your servers to render pages as quickly as possible while using as few resources as possible. These core tools are just the starting point for a well-tuned hosting environment and should be tailored to your specific situation and website.

A final word of caution: while we've provided some reasonable starting points for these optimizations, blindly dropping them into your environment without tuning for your specific needs can actually reduce the performance of your site. Once these settings are tuned for your environment, though, you will see a significant improvement in both performance and scalability.

Jul 09 2012

Globalization demands that websites serve content to very diverse audiences. Given its robust API and wide variety of contributed modules, Drupal’s language support is key to sites that require internationalization. In this blog post we will explore some of the tools used to create Drupal 7 multilingual sites, provide implementation recommendations, and give you a bit of a preview as far as what is in store in the upcoming major release of Drupal 8.

Currently, localization and internationalization needs in Drupal 7 are handled by a number of different modules. Most of these needs are supported by the Locale and Content Translation modules, which come as part of the Drupal core, but more often than not they need to be complemented by other modules such as Localization Update, Admin Language, Internationalization, Title, Entity Translation, Translation Overview and others.

Language Support

You can use Drupal’s language support in several ways. One such use case is when developing a website that is only shown in one language, containing no text or translation support. An example of this is a Spanish website with no English text or translation. Another use case is when developing a site supporting content written in several languages, again with no translation support. A final use case is when creating a website in one or more languages with translatable content included. Whatever your multilingual needs, Drupal will handle the task.

Drupal contains a number of modules providing language support, and it can become confusing if you are not aware of each module's specific purpose. Luckily for us, these modules can be classified by functionality into four major groups: a) Interface translation, b) Content language / translation, c) Configuration language / translation and d) Base language functionality and APIs.

When creating a Drupal site that will have only one language (other than English), the only two elements you need to be concerned with are the language and its interface. When you create a site that will have content in more than one language, but different pieces of content for each language audience, you should be concerned with the languages, each language's interface, and defining the language negotiation and content selection rules. The last level of complexity comes with a full multilingual site; that is, a site that has English as the base language as well as translated copies of each piece of content.

Interface Translation

These modules will assist you in providing a localized version of any text you see on your interface, including buttons and labels, excluding user generated content like blog posts, pages or articles.

Content Language / Translation

These modules will allow you to take an existing piece of user generated content (node) and identify the language in which it was written. These modules also allow you to translate the node by creating a matching node in a different language while maintaining the relationship between them.

Configuration Language / Translation

These modules will provide you with the means to override system configuration variables depending on the language being displayed. When you look under the hood, Drupal is managed and driven by a significant amount of configuration variables. A simple example of this is the variable that defines what the home page location is. If you want different pages to show up depending on the visitor’s preferred language, these modules will provide the administrative screens to be able to choose a different landing page for each language.

Base Language Functionality and APIs

These modules are responsible for language selection and language negotiation and, more importantly, provide an API that allows other modules to interact with these processes. These modules make Drupal's language support robust and extensible.

Just How Many Multilingual Modules Are There?

Drupal 7 contains many modules for language support; in fact, there are well over 50 commonly used modules to support your multilingual needs. The purpose of this blog post is not to overwhelm you but rather to provide a helpful overview of what is required to make your site multilingual. For a full reference of modules supporting multilingual sites, please check back for my next blog post.

How Did We Get Here?

A group of developers, themers, and community managers are hard at work right now preparing the Drupal 8 Multilingual Initiative, or D8MI for short.

It is important to understand how we got here and why it is important to rein in all the efforts that provide Drupal 7's language support in preparation for Drupal 8. First off, we have a multitude of modules because we have a multitude of use cases. Different projects have different needs and thus call for different solutions. These use cases have given way to very useful modules that were eventually contributed back to the community.

The Drupal community fosters collaboration rather than competition. This vibrant community embraces an approach that challenges developers to reach out to module maintainers and offer assistance to extend their modules rather than creating their own.

Tips and Recommendations

OK, now that we got the Drupal evangelism out of the way, let me give you a couple of tips that come from years of experience at building multilingual sites in Drupal.

Plan out what your translation strategy will be and stick to it. Once you have defined the way you want to handle entity and menu translation, you shouldn't try to deviate from it, as changing course later is no easy task.

Parting Thoughts

Drupal is constantly evolving and so are its translation features. There are many considerations that one must make before developing a multilingual website. Hopefully you have a better understanding of the landscape of these considerations and a preview of what is to come in the next major release of Drupal. Please check back for my next blog post where we take a deep dive into commonly used modules for developing Drupal multilingual sites, and look at some pros and cons of some of these widely used modules.

Learn More:

There are a handful of sites and books out there that cover the topic of translation and internationalization. A good starting point is http://drupal.org/documentation/multilingual, and you can follow the step-by-step guide provided at http://drupal.org/node/1268692.


Jun 29 2012

Many of you might be familiar with the module Skinr (http://drupal.org/project/skinr). It gained a lot of support back in Drupal 6 by providing an easier, albeit somewhat verbose, way of dynamically configuring how elements on your site are styled.

When I first started using Skinr, it worked as advertised; however, it ultimately left me with a bitter taste in my mouth. I felt like I was constantly trying to navigate an endless maze while blindfolded. There were options upon options and fieldsets within fieldsets. It had almost everything but the kitchen sink in it.

I have never really been one who enjoys feeling like I'm wasting my time, so I eventually scrapped this module as a potential long-term candidate for managing customizable styling. Apparently I wasn't the only one who had these concerns either.

Then the 2.x branch was born. I only started using the 2.x branch because it was the only one available for Drupal 7. They had completely scrapped the old interface and did an amazing amount of work to make Skinr easily maintainable. You can view a list of changes here: http://groups.drupal.org/node/53798.

So if any of you are like me, you probably were thinking: “Skinr, in Drupal 7? I don’t want to make this any more confusing than it has to be!” Well fear not! You can learn how to start Skinr-ing in just 7 easy steps!

Let us know if you're using this module and leave us a comment below!

May 29 2012

Being able to set custom page titles and Meta descriptions is an important part of SEO for many sites: vital to establishing topical relevancy, and therefore rankings, in the case of the former, and clickthrough rates from search results in the case of the latter.


Drupal 7 does not support custom page titles or Meta descriptions out of the box, and I have found this script to be extremely useful on a number of sites; I still find it a more robust alternative than using multiple modules. It works with Panels pages, which has continued to be an issue with the page_title module. If no custom page title is found, it falls back on a basic “drupal title | site name” scheme. The script could be edited to support other patterns.

At the end of last year I was working on adding multilingual functionality to a Drupal 7 site and needed to get this script to work with multiple languages. It now allows the setting of field-based custom titles and Meta descriptions for different languages, with each node translation able to have its own unique title and Meta description.

It works on single/undefined language sites too so this replaces the old version of the script entirely.


Simply create new text fields for your content type(s) called “title” and “meta_desc”, then paste the following code into your THEME_preprocess_html() function in template.php:

/**
 * Custom page titles and meta descriptions for SEO.
 *
 * Ver: 1.05
 * By Miles J Carter
 * http://www.milesjcarter.co.uk/blog
 * Tested with Drupal versions 7.3, 7.4, 7.8, 7.9, 7.12, 7.14
 * Licensed under the GPL license:
 * http://www.gnu.org/licenses/gpl.html
 * Works with all content types that have
 * text fields created named "title" and "meta_desc".
 * Insert into THEME_preprocess_html function in template.php.
 */

// Extract language value for multilingual sites.
global $language;
$lang = $language->language;

// Find the node.
$node = $vars['page']['content']['system_main'];

if (isset($node['nodes'])) {
  $node = $node['nodes'];

  // Extract the key value for the node ID.
  if (list($key, $val) = each($node)) {
    // Node object variable.
    $node = $node[$key]['#node'];

    // Assign the page title field content to a variable, if set.
    if (isset($node->field_title)) {
      $node_title = $node->field_title;
      if (isset($node_title[$lang]['0']['value'])) {
        $seo_title = $node_title[$lang]['0']['value'];
      }
      // Fall back on the undefined language if nothing is set for this language.
      elseif (isset($node_title['und']['0']['value'])) {
        $seo_title = $node_title['und']['0']['value'];
      }
    }

    // If a manual SEO title has been set, set the title to
    // [seo-title] | [site-name], unless the title is too long.
    if (isset($seo_title)) {
      if (strlen($seo_title) < 65) {
        $vars['head_title'] = implode(' | ', array($seo_title, variable_get('site_name', '')));
      }
      else {
        $vars['head_title'] = $seo_title;
      }
    }
    // If the SEO title field is not set, use an automatic
    // [current-page-title] | [site-name] scheme.
    else {
      $vars['head_title'] = implode(' | ', array(drupal_get_title(), variable_get('site_name', '')));
    }

    // ----- Custom meta description (uses $node from the code above) -----

    // Assign the meta_desc field content to a variable, if set.
    if (isset($node->field_meta_desc)) {
      $node_desc = $node->field_meta_desc;
      if (isset($node_desc[$lang]['0']['value'])) {
        $seo_desc = $node_desc[$lang]['0']['value'];
      }
      elseif (isset($node_desc['und']['0']['value'])) {
        $seo_desc = $node_desc['und']['0']['value'];
      }
    }

    if (isset($seo_desc)) {
      // Create a meta description element array for insertion into the head.
      $element = array(
        '#tag' => 'meta',
        '#attributes' => array(
          'name' => 'description',
          'content' => $seo_desc,
        ),
      );
      // Insert the element into the head (only if the field has a value).
      drupal_add_html_head($element, 'meta_description');
    }
  }
}
// Uncomment to set a custom title pattern for non-node content.
# else {
#   $vars['head_title'] = implode(' | ', array(drupal_get_title(), variable_get('site_name', '')));
# }

// ------- END SEO CODE ---------

If nothing changes, flush your caches.

I hope this is useful to other SEOs and Drupal developers – follow me on Twitter to stay tuned for updates (I may see if it’s possible to turn this into a module). Please report any bugs in the comments.


1.03 – 30/5/2012 – Minor tidying of code and comments, fall-back pattern now applies to nodes with no title set rather than non-node content (moved a parenthesis to where it should have been)

1.04 – 31/5/2012 – Addition of optional custom pattern for non-node content (commented out by default)

1.05 – 24/7/2012 – Only shows ” | sitename” on the end of the custom title if the custom title is less than 65 characters in length (to help prevent title overflow in search results) – let me know if you think this is not improved functionality.

May 22 2012

We have some really fantastic clients here at LevelTen and have worked very hard with them to explain how web development works. We find, though, that many people just have no idea what the process is about. Over the last couple of years of working with a fantastic team here at LevelTen, it has occurred to me just how similar building a website is to building a house. It turns out this analogy helps a lot in explaining what we do and how we do it. It also helps set expectations along the way.

If you were to start building a house today, would you start by hiring an interior decorator? Of course not! You also wouldn't ask them to design the structure of your house and yet this is often what happens when building websites. It is important to gather together all the right people necessary to build a website just like when building a house.

General Contractor/Project Manager

First there is the general contractor. For a house, he is the main contact that you, the customer, will have with all of the other people building the house. You may talk to the other people, but at the end of the day he is the one responsible for the whole project and for making sure that everyone has what they need and works together. At LevelTen we call this person the Project Manager.

Next up you would probably go talk to an architect to design the house. What's interesting at this point is that no one is really talking all that much about the color of the paint on the walls and which pictures go where. Generally you are just trying to get an outline of the house, which rooms go where and how big they are. Then there is the engineering of making sure everything is livable and works right. At LevelTen, this is the Information Architect/Wireframers we have. Their main job is to talk to the client, get a good idea of the breadth of the project and then design the whole thing so it will work.

Architect/Information Architect

It is really important when talking to an Information Architect to make sure that you don't leave stuff out. Could you imagine getting near the end of building your new house and then remembering that you wanted a media room right in the middle? How much would it cost to squeeze another room in there? That happens all the time in the web world. If we don't know what is going on up front, then we can't plan for it, and it is going to take a lot of effort (read: time and money!) later on.


Builders/Drupal Developers

Next come the actual builders. They lay the foundation, raise the walls, and make sure the plumbing and wiring are done. In typical house construction, laying the foundation and framing the walls usually takes about 40% of the budget. This again is very similar to web development. At LevelTen we have a very talented group of developers who use Drupal to build some incredibly feature-rich websites very quickly. These are the builders of your website. They will build the content types and views (roughly like rooms) and make sure that all the modules are set up (roughly like plumbing and electrical).

It's at this point that a web shop that really understands Drupal is going to stand out. It takes a while to really learn how to build a Drupal website the right way. We've seen plenty done the wrong way. So finding someone who can do it right the first time is important. Would you hire someone who only has experience building brick buildings to build your wooden framed house? I sure hope not.

Interior Decorator/Graphic Designer

Once the building of your house is done, the Interior Decorator takes over. This is the person who picks out the colors, bricks, furniture, and finishing touches. So much of this process depends on the preferences of the person who will be living in the house that getting it right is a bit of an art. Again, we've got a couple of very talented designers here at LevelTen who can create amazing designs for websites. They will pick out the colors and pictures (kind of like furniture!) and make sure everything looks good together.


Painters/Themers

Of course, once the whole design is in place, you are going to need the painters and movers to actually get the house looking like the design. This is analogous to our themers. It actually takes quite a bit of work to get all the images, colors, and everything else set up so that a very dynamic site will always look good. This is one area where I'd say the web world is actually harder than the physical world, mostly because everything has to look good on hundreds of pages instead of just one room. Luckily we've got a great themer with an eye for detail and a complete understanding of all the different browser quirks.

Occupants/Content Authors

At this point the house and website are more or less done, but there is still one thing missing: the occupants! People live in houses, move around in them, and are constantly doing things. With websites, that is the role of the Content Authors. In order to have a really great website you need to constantly create and update your content. This is very much like the people who live and breathe in a house; you are going to need some people to live and breathe in your website as well. At LevelTen we typically don't do a lot of the content authoring, since that is so specific to the company that it turns out much better when done by someone internal. We do, however, do a lot of training on the newly built website and how best to write for it.

Security and Maintenance

There is one other group of people that is important for houses and websites but is often overlooked: the people who secure and maintain the house. Think of the security monitoring firm and the handyman. Websites need the same monitoring and maintenance. You wouldn't build a house, furnish it, and then not look at it again for three years, would you? Websites are the same way: you need to keep them secure and maintained. LevelTen offers a support contract that handles this, allowing our expert team to keep your website secure, up to date, and working.

So the next time you think about building a website, don't just look for some pretty designs; find a team that has all the skills necessary to take your project through the full process and create a successful website. I can honestly say that our team here at LevelTen is the best team I have ever seen for building websites, and it is an honor and a privilege to work with them.

Apr 01 2012

LevelTen Interactive announces today that despite many recent acquisitions and mass mergers in the Drupal web development space, LevelTen remains committed to being the last remaining independent Drupal shop and first choice for people looking for a quality Drupal site without being locked into a slow, expensive corporate behemoth.

First, Phase 2 Technology and the Treehouse Agency merged, taking two quality, agile shops and creating Initech PhaseHouse Cartel, with its new slogan "Open Source, Open Wallets". Dictator for Life Jeff Walpole was quoted as saying "Separately we were just too efficient. I'm hoping this move really helps to slow that whole development process down and add lots of red tape." Their talented engineers now spend most of their time creating HOOK_ALTER_TPS reports and backporting the Pirate module to Drupal 4.7.

A few days later, Acquia was nationalized by Elio Di Rupo, Prime Minister of Belgium. His office later released a statement saying "Drupal is the most important and valuable thing to come out of Belgium since the waffle, and now we want it back." Dries barely escaped repatriation to Brussels and instead has started a new company in Norway that plans to build a crowd-sourced history of the Scandinavian peoples, called "Open Norse". Angie Byron was recruited to build software solutions for General Mills' cereal division, and has changed her handle to "webchex". Co-founder Jay Batson was elated by the move to Brussels and was quickly picked up by the Belgian Cycling Team (Seniors Division).

Today a dozen or so top-shelf Drupal companies (and a couple of crappy ones) joined together to form Bluemarine Synergistics, a new unmovable object in the Drupal waterfall methodologies and paperwork generation space. New CEO Jedavael Robilton was heard to boast "Drupal was losing deals to IBM and Adobe because Drupal created beautiful web solutions too quickly and cheaply; now we can compete on an even playing field".

That leaves LevelTen Interactive, a quickly (but not too quickly!) growing Drupal development company in rural and pastoral downtown Dallas, TX, as the last remaining independent Drupal shop. We've resisted the corporatization of Drupal and remain true to our roots as a 5th-generation family-run web development company. CEO Tom McCracken's great-great-great-grandmother personally hand-coded Stephen F. Austin's Republic of Texas website (in Ruby on Transcontinental Rails), and we've been providing quality, independent website development ever since.

For more information on how to avoid getting crushed in the Drupal International Conglomerates' uncaring, mashing gears, please visit leveltendesign.com, and learn more about our own Drupal distribution, Open Small Family Locally Sourced Cooperative. 100% of the profits of this free distribution go to providing food and bicycles for Randy Fay.

LevelTen Interactive, April 1, 2012
