Mar 22 2013

In this episode, Addison Berry is joined by Kyle Hofmeyer, Dave Burns, Jerad Bitner, and Sean Lange to discuss the deceptively simple question "What is a Drupal Site Builder?" This is a term that is bandied about quite a bit, but we wanted to try to zero in on what exactly a site builder does. We end up talking about several different roles that are often used around site-building, like themer, front-end developer, and developer, and how they relate to each other. In the process, Dave and Sean end up creating some new words too. What is a globulator?

Podcast notes

Ask away!

If you want to suggest your own ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Twitter
Facebook
Contact us page

Release Date: March 22, 2013 - 10:21am

Length: 39:31 minutes (22.86 MB)

Format: mono 44kHz 80Kbps (vbr)

Mar 13 2013

I'm working on another Drupal site and as usual, it is going to make heavy use of the entityreference field to link entities together. Many of these fields are multiple value fields, and we need an easy way to allow editors to select multiple values from some long lists of potential values.

We have a couple of options out of the box. We can display the options as checkboxes, an autocomplete field, or a drop-down select list. These are long lists, too long for checkboxes. And a long list in a multiple value select list is ugly and hard to use. That leaves the autocomplete, which is fine if you know what values to expect, but it's not very good for discovering or sorting through the available options. I'm looking for something that handles long multiple value lists better: it should make it easy to see both what's been selected and what's still available, and it should be easy to use.

I finally pulled down a collection of possible Drupal 7 contributed modules to see what the options are. Here's a line-up of some of them, with screenshots showing what each looks like and a little information about how to get them working and what they do.

For a point of reference, here's what the unvarnished Drupal multiple value selector looks like.

Multiple Selects

One option is Multiple Selects. This is a fairly simple rework that turns a single multiple value select list into a series of single value select lists, with an 'Add more' button. It's very easy to set up: just enable the module and choose 'Multiple Selects List' as the field's widget.

Chosen

Another interesting option is Chosen. Chosen is based on a jQuery library that pulls together some of the benefits of both an autocomplete and a drop-down selector. The selected values are listed at the top of the selector. The bottom of the selector has a drop-down list of the available options. In between is an autocomplete box where you can type values that will be used to narrow the list.

To install this module, you have to enable the Libraries module, grab the jQuery Chosen library, and drop it in sites/all/libraries/chosen. Then go to admin/config/user-interface/chosen and indicate when the Chosen selector should be applied. It can be set up to apply only to long lists of options and leave short lists alone, and there is a jQuery selector that can be used to identify which elements it should be applied to. Set this up carefully, because it will affect every select list on the site, not just those for a specific field.

Dynamic Multiselect

An additional possibility is the Dynamic Multiselect Widget. To use it you need to enable Dynamic Select and Entity Reference Dynamic Select Widget. Then you need to create a view using the 'Dynamic Select' display type. The view should be a list of the values you want to see in the select list. Finally, change the Entityreference field to use the Dynamic Select widget, and set up the widget to use the view that you just created.

The result looks like the following, where every selected item shows up with an individual selector. At the bottom is an 'Add more' button that you can use to select another option.

It took me a while to figure out what the 'Filter' options in the widget were for, but I finally worked it out. Each drop-down select list is a view, and the 'Filter' option to its right is a custom exposed filter for that view that narrows the list to its left. I added a new filter to the view for the node title using the 'contains' operator, edited the field widget settings to use the 'Title' filter in the widget, and after that I could type a full or partial title into the filter textfield to limit the adjacent list to just the matching values.

Entityreference Views Widget

Another option is Entityreference Views Widget. This widget uses a view to display the available options, making it possible to display lots more information about each option, like title, images, description, etc.

To install it, create a view of the items that should be displayed in the selector, using a display of the type 'Entityreference Views Widget'. Change the field's widget to the 'View' widget and select the view you just created. Then check the 'Display fields' tab for the content type being displayed in the widget and adjust the 'Entity Reference View Widget' view mode to identify the fields that you want to see in the selector.

This option makes it possible to see lots of information besides the title of the related items, so you could display images or descriptions to help indicate which is which. The downside of this widget is that it takes up a lot of space on the node form. It would be nice if it opened in a modal window and displayed just the selected items in the form to take up less space.

Improved Multi Select

The Improved Multi Select module is another option. To install it, enable the module, then go to admin/config/user-interface/improved_multi_select and choose the options. You can apply this to all multiple value selectors on the site or only to specific ones, and you can indicate which paths the effect should apply on. At a minimum you will want to add 'select[multiple]' as the replacement. The result looks like the following:

jQuery UI Multiselect

Finally there is jQuery UI Multiselect. This is another jQuery effect. It requires the jQuery Update module. Once enabled, multiple select form elements are transformed to look as follows:

The settings are controlled from admin/config/user-interface/jquery_ui_multiselect_widget. It will work out of the box with the default settings, but they can be adjusted.

Which is Best?

So that's my list. There are probably other alternatives but these are the ones I knew about or could find that have Drupal 7 releases. I don't want to even attempt to evaluate which of them is the 'best', because that depends on how you want to use it.

But I think it's helpful to see some of the available alternatives all in one place to help figure out which of them is the 'best' solution for your specific site.

Mar 11 2013

Drupal's content modeling toolbox makes it easy to add simple values to any content type -- numbers, text fields, dates, and so on -- but odd gaps pop up occasionally. Numeric ranges, for example, can be awkward: while you could add separate "minimum" and "maximum" fields to a content type, it's frustrating to split a conceptually related pair of values into distinct fields. Thankfully, the Range module can fill the gap.

Screenshot of administration screen

The Range field gives site builders and administrators a single field type with two values: minimum and maximum. Variants for each basic numeric type are available (float range, integer range, and so on), and the module provides enough formatting options to capture most of the basic use cases for the field. Views integration is also baked in, making it possible to sort and filter by the range values as well.
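
In code, that pair comes back as a single field item rather than two separate fields. Here's a minimal sketch of reading it in a node preprocess function; the 'from'/'to' item keys and the field_price_range field name are assumptions for illustration, so check the module's field schema before adapting it.

<?php
// Sketch only: read a range field's minimum and maximum values.
// The 'from'/'to' keys and the field name are assumptions.
$items = field_get_items('node', $node, 'field_price_range');
if (!empty($items[0])) {
  $variables['price_label'] = t('@min to @max', array(
    '@min' => $items[0]['from'],
    '@max' => $items[0]['to'],
  ));
}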

Screenshot of resulting change to site

A handful of existing feature requests for the module suggest potential improvements: the Views filter would be even more useful if it could pull up nodes whose ranges include the incoming filter value, for example. While it's possible to simulate that using a pair of filters working together, it's definitely not as convenient. That request aside, Range is a clean and simple field module that fills an awkward gap. If your content types need ranges, it's just what you need!

Mar 04 2013

A freshly-installed copy of Drupal core includes the handy ability to generate short "teaser" text from a long node automatically. You can choose the character count that will be used, but unfortunately that's all: if you need to break on word boundaries or want to make it clear that the text is a truncated version of the full article, you're out of luck. If you need more flexibility, the Smart Trim module can help.

Screenshot of administration screen

Smart Trim adds a new set of formatting options to Drupal's Text, Long Text, and Long Text With Summary field types. Like the existing Summary formatter, it allows you to choose the truncation length -- but you can also truncate at a specific number of words, ensuring that the break won't fall awkwardly between words. An ellipsis can be inserted automatically at the end of the truncated text, and HTML can be stripped out to ensure that unclosed tags don't sneak into teasers. It's even possible to automatically add a "Read more…" link that points to the full view of the node itself. While that one's less useful on node teasers, it can be handy when you're using the truncated summary text in a highly streamlined view; it eliminates the need to add a separate field to contain a link to the node.
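
For anyone who has written the equivalent by hand in a template, the word-boundary logic boils down to something like the following sketch (illustrative only, not Smart Trim's internals; the function name is made up):

<?php
// Illustrative only: trim text to roughly $word_limit words and append an
// ellipsis -- the kind of logic Smart Trim packages up as a field formatter.
function example_trim_words($text, $word_limit = 50) {
  $plain = strip_tags($text);
  $words = preg_split('/\s+/', $plain, -1, PREG_SPLIT_NO_EMPTY);
  if (count($words) <= $word_limit) {
    return $plain;
  }
  return implode(' ', array_slice($words, 0, $word_limit)) . '…';
}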

Screenshot of resulting change to site

Most of the options in Smart Trim are available in older modules for Drupal 5 and 6, or can be accomplished with a few snippets of custom code tucked in template files. This module collects all of them in one simple field formatter, however, and ensures that busy admins only need to grab one tool to make their teasers work just so.

Feb 18 2013

It's a simple problem, but a serious one. You've put your content editors in front of Drupal for the first time, and they can understand the node form without any problems. They understand taxonomy terms, grok menus and node references… but they get nervous when it's time to save their work. "Will... will this be published as soon as I click 'save?'" Normally, there's no good way to make the distinction between saving and publishing a piece of content explicit. Site builders can set a content type to be unpublished by default, then give editors the broad "administer nodes" permission, but that's a clumsy solution that forces editors to dig for what should be a simple action: publishing or unpublishing a post. That's where the Publish Button module comes in.

Screenshot of settings screen

With the module installed, every content type gets a simple new checkbox: "Use publish/unpublish button." Once it's selected, users with appropriate permissions get a new Publish button on the node edit form. It unambiguously flips the node's "Published" flag and saves it. If the node is already published, the button reads (predictably) "Unpublish." Site builders can set up fine-grained permissions around who can and can't publish and unpublish content. Setting posts to be unpublished by default, then enabling this module, can give the Drupal content creation workflow an almost WordPress-esque simplicity.

Screenshot of the resulting change to the node form

Because Publish Button is so simple, there's not much to say about how it works. You set up the permissions, activate it for the appropriate content types, and you're ready to go. It won't solve all of your content workflow problems -- it makes no provisions for scheduled publishing or detailed review of revisions, for example -- but it's a clean, quick solution to the uncertainty that comes from Drupal's one-size-fits-all "Save" button.

Feb 11 2013

Comparing Drupal themes is tough: the screenshots they provide are often based on heavily tweaked sites with plenty of slider blocks, tweaked media attachments, and other just-so content. Figuring out the "basics" -- how a given theme styles core HTML elements and recurring Drupal interface patterns -- can be surprisingly difficult. Fortunately, the Style Guide module can help.

Screenshot of styleguide overview

The module was written by a group of experienced Drupal developers, site builders, and themers -- its sole purpose is to provide a single page that contains easily viewable examples of HTML and Drupal design elements. Form fields? Check. Password strength checkers? Yes! Themed Drupal navigation menus? Breadcrumbs? Tabbed input groups? Large chunks of text with inline images? Yes, yes, yes, and also yes. Just enable a theme, browse to the /admin/appearance/styleguide page on your site, and you can quickly see how it handles all of the basics. If you have multiple themes enabled (for example, a primary theme and a backend administrative theme), the module provides a tab for each one. Clicking one quickly switches the active theme, and thus the elements you're previewing.

Screenshot of styleguide details

Obviously, a module like this can't show you everything that a theme is capable of. It does provide an easy way to see how it handles the essentials -- typography, form elements, navigational menus, and so on. If you're building your own theme, Style Guide can also serve as a handy checklist of elements to account for.

Feb 07 2013

Lullabot Re-Launches a Responsive GRAMMY.com for Music’s Biggest Night

Lullabot is excited to announce its fourth annual redesign and relaunch of GRAMMY.com! Just in time for the 55th Grammy Awards this Sunday, February 10th, this year's site features a complete redesign, as well as an upgrade to Drupal 7 leveraging Panels and Views 3.x, some cool, fullscreen, swipeable photo galleries and other mobile-web bells and whistles. The site is also fully responsive, allowing the 40 million+ expected viewers to stream, share, and interact across any web or mobile device. Special congratulations to Acquia for hosting GRAMMY.com for the first time, as well as to our good friends over at Ooyala for once again delivering the site’s archive video content, and to AEG for delivering the live stream video via YouTube.

Simultaneously tune in to every aspect of music’s biggest night so you won’t miss a thing: awards news, video, pictures, artist info, Liveblog posts, thousands of tweets-per-minute and behind-the-scenes action.

The GRAMMY Live stream starts tomorrow, Friday February 8th at 5pm ET / 2pm PT, so you can spend all weekend with the website as you gear up for the CBS telecast on Sunday night. Don't miss any of the GRAMMY action this weekend!

We'll be watching and tweeting along from @lullabot. Follow us on Twitter and let us know what you think.

Feb 04 2013

It's a question we all ask ourselves: What would I do if my site or server was compromised? Security professionals have loads of checklists to follow, and experienced server administrators drill for those moments. As we saw when Twitter.com was recently compromised by hackers, "Reset everyone's passwords, right away!" is almost always one of the important steps. If you run a Drupal site, that particular step can be frustrating. Resetting user passwords one by one is incredibly time consuming, and there's no way to do it for everyone in one fell swoop. At least, there wasn't until the release of the Mass Password Reset module…

Screenshot of administration screen

This recently-released module gives administrators a simple, straightforward admin page where they can reset every user's password with a single click. Notification emails can optionally be sent out to each user, just as if they'd requested the password reset themselves. (If you're using the module as part of the response to an actual security incident, it's probably a good idea to modify the standard password reset email before you click the "reset" button -- explaining why passwords have been reset is important, and unfortunately the module doesn't let you override the message right from its bulk reset screen.) You can also choose to reset the root administrator's password (User 1 on the Drupal site), though the option is disabled by default.

There are a few situations in which you'd want to issue a mass password reset in calmer times. Just before launch, for example, you might want to ensure that a large batch of users migrated from another system all select new, secure passwords. For the most part, though, Mass Password Reset is a good tool to keep in your back pocket for a time when you need it. Hopefully you won't, but it's great to have when you do.

Jan 29 2013

Working with Lightbox2 and Colorbox

In this series, Lightboxes and Drupal 7, Kyle Hofmeyer walks you through working with two popular lightbox modules in Drupal 7: Lightbox2 and Colorbox. The first video in the series is free, and explains what a lightbox is, and some things to consider when selecting which module to use.

Jan 28 2013

It's a word that can strike fear into the heart of the bravest site builder: Breadcrumbs. Manage them well, and you'll give visitors a helpful visual indicator of where they're at in your site. Miss a detail, and the weird inconsistencies will be more confusing than no breadcrumbs at all. The challenges stem from Drupal's "flat hierarchy" -- by default, almost all pages (including every node you create) live just beneath the home page itself in an undifferentiated pool of content. All of the visual cues it sends to visitors (breadcrumb trails, highlighted parent items in the navigation menus, and so on) start with that assumption until you override them. That's where the Menu Position module helps out. It lets you set up simple rules that tell Drupal where each node type should go in the site's hierarchy, then handles all of the frustrating details automatically.

Screenshot of administration screen

The module's operation is simple and straightforward. Site administrators can set up simple rules describing certain pools of content -- nodes of type 'Blog,' articles tagged with 'Star Wars,' and so on -- then assign them to a particular parent menu item. From that point on, the rule will ensure that matching nodes get the proper breadcrumb trail, highlighted menu trails, and so on. Multiple rules can be active at once, positioning different pools of content in their proper home. It's a much more efficient approach than positioning each node underneath a parent item manually, and there's no performance slowdown if your site sports thousands (or hundreds of thousands) of nodes.

Screenshot of resulting change to site

Menu Position was created by John Albin, the author of the popular Menu Block module. He's well-versed in the frustrations that come with wrangling menu items and breadcrumbs, and the module is polished and straightforward. The only downside is that items must be children of an actual Drupal menu item. If you'd like article nodes to appear as if they're the children of a complex contextual-filter-driven view, for example, additional modules like Path Breadcrumbs might be necessary. For the vast majority of sites, though, Menu Position module does the job smashingly, and with a minimum of hassle.

Jan 21 2013

In honor of Module Monday's post-holiday return, we're taking a look at a problem that plagues many sites: dead links. If you maintain content that contains links to other sites, it's inevitable that some of them will ultimately go bad. Domains expire, sites go down, articles are unpublished, blogs migrate to a new CMS and change their URL patterns... and eventually you're left with a dusting of broken URLs in your otherwise pristine content. That's where LinkChecker comes in. It's a module for Drupal 6 and 7 that scans your content for busted links, tells you what nodes need fixing, and -- optionally -- tidies up the ones it can fix automatically.

Screenshot of administration screen

LinkChecker runs at cron time on your Drupal site, churning its way through a bite-size number of nodes each time and scanning them for URLs. It pings those URLs, makes sure a working web page is still there, and moves on. If not, it logs the specific HTTP error for later reference and moves on. It's handy, but the devil is always in the details -- and LinkChecker is designed to handle all of them with aplomb. Do you need to exclude certain content types to ensure they aren't scanned? No problem. Need to make sure that dummy URLs like "example.com" don't get checked and generate false positives? It handles that, too. Need to hunt for URLs contained in dedicated CCK or FieldAPI Link fields, in addition to text fields and node bodies? No problem. Want to check image links that reside on remote servers, or check URLs that are generated by Drupal input filters even though they don't appear in the "raw" text of the node? LinkChecker allows a site administrator to toggle all of those options and more.
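
The per-URL check itself is conceptually simple; a stripped-down sketch using Drupal 7's drupal_http_request() looks roughly like this (illustrative only, not LinkChecker's actual code, and the function name is made up):

<?php
// Illustrative only: request a URL and log anything that fails or returns
// an error status, roughly what LinkChecker does for each link at cron time.
function example_check_url($url) {
  $response = drupal_http_request($url, array('method' => 'HEAD', 'timeout' => 10));
  if (!empty($response->error) || $response->code >= 400) {
    watchdog('example_linkcheck', 'Broken link @url returned @code.',
      array('@url' => $url, '@code' => $response->code), WATCHDOG_WARNING);
    return FALSE;
  }
  return TRUE;
}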

The module can correct some kinds of errors automatically (301 redirects, for example), but it's up to the site's administrator to check the report it generates for news about broken links. There, each node with busted links can be reviewed and edited.

Screenshot of resulting change to site

LinkChecker is a tremendously useful tool, and its smorgasbord of configuration options means that it can deal with lots of oddball edge cases. One missing option that would still be welcome? An easy way to export the "busted links" report to a text file for review. For extremely large sites, dedicated third-party web crawlers with link checking functions may be a more robust solution, but for Drupal admins who need a hand keeping their sites tidy, it's a lifesaver.

Dec 31 2012

Make 2013 safer with a backup plan for your site

Ah, New Year's Eve. It's the time for parties, reminiscing, and the old tradition of New Year's resolutions. We promise ourselves we'll exercise, read that important book, donate to charity, or -- depending on the number of web sites we're responsible for -- vow that this is the year to set up a backup plan. Few things, after all, are more terrifying than the realization that a server hiccup has wiped out a web site, or that a hasty change deployed to the live site has nuked important content. Fortunately, there's a module that can help. Backup and Migrate offers site builders a host of options for manually and automatically backing up their sites' databases -- and integrates with third-party backup services, to boot!

Screenshot of Backup and Migrate settings

Backup and Migrate offers a number of options for preserving your Drupal site's database, but all of them revolve around three important choices. You can choose to back up just a select group of database tables, or the whole kit and caboodle -- useful for ditching the unwieldy cache and watchdog tables that Drupal can easily recreate. You can choose the destination for your backup -- drop it into a private directory on your server, download it directly to your computer if you're performing a manual backup, or send it to several supported third-party servers. And finally, you can run backups manually or schedule them to occur automatically.

In its simplest form, you can use the module to manually pull down a snapshot of a site's database for safe keeping, or to do local development with the latest and greatest production data. For real safety, though, you can tie it to services like Amazon S3 storage, or the new NodeSquirrel offsite backup service. Set up scheduled daily or weekly backups, tell it how long to keep existing backups around, and rest assured that regular snapshots of your site's critical data will be tucked away for safe keeping when you need them.

When disaster strikes, you can use the Backup and Migrate module to upload one of those database backups, and restore your site to the state it was in when the backup was made.

Screenshot of the Restore Backup screen

It's important to remember that Backup and Migrate won't solve all of your problems. It requires a third-party addon module (Backup and Migrate Files) to archive and restore the site's important file uploads directory, for example. In addition, the easy one-click backup and restore process can tempt developers to forgo safe deployment strategies for new features. Just because it's possible to download a database snapshot, do configuration work on your local computer, then re-upload the snapshot to the live server, doesn't mean it's a good idea.

That said, Backup and Migrate is an excellent tool that's proven its worth on countless sites. Its clean integration with third-party file storage services also means that it's a great transitional path towards a full-fledged backup strategy for business-critical data. If you aren't using it, check it out -- and enjoy the peace of mind that comes with keeping your site's data safe.

Dec 17 2012

Among Drupal 7's new features was a freshly rewritten Taxonomy module. Starting with that version, Drupal's venerable tool for tagging and categorizing content became a FieldAPI Field, just like Image fields, Node Reference fields, and so on. That's meant much greater flexibility for editing and displaying taxonomy terms on new Drupal sites, but it's also caused a few hiccups. In particular, when taxonomy terms are deleted (say, by an editor determined to keep things tidy), the references to them on existing nodes don't go away. The Taxonomy Orphanage module is designed to fix that specific issue.

Screenshot of administration screen

Using the module is simple -- once it's installed and enabled, the Taxonomy administration page gets an extra tab. There, you can tell the module to hunt down and remove "dangling" references to nonexistent taxonomy terms. You can also turn on automatic scans for dangling taxonomy terms: whenever cron runs for your site, it will march through a configurable number of nodes and remove pointers to nonexistent terms as well. The use case is simple -- resolving a simple issue with Drupal core -- but Taxonomy Orphanage helps keep your site's content neat and tidy with a minimum of fuss.
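
To see what "dangling" means in concrete terms, here's the rough shape of the query involved, run against Drupal 7 core's own tables (a sketch for illustration, not the module's actual code):

<?php
// Sketch only: find term IDs still referenced in taxonomy_index but missing
// from taxonomy_term_data -- the "dangling" references left behind when
// terms are deleted.
$orphan_tids = db_query("
  SELECT DISTINCT ti.tid
  FROM {taxonomy_index} ti
  LEFT JOIN {taxonomy_term_data} td ON td.tid = ti.tid
  WHERE td.tid IS NULL
")->fetchCol();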

Dec 11 2012

Inline WYSIWYG editing can improve life for some content managers, but brings new problems for content-rich sites.

For several years, core Drupal contributors have been working on ways to improve the user experience for content editors. Since May of 2012, project lead Dries Buytaert and his company Acquia have been funding the Spark Project, an ambitious set of improvements to Drupal's core editing experience. One of the most eye-popping features they've demonstrated is Inline WYSIWYG editing, the ability to click on a page element, edit it in place, and persist the changes without visiting a separate page or opening a popup window.

Chances are good that inline editing functionality could make it into Drupal 8 -- specifically, an implementation that's powered by Create.js and the closely associated Aloha WYSIWYG editor. Fans of decoupled Drupal code definitely have something to cheer for! The work to modernize Drupal 8's codebase is making it much easier to reuse the great front-end and back-end work from open source projects like Symfony and Create.js.

With that good news, though, there's a potential raincloud on the horizon. Inline editing, as useful as it is, could easily be the next WYSIWYG markup: a tool that simplifies certain tasks but sabotages others in unexpected ways.

Direct manipulation: A leaky abstraction

Over a decade ago, software developer Joel Spolsky wrote a critically important blog post about user experience: The Law of Leaky Abstractions. He explained that many software APIs are convenient lies about more complex processes they hide to simplify day-to-day work. Often these abstractions work, but just as often the underlying complexity "leaks through."

One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to.

The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying "learn how to do it manually first, then use the wizzy tool to save time." Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don't save us time learning.

And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.

Those words were written about APIs and software development tools, but they're familiar to anyone who's tried to build a humane interface for a modern content management system.

At one extreme, a CMS can be treated as a tool for editing a relational database. The user interface exposed by a CMS in that sense is just a way of giving users access to every table and column that must be inserted or updated. Completeness is the name of the game, because users are directly manipulating the underlying storage model. Any data they don't see is probably unnecessary and should be exorcised from the data model. For those of us who come from a software development background this is a familiar approach, and it's dominated the UX decisions of many open source projects and business-focused proprietary systems.

At the other extreme, a CMS can be treated as an artifact of visual web design. We begin with a vision of the end product: a photography portfolio, an online magazine, a school's class schedule. We decide how visitors should interact with it, we extrapolate the kinds of tasks administrators will need to perform to keep it updated, and the CMS is used to fill those dynamic gaps. The underlying structure of its data is abstracted away as WYSIWYG editors, drag-and-drop editing, and other tools that allow users to feel they're directly manipulating the final product rather than markup codes.

The editing interfaces we offer to users send them important messages, whether we intend it or not. They are affordances, like knobs on doors and buttons on telephones. If the primary editing interface we present is also the visual design seen by site visitors, we are saying: "This page is what you manage! The things you see on it are the true form of your content." On certain sites, that message is true. But for many, it's a lie: what you're seeing is simply one view of a more complex content element, tailored for a particular page or channel.

In those situations, Inline WYSIWYG editing is one of Joel Spolsky's leaky abstractions. It simplifies a user's initial experience exploring the system, but breaks down when they push forward -- causing even more confusion and frustration than the initial learning would have.

A brief interlude, with semantics

With that provocative statement out of the way, I'll take a step back and define some terminology. Because Drupal's administrative interface, the improvements added by the Spark project, and the nature of web UX are all pretty complicated, there's a lot of potential for confusion when a term like "Inline Editing" gets thrown around. There are four kinds of editing behaviors that we'll touch on, and clarifying how they differ and overlap will (hopefully) prevent some confusion.

Contextual editing

When a content editor is on a particular portion of the web site or is viewing a particular kind of content, they should have access to options and tools that are contextually relevant. If an editor visits an article on their web site, give them access to an "Edit" link for that article. If it's unpublished, they should see a "Publish" link, and so on. Contextual editing also means hiding options from users when they're inappropriate. If you don't have permission to modify an article, you shouldn't see the "Edit" link.

Well-designed contextual editing is a great thing! It puts the right tools in the users' hands when they're needed, and helps prevent "option overload."

API-based editing

Rather than rendering an HTML form, API-based editing means bundling up a copy of the content object itself -- usually in a format like XML or JSON -- and sending it to another program for editing. That "client" could be Javascript code running on a user's browser, a native mobile app, or another CMS entirely. The client presents an editing interface to the user, makes changes to the object, and sends it back to the CMS when they're done.

API-based editing is cool, too! It's not a specific user-visible widget or workflow. In fact, it could be used to deliver the very same HTML forms users are used to -- but it provides a foundation for many other kinds of novel editing interfaces.
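
In Drupal 7 code, the simplest possible expression of that idea is a page callback that hands a loaded node to the client as JSON instead of rendering a form (a bare-bones sketch, not the Spark project's implementation; the callback name is hypothetical):

<?php
// Sketch only: expose a node as JSON so a client-side editor can work on it.
// A real API would check access, strip internal properties, and accept the
// edited object back on a companion path.
function example_node_json($node) {
  drupal_json_output($node);
  drupal_exit();
}

Wired up through hook_menu() with node autoloading, that's enough for a client to fetch the raw content object and build its own editing interface on top of it.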

Inline editing

Inline editing takes contextual editing a step farther. When you see data on the page, you don't just have a link to edit it at your beck and call: you can edit it right there without going to another page or popup window. One common scenario is tabular data: click in a cell, edit the cell. Click outside of the cell, and your changes are saved. A more complex example might include clicking on the headline of an article and editing it while viewing it on the front page, or clicking on the body text and adding a new paragraph then and there. The emphasis here is on eliminating context switches and unnecessary steps for the editor.

Inline editing can dramatically simplify life for users by replacing cluttered forms, fields, and buttons with direct content manipulation. However, when direct manipulation is the primary means of editing content, it can easily hide critical information from those same users. We'll get to that later.

WYSIWYG editing

"What You See Is What You Get" editing is all about allowing users to manipulate things as they will appear in the finished product rather than using special codes, weird markup, or separate preview modes. Desktop publishing flourished on 1980s Macintosh computers because they let would-be Hearsts and Pulitzers lay out pages and set type visually. WYSIWYG HTML editors have been popular with web content editors for similar reasons: finessing the appearance of content via clicks and drags is easier than encoding semantic instructions for web browsers using raw HTML.

WYSIWYG editing tools can help reduce markup errors and streamline the work of content managers who don't know HTML. Without careful restrictions, though, it can easily sabotage attempts to reuse content effectively. If a restaurant's menu is posted as a giant HTML table in the "Menu" page's Body field, for example, there's no way to highlight the latest dishes or list gluten-free recipes. Similarly, if the key photo for a news story is dropped into that Body field with a WYSIWYG editor, reformatting it for display on a mobile phone is all but impossible.

Everything in-between

Often, these four different approaches overlap. Inline editing can be thought of as a particularly advanced form of contextual editing, and it's often built on top of API-based editing. In addition, when inline editing is enabled on the visitor-visible "presentation" layout of a web site, it functions as a sort of WYSIWYG editing for the entire page -- not just a particular article or field.

That combined approach -- using inline editing on a site's front end to edit content as it will appear to visitors -- is what I'll be focusing on. It's "Inline WYSIWYG."

Inline WYSIWYG! Can anything good come from there?

Of course! Over the past year or so, anything with the word 'WYSIWYG' in it has taken a bit of a beating in web circles, but none of the approaches to content editing listed above are inherently good or bad. Like all tools, there are situations they're well-suited for and others that make an awkward fit.

As I’m writing this, I see not just a WYSIWYG editor, I see the page I’m going to publish, which looks just like the version you’re reading. In fact, it is the version you’re reading. There’s no layer of abstraction. This is a simple (and old) concept… and it makes a big difference. Having to go back and forth between your creation tool and your creation is like sculpting by talking.

That's an incredibly compelling argument for the power of WYSIWYG and inline editing. I've seen it in action on Medium, and it really does feel different than the click-edit-save, click-edit-save cycle that most web based tools require. However, and this is a big however, it's also critical to remember the key restrictions Ev and his team have put in place to make that simplicity work.

One of the reasons it's possible to have this really WYSIWYG experience is because we've stripped out a lot of the power that other online editors give you. Here are things you can't do: change fonts, font color, font size. You can't insert tables or use strikethrough or even underline. Here's what you can do: bold, italics, subheads (two levels), blockquote, and links.

In addition, the underlying structure of an article on Medium is very simple. Each post can have a title, a single optional header image, and the body text of the article itself. No meta tags, no related links, no attached files or summary text for the front page. What you see is what you get here, too: when you are viewing an article, you are viewing the whole article and editing it inline on the page leaves nothing to the imagination.

This kind of relentless focus -- a single streamlined way of presenting each piece of content, a mercilessly stripped down list of formatting options, and a vigilant focus on the written word -- ensure that there really is no gap between what users are manipulating via inline editing and what everyone else sees.

That's an amazing, awesome thing, and other kinds of focused web sites can benefit from it, too. Many small-business brochureware sites, for example, have straightforward, easily-modeled content. Many of those sites' users would kill for the simplicity of a "click here to enter text" approach to content entry.

The other side(s) of the coin

Even the best tool, however, can't be right for every job. The inline WYSIWYG approach that's used by Create.js and the Spark Project can pose serious problems. The Decoupled CMS Project in particular proposes that Inline WYSIWYG could be a useful general editing paradigm for content-rich web sites, but that requires looking at the weaknesses clearly and honestly.

Invisible data is inaccessible

Inline editing, by definition, is tied to the page's visible design. Various cues can separate editable and non-editable portions of the page, but there's no place for content elements that aren't part of the visible page at all.

Metadata tags, relationships between content that drive other page elements, fields intended for display in other views of the content, and flags that control a content element's appearance but aren't inherently visible, are all awkward bystanders. This is particularly important in multichannel publishing environments: often, multiple versions of key fields are created for use in different device and display contexts.

It encourages visual hacks

Well-structured content models need the right data in the right fields. We've learned the hard way that WYSIWYG markup editors inevitably lead to ugly HTML hacks. Users naturally assume that "it looks right" means "everything is working correctly." Similarly, inline WYSIWYG emphasizes each field's visual appearance and placement on the page over its semantic meaning. That sets up another cycle of "I put it there because it looked right" editing snafus.

The problem is even more serious for Inline WYSIWYG. Markup editors can be configured to use a restricted set of tags, but no code is smart enough to know that a user misused an important text field to achieve a desired visual result.

It privileges the editor's device

In her book, author Karen McGrane explains the dangers of the web-based "preview" button.

…There's no way to show [desktop] content creators how their content might appear on a mobile website or in an app. The existence of the preview button reinforces the notion that the desktop website is the "real" website and [anything else] is an afterthought.

Inline WYSIWYG amplifies this problem, turning the entire editing experience into an extended preview of what the content will look like on the editor's current browser, platform, screen size, and user context. The danger lies in the hidden ripple effects for other devices, views, publishing channels, and even other pages where the same content is reused.

It complicates the creation of new items

Create.js and the Spark Project also allow editors to create new content items in place on any listing page. This is a valuable feature, especially for sites dominated by simple chronological lists or explicit content hierarchies.

On sites with more complex rule-based listing pages, however, the picture becomes fuzzier. If an editor inserts a new piece of content on another author's page, does the content become owned by that author? On listing pages governed by complex selection rules, will the newly-inserted item receive default values sufficient to ensure that it will appear on the page? If the editor inserts new content on a listing page, but alters its fields such that the content no longer matches the listing page's selection rules, does the content vanish and re-appear in a different, unknown part of the web site?

In addition, multi-step workflows accompany the creation of content on many sites. Translating a single piece of content into several legally mandated languages before publication is necessary in some countries, even for small web sites. Approval and scheduling workflows pose similar problems, moving documents through important but invisible states before they can be displayed accurately on the site.

Complexity quickly reasserts itself

Many of the problems described above can be worked around by adding additional visual cues, exposing normally hidden fields in floating toolbars, and providing other normally hidden information when editors have activated Inline WYSIWYG. Additional secondary editing interfaces can also be provided for "full access" to a content item's full list of fields, metadata, and workflow states.

However, the addition of these extra widgets, toolbars, hover-tips, popups, and so on compromises the radical simplicity that justified Inline Editing in the first place. On many sites, a sufficiently functional Inline WYSIWYG interface -- one that captures the important state, metadata, and relational information for a piece of content -- will be no simpler or faster than well-designed, task-focused modal editing forms. Members of the Plone team discovered this was often true after adding Inline WYSIWYG to their CMS. After several versions maintaining the feature, they removed it from the core CMS product.

To reiterate Ev Williams' vision for Medium,

There’s no layer of abstraction. This is a simple (and old) concept… and it makes a big difference. Having to go back and forth between your creation tool and your creation is like sculpting by talking.

In situations where Inline WYSIWYG can't live up to that ideal, it paradoxically results in even more complexity for users.

In conclusion, Inline WYSIWYG is a land of contrasts

So, where does this leave us? Despite my complaints, both Inline and WYSIWYG editing are valuable tools for building an effective editorial experience. The problem of leaky abstractions isn't new to Drupal: Views, for example, is a click-and-drag listing page builder, but requires that its users know SQL to understand what's happening when problems arise. As we consider how to apply the tools at our disposal, we have to examine their pros and cons honestly rather than fixating on one or the other.

The combined Inline WYSIWYG approach can radically improve sites that pair an extremely focused presentation with simple content. But despite the impressive splash it makes during demos, Inline WYSIWYG as a primary editing interface is difficult to scale beyond brochureware and blogs. On sites with more complex content and publishing workflows, those training wheels will have to come off eventually.

Is Inline WYSIWYG right for Drupal core? While it can be very useful, it's not a silver bullet for Drupal's UX werewolves. Worse, it can actively confuse users and mask critical information on the kinds of data-rich sites Drupal is best suited for. Enhanced content modeling tools and the much-loved Views module are both built into Drupal 8; even new developers and builders will easily assemble sites whose complexity confounds Inline WYSIWYG.

At the same time, the underlying architectural changes that make the approach possible are incredibly valuable. If Drupal 8 ships with client-side editing APIs as complete as its existing server-side edit forms, the foundation will be laid for many other innovative editing tools. Even if complex sites can't benefit from Inline WYSIWYG, they'll be able to implement their own appropriate, tailored interfaces with far less work because of it.

Like WYSIWYG markup editors, design-integrated Inline WYSIWYG editing is an idea that's here to stay. Deciding when to use it appropriately, and learning how to sidestep its pitfalls, will be an important task for site builders and UX professionals in the coming years. Our essential task is still the same: giving people tools to accomplish the tasks that matter to them!

Dec 10 2012

It's a simple problem, but a tricky one: How can you ensure that special words and phrases, like your company's name or certain trademarks, are always linked to an appropriate web site when they're used in the text of an article? The easy answer is the Word Link module: it lets you set up a custom glossary of terms that should be turned into links whenever they appear in text.

Screenshot of administration screen

Configuration is straightforward: once the module is turned on, administrators can choose what content types and what fields should receive the link treatment. Then word lists can be set up, each linking to a custom URL inside your site or on another server. Case-sensitive matching can be turned on and off for each word, and special "no-link" pages can be set up where the module won't apply its filtering. The result? Words are turned into links, wrapped in a special CSS tag that makes them easy to style, and you don't have to train your content administrators that "Our Company, Inc." should always link to the main corporate site.
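
Conceptually, the filtering step looks something like this toy sketch (not the module's implementation; the glossary array and function name are made up), which is also why the case-sensitivity and link-limit options matter:

<?php
// Illustrative only: wrap a known phrase in a link with a stylable class,
// roughly what Word Link does as it filters field output.
function example_link_words($text) {
  $glossary = array('Our Company, Inc.' => 'http://example.com/');
  foreach ($glossary as $phrase => $url) {
    $link = l($phrase, $url, array('attributes' => array('class' => array('word-link'))));
    // Replace only the first occurrence, similar to the module's limit option.
    $text = preg_replace('/' . preg_quote($phrase, '/') . '/', $link, $text, 1);
  }
  return $text;
}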

Screenshot of resulting change to site

The design of Word Link -- each linked word and phrase is its own custom record with unique settings -- means that it's best suited to a small but important list of names, key phrases, and so on rather than a massive glossary. The module also features a handy "only link the first n instances of the word" option, but in the current release that feature isn't working correctly. That said, it's a clean and self-contained solution to the problem that won't interfere with the rest of your editing tools!

Nov 26 2012

Media-heavy Drupal sites feel the pain of bandwidth costs in a big way. Simply dropping a few large rotator images on your site's home page, for example, can bloat load times and inflate costs for network and CDN usage on high-traffic sites. Drupal 6's ImageCache module (now built into Drupal 7's Image module) can scale images down to the correct size automatically, but JPEG and PNG images still have a lot of space left to give: specialized utilities can crunch them down to save additional space without compromising quality. That's where the ImageAPI Optimize module comes in: it allows you to pipe all uploaded images through those optimizers automatically.

Screenshot of administration screen

Installing ImageAPI Optimize is easy on the Drupal side -- drop the module in place, turn it on, and head to the Image Settings configuration page. Normally, Drupal provides a simple 'quality selector' on that page to control the level of compression for scaled JPEGs. With this module installed, however, site builders can specify the specific compression and optimization utilities that should be used to crunch image files as much as possible.

Most of the supported utilities will need to be installed on your web server before they can be used, but if you don't have server access or just want to try things out, you can point the module to Yahoo's hosted Smush.it service. For designers who hand-tweak every image in Photoshop, or make manual use of "Export to Web" compression features in their image editors, the utilities supported by ImageAPI Optimize may not make a huge difference. But when users or non-designers are allowed to post images as part of Drupal content, ImageAPI Optimize can help ensure that you're saving all the space (and bandwidth) you can.

Nov 19 2012

Drupal's highly granular permissions system allows site builders to control who can create, edit, and delete each type of content on the site. Third-party modules can add additional permissions to that mix as well, paving the way for extremely focused role-based permission setups. The interface for configuring all of those permissions, however, is more than a bit cumbersome. Thankfully, the Permissions Grid module offers a solution: a consolidated permissions page that only includes node and entity type specific options.

Screenshot of administration screen

Installing the module doesn't alter the operation of Drupal's standard permission forms. Rather, it adds an additional "Permissions Grid" page that exposes just node and entity related permissions -- and because Drupal 7's entity system includes Taxonomy terms, Drupal Commerce products, Flag module flag types, and more, those permissions appear here as well. Because the permissions are organized by content and entity type rather than by name (the normal Permission screen's default), it's quite a bit simpler to set them up or skim them to review their current state.

Permissions Grid is a simple module, but if you're frustrated by the complexity of node type permissions, it's a quick and painless solution.

Nov 12 2012

How to Hide Metadata From Showing on a Published Drupal Page

Drupal's FieldAPI gives site builders a wide variety of ways to attach custom data to content types and other entities. Sometimes, though, you're adding metadata -- stuff that shouldn't be displayed directly when a piece of content is shown, but can affect the content's relationships with other posts, or its overall appearance. The easiest way to do this yourself is to hide the field in question, then sneak extra CSS classes or IDs onto the node itself based on the field's value. A new module, Field Formatter CSS Class, automates that process and makes turning simple fields into extra CSS classes a no-brainer.
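
For comparison, the do-it-yourself version described above usually ends up as a preprocess function in a theme or glue module, along these lines (a minimal sketch; the theme name, field name, and class values are hypothetical):

<?php
// Sketch only: read a hidden List (text) field and push its value into the
// node's wrapper classes -- the manual version of what the module automates.
function mytheme_preprocess_node(&$variables) {
  $items = field_get_items('node', $variables['node'], 'field_highlight_style');
  if (!empty($items[0]['value'])) {
    // drupal_html_class() keeps the value safe to use as a CSS class.
    $variables['classes_array'][] = drupal_html_class('highlight-' . $items[0]['value']);
  }
}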

Screenshot of administration screen

As one might guess from its name, the module adds a new FieldAPI Formatter type -- "CSS Class." When selected, the field in question doesn't appear in the visible content of the node or entity in question. Instead, the field's value is turned into a CSS class and added to the entity's wrapper div. This is most useful when combined with a List field or a Text field with a small number of allowed values. When combined with some simple CSS rules in a custom theme (or added via a module like CSS Injector), this allows content editors to make easy choices that affect a node's visual appearance without hacking complex markup or embedded CSS rules into the content itself.

Screenshot of resulting change to site

Field Formatter CSS Class is also smart enough to sanitize the CSS classes that it adds -- even if you allow editors to enter custom text in a field, dangerous HTML attributes and CSS tricks will be scrubbed out. If you need a quick and simple way for editors to pass information about a node or other entity to the theme, Field Formatter CSS Class is a solution worth checking out!

Nov 05 2012

How to Tidy URLs and Relative Links When Moving From Dev to Go-Live (for Drupal 6 and 7)

Few things are as annoying as building something that works perfectly when you create it, but fails when you take it out of the lab. That's how site owners can often feel when content editors create piles and piles of Drupal nodes full of relative URLs in images and links. They look fine on the site, but if the content is syndicated via RSS or Atom, sent out in an email, or otherwise repurposed in another location, the links break. Even worse, hand-made links and images entered while the site is under development can easily point to the outdated "beta" URL. Who can save the day? Pathologic module, that's who.

Pathologic module's configuration options

Pathologic is an input filter -- to install it, you drop the module into your Drupal site and add it to one of your text formats -- Full HTML or Filtered HTML, for example. Whenever content is posted in a format configured to use Pathologic, it will scan the content for URLs and tidy them up. Relative URLs like /node/1 get turned into absolute ones like http://example.com/node/1, URLs pointing to alternative versions of your site like dev.example.com are replaced with your public URL, and so on.
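
The transformation itself is easy to picture. Here's a rough sketch of the idea (not Pathologic's actual code; the function names are made up) using Drupal 7's url() helper to absolutize root-relative href and src attributes:

<?php
// Illustrative only: rewrite root-relative URLs like "/node/1" into absolute
// ones. Pathologic does this -- plus base-URL replacement and more -- as a
// configurable input filter.
function example_absolutize($html) {
  return preg_replace_callback('@(href|src)="/([^"]*)"@', 'example_absolutize_match', $html);
}

function example_absolutize_match($matches) {
  return $matches[1] . '="' . url($matches[2], array('absolute' => TRUE)) . '"';
}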

Pathologic can also standardize the protocol of links inside your site's content. If users edit content over a secure connection, for example, it's easy to mix links using the http:// and https:// protocols -- something that can lead to annoying warnings on some users' machines. For developers with exacting URL-correcting needs, it also supports custom URL modification hooks. Using those hooks, your site's custom fixes (replacing MP3 links with a URL on a different server, for example) can piggyback on Pathologic's configuration and logic.

Pathologic is a workhorse of a module that solves an annoying problem efficiently. If you've run into problems with relative links and staging-server URLs breaking links and images on your RSS feeds, you owe it to yourself to check it out!

Oct 26 2012

Updated Schedule: More Podcasts Means More Awesome

This Friday, we've got two podcasts hitting the streets! Insert Content Here delivers an interview with Erin Kissane, long-time editor of A List Apart and the co-founder of Contents Magazine. Meanwhile, The Creative Process talks to Alex Cornell, author of Breakthrough: Overcome Creative Blocks and Spark Your Imagination.

In case you haven't been paying close attention, we recently split the former "Lullabot Podcast" into 3 new podcasts. The Drupalize.Me Podcast is probably most similar to the old podcast, focused on keeping up with Drupal and the Drupal community. Insert Content Here is our new podcast talking about content strategy and the "content" part of content management. And The Creative Process talks with creative professionals about the process of turning ideas into things.

Thanks to the great feedback we've received about these podcasts, we're updating our schedule. Starting in November, The Creative Process will be published twice a month on Mondays, and Insert Content Here will be coming out twice a month on Fridays! The Drupalize.Me podcast will keep its current schedule on alternating Fridays. Confused? Subscribe to the "All Lullabot Podcasts" feed and you won't miss a thing!

Oct 23 2012

A case study in re-platforming latingrammy.com

Requirements

The previous version of the site was built with Ruby on Rails as a custom content management system that gave site administrators content types, categories, and even translations. This was all stored in a MySQL database with a simple schema that allowed for a pretty easy migration into Drupal.

The task was to re-create the entire site within Drupal, keeping the exact same content types, content, and existing design with the exception of a new homepage, new header and new footer. The new homepage design is based on the current Grammy.com homepage design.

Previous Homepage:
old

Newest Homepage:
new

The Recording Academy wanted to re-platform this on Drupal for a couple of reasons. First, since Grammy.com is on Drupal 6 and will be upgraded to Drupal 7 before the next annual awards show, the work of building the Latin Grammys website with Drupal 7 could be leveraged for the upgrade of Grammy.com itself. Second, it brings the site more in line with Grammy.com in terms of content editing, advertising, and future enhancements, so that any improvements made on Grammy.com can be leveraged here as well.

Migration

For the migration work on this project, we decided to create a custom drush command that could run the various migration scripts. The scripts are split out by content type, with each script migrating the pertinent information for a particular content type. Each one is a simple function that runs bootstrapped Drupal code to take data from the old Ruby database and transform it for the new Drupal database.

You can find the code for this custom drush command, $ drush lmi, in this gist.
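For reference, here is a stripped-down sketch of what such a command can look like. This is not the actual lmi code from the gist; the function names, the 'legacy' database key, and the legacy table and column names are all assumptions for illustration.

<?php
/**
 * Implements hook_drush_command().
 *
 * Illustrative sketch only; see the gist above for the real command.
 */
function lagra_migrate_drush_command() {
  $items['lmi'] = array(
    'description' => 'Migrate one legacy content type into Drupal.',
    'arguments' => array('type' => 'The legacy content type to import.'),
    'callback' => 'lagra_migrate_import_type',
    'bootstrap' => DRUSH_BOOTSTRAP_DRUPAL_FULL,
  );
  return $items;
}

/**
 * Callback for `drush lmi <type>`.
 */
function lagra_migrate_import_type($type) {
  // The old Ruby site's MySQL database is defined as an extra connection
  // in settings.php under the key 'legacy' (an assumption for this sketch).
  $legacy = Database::getConnection('default', 'legacy');
  $result = $legacy->query('SELECT title, body FROM contents WHERE content_type = :type', array(':type' => $type));

  foreach ($result as $row) {
    $node = new stdClass();
    $node->type = $type;
    node_object_prepare($node);
    $node->language = LANGUAGE_NONE;
    $node->title = $row->title;
    $node->body[LANGUAGE_NONE][0]['value'] = $row->body;
    node_save($node);
  }
  drush_log(dt('Imported @type content.', array('@type' => $type)), 'ok');
}
?>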

We then have one simple bash script that runs drush commands in succession, like so:

# Clean install using our installation profile written with profiler.
drush site-install lagra -y
drush upwd admin --password="admin"
drush fra --force -y
drush dis overlay -y
drush cc all

# Migrate taxonomy terms.
drush lmi tag
drush lmi genre
drush lmi award

# Migrate content.
drush lmi event
drush lmi nominee
drush lmi page
drush lmi photo
drush lmi podcast
drush lmi press_release
drush lmi sponsor
drush lmi video import

You may also notice that within that bash script, we are installing Drupal from scratch with a custom install profile called 'lagra'. This installation profile is using Profiler and does some simple enabling of core modules, contrib modules, custom features, and sets a few initial variables within the variables table. If you haven't looked into using Profiler on your projects, you should. It allows you to do some pretty complex operations with very simple syntax right in your profile's .info file.

You can find the code for this custom install profile in this gist.
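As a rough illustration of that syntax, a Profiler-powered .info file can enable modules and seed variables in just a few lines. The keys and values below are invented for this example; check the Profiler README for the exact syntax it supports.

; lagra.info (illustrative sketch only)
name = LAGRA
description = Installation profile for latingrammy.com
core = 7.x

; Enable core, contrib, and feature modules.
dependencies[] = ctools
dependencies[] = views
dependencies[] = features

; Seed a few initial values in the variables table.
variables[site_name] = Latin GRAMMYs
variables[clean_url] = 1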

What this all leads to is a relatively simple and efficient way to completely reinstall the entire project from scratch at any given point. All of the features of the site are in custom Features modules, and the migration scripts run automatically, so a fresh install of the project is as easy as running $ ./docroot/scripts/reset.sh at any time. We found this made for a very rewarding workflow throughout the project.

Translation

Another huge requirement for the site (in fact, one we underestimated) was the use of three different languages on the site: English, Spanish, and Portuguese. For this we had to add a whole slew of i18n_* modules. Here's the list:

  • Internationalization (i18n)
  • Block languages (i18n_block)
  • Menu translation (i18n_menu)
  • Multilingual content (i18n_node)
  • Path translation (i18n_path)
  • String translation (i18n_string)
  • Taxonomy translation (i18n_taxonomy)
  • Translation sets (i18n_translation)
  • Variable translation (i18n_variable)
  • Views translation (i18nviews)
  • I18n page views (i18n_page_views)

Challenges

The old Ruby system had a very competent system for managing translations of content. However, not all content in the system was translated, and so there were some challenges in making sure that we could switch to a different language and not have the page show up empty simply because the content was not translated for that particular views listing. The listing being empty was not ideal from a user's standpoint, nor from a content editor's standpoint. So we opted to do a little extra work in the migration scripts for the particular content types that did not have translations. We simply created the English version of the nodes, and at the same time used the same copy for the translations. This resulted in populated views for each language, and an ideal way for content editors to browse the listings and quickly know what content needed to be translated, hit an edit button, and then translate.
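In Drupal 7 terms, the migration scripts did something roughly like the following for the untranslated content types. This is a simplified sketch: $row stands in for a record from the legacy database, and the real scripts also handled fields, paths, and other details.

<?php
// Save the English node first.
$node = new stdClass();
$node->type = 'page';
node_object_prepare($node);
$node->language = 'en';
$node->title = $row->title;
$node->body[LANGUAGE_NONE][0]['value'] = $row->body;
node_save($node);

// Mark the English node as the source of its translation set.
$node->tnid = $node->nid;
node_save($node);

// Re-use the same copy for each additional language so the listings are
// never empty. Editors can later overwrite these with real translations.
foreach (array('es', 'pt') as $langcode) {
  $translation = clone $node;
  unset($translation->nid, $translation->vid);
  $translation->language = $langcode;
  $translation->tnid = $node->nid;
  node_save($translation);
}
?>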

Another challenge we didn't account for was all of the strings that are not translated automatically. The String translation module was easy enough to use for most cases, but for the times we found it lacking, we ended up using the String Overrides module to fill in the gaps.

We also wanted to keep string translations in code, which turned out to be problematic. We opted for using the $conf array like so:

<?php
global $conf;
$conf['locale_custom_strings_es'][''] = array(
  "You must be logged in to access this page." => "Debes estar registrado para acceder a esta página.",
);
?>

However, this also became an issue once we decided to start using the String Overrides module because the $conf array would simply override any translations we would put in the String Overrides UI. So, we opted to use this $conf array method to get the values into the database initially (hitting save in the String Overrides UI) and then just remove the $conf array. Then, we could use the UI to translate the few strings that remained.
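One way to script that initial seeding step, rather than pasting and saving by hand, is a small update hook that writes to the same locale_custom_strings_* variable that the $conf array overrides and that String Overrides edits. This is a sketch, not code from the project, and the module name is assumed.

<?php
/**
 * Seed the Spanish string overrides into the variables table so they can
 * be managed from the String Overrides UI afterwards.
 */
function lagra_update_7001() {
  $strings = variable_get('locale_custom_strings_es', array());
  $strings['']["You must be logged in to access this page."] = "Debes estar registrado para acceder a esta página.";
  variable_set('locale_custom_strings_es', $strings);
}
?>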

This is still something we haven't solved, so if you have any suggestions on easily keeping string translations in code (à la Features), we would love to know about it in the comments!

Involvement

The following were involved in this project at various stages and in various roles:

In order of code contributions.

brockboland

Brock Boland: Developer at Lullabot

sirkitree

Jerad Bitner: Senior Developer & Technical Project Organizer at Lullabot

davexoxide

David Burns: Senior Developer at Lullabot

makangus

Angus Mak: Developer at Lullabot

blc

Ben Chavet: Systems Administrator at Lullabot

Bao_Truong

Bao Truong: Software/Web Developer at The Recording Academy

Oct 22 2012
Oct 22

Anyone who's ever talked shop about Search Engine Optimization has heard of meta tags. Those bits of extra data jammed in an HTML document's HEAD section provide search engines, browsers, and other software with important information about a document's contents, related links and media, and more. There are quite a few ways to jam meta tags into a Drupal site, and there are even APIs to set the contents of a page's meta tags yourself, if you're a module developer. Fortunately, you don't have to sling code, thanks to the excellent Meta Tags module.

Screenshot of administration screen

Maintained by prolific Drupal developer Dave Reid, Meta Tags provides a centralized configuration screen where common meta tags can be edited sitewide, for specific types of pages, and even for pages belonging to specific entity bundles. Need one set of tags for photo galleries and another for user profile pages? No sweat. Meta Tags uses standard Drupal tokens to build the contents of its tags, and the configuration options themselves can be exported and saved in Feature modules for easier deployment.

Screenshot of advanced options

Third-party modules can add new meta tags to the mix, and add new types of URL paths that need to receive custom tags. The Panels Meta Tags and Views Meta Tags projects, for example, allow each view and panel to have custom, token-driven meta tags just like nodes, users, and so on. Meta Tags is already a popular solution to the thorny problem of sitewide meta tag customization -- check it out if you need more than Drupal core's auto-generated ones.

Oct 15 2012
Oct 15

Almost a year ago, Module Monday featured the Views Datasource module, a tool for exposing custom XML and JSON feeds from a Drupal site. It's a great tool, but what if you need something a bit more... human-readable? A .doc file, or a spreadsheet for example? That's where the similarly-named but differently-focused Views Data Export module comes in.

Screenshot of administration screen

Views Data Export provides a custom View display type optimized for producing downloadable files in a variety of formats. Need a Microsoft Excel file with a list of the latest 100 posts on your site? Looking for a Microsoft Word .doc file that collects the text of the site's top-rated articles? No problem. The module supports XLS, DOC, TXT, CSV, and XML exports with format-appropriate configuration options for each one. In addition, it uses Drupal's batch processing system when it's asked to generate extremely large export files, rather than timing out or giving out-of-memory errors.

Screenshot of resulting change to site

Whether you're putting together reports, giving users an easy way to download product data sheets, or conducting a quick, ad-hoc audit of your site's content, Views Data Export is a handy tool to have around. Its batch processing support makes it more reliable on large sites, and its support for a variety of "day to day" file formats means you'll spend less time juggling your data and more time making use of it.

Oct 11 2012
Oct 11

Some "Highlights" of an Approach to Content Syndication

I have run into a number of clients that want to create a system to syndicate content to multiple sites. There are several ways to set a system like that up, perhaps using the Migrate or Deploy modules. But it’s also possible to do this using Services and Views on the content source and Feeds on the consuming sites.

The development team at Highlights for Children is diving into Drupal in a big way, and Lullabot has been helping them. They are building a suite of Drupal sites that need a way to consume content from a central Drupal source. The idea of solving this problem using Services and Feeds was a good fit for their requirements, so we worked through the details of how to get this accomplished. Highlights wanted to share the recipe for doing this with the community, and I pulled the details together into this article. We made the example a little more generic, so it would apply to other situations, and focused on ways that you can accomplish this without code. The result is a general recipe that we hope will be widely useful, rather than an exact description of the way they ended up using these tools.

Those who aren’t trying to solve this particular problem may still find it interesting to see an example of how to configure and use Services, Views, and Feeds together. Services and Feeds are two very powerful and interesting modules, but sometimes they can be confusing to configure, and a step-by-step illustration may make it more clear.

Terminology

In this recipe we have two different Drupal sites. One of them contains content that can be shared (we call that the ‘Source’) and the other needs to consume that content (we call this the ‘Destination’). The Source will share its content using Services and Services Views. The Destination will consume the content using Feeds.

Modules needed on the Source

Modules needed on the Destination

Prepare the Source Content

First, create the content type(s) and content on the source. If the content needs to have references to other content, use the EntityReference module to create the reference fields (http://drupal.org/project/entityreference).

Set up Services on the Source

Enable the following modules on the source site:

The REST Server requires that you add spyc.php to sites/all/modules/services/server/lib. You can get that file from http://code.google.com/p/spyc/.

Create a view of the content you want to syndicate. Instead of creating a page display, create a ‘Services’ display. On the Services display, use the style option to create an unformatted list of fields. Then add each field that you want to move to the Destination to the list of fields in the display.

When editing each field, set up a custom value key with the name you want to use for this value in the XML output, keeping in mind that each key needs to be unique in the feed.

As you add the fields, you can see in the preview an array of the values that will be displayed in the XML. Many fields in the field API use the entity id to retrieve their values, so you will only see that value in this array. Don’t worry, it will expose the right field value in the XML.

Give this view a path. This will be the services path for this view.

Next go to admin/structure/services and create a new service. Give it a name, use the REST server, and give it a ‘Path to endpoint’ of ‘rest’.

After it has been created you will see a link next to it to ‘Edit Resources’. That will bring up a screen like the following that shows the possible resources for this service. You will see a resource for ‘Views’ and for the path you created in the view. Check both of them.

You can see if this is working by navigating to that path, like http://example.com/rest/articles. You should see something like the following:

Once you have confirmed that your XML service is working correctly, you can switch to the destination.

Prepare the Destination

Next, create the content type(s) on the destination. If the content needs to have references to other content, use the EntityReference module to create the reference fields (http://drupal.org/project/entityreference). We’re going to use Feeds to populate the content from the XML we just created on the Source.

Set up Feeds

Enable the following modules on the Destination:

Create a new content type for the feed (this is not the same as the content type we will use for the nodes that the feed will create). It only needs a title, no body, so you can remove the body field.

Go to admin/structure/feeds and click the link to ‘Add importer’. Give the item a name and description.

Once the importer is created, you will see that it can be edited.

Edit the importer and you will see that you can set up various components. We will attach it to the Feed content type we just created, use the HTTP Fetcher to retrieve it from the XML link we just created on the Source, use the XPath XML parser to deconstruct the values and move them into the right fields, and the Node processor to create nodes from the results.

Create the node mapping before you configure the XPath parser. This is somewhat confusing. First set up the Node processor to create nodes of whatever content type you want to contain the imported values; in this example that is ‘Article’.

Once you have done that, use the Mapping link to identify what will go into each field in that content type. When using the XPath parser, we are going to populate each target field with an XPath value, so we need to create a mapping item for each target field that gets its value from XPath. This may look odd, but it is correct. You will end up with a list of fields that all have the same source, the XPath parser. It looks like the following screenshot:

The other tricky part of the configuration is the XPath parser, especially if you’ve never used XPath. XPath is a standard for locating values in an XML file. If you’re not familiar with XPath, there is reference material at http://w3schools.com/xPath.

We need to identify an XPath expression for the ‘Context’, which is the place in the XML that represents an individual node, and then one for each of the fields within that node. The configuration ended up looking something like the following:
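For example, if the Source's XML wraps each row in an <item> element and you used value keys of title and body when building the view, the settings would be along these lines. The <item> wrapper is an assumption; adjust the Context query to match your actual feed.

Context (selects one node's worth of XML): //item
Query for the Title mapping: title
Query for the Body mapping: body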

Import the Content

We’re finally ready! Create a new Feed node. Input the services path we created above. Be sure to append ‘.xml’ to the path so the parser knows it is dealing with XML.

Once the feed node has been created, you will see an ‘Import’ tab on it. Click on that to actually import the data. We enabled Views so we can also see a ‘Log’ tab on the feed node. That tab shows us a view of the log messages that were created when trying to import the feed.

References, A Special Problem

Moving content that has references in it from one site to another creates a special problem. The reference field on the source site is pointing to the nid of the referenced material, but it is using the nid from the source site. When you move all this content to another site, it will no longer have the same nid. Therefore, references to that material need to be updated to contain the right nid for that item on the destination.

For example. You might have a node 70 on the source that has a reference to node 89 on the source. When you move these two pieces of content to another site, the item that was node 70 on the source might become node 650, and the item that was node 89 might become node 777. The destination reference to node 89 needs to be updated so it now points to node 777.

We’re using Feeds to pull in the content from the Source site. Each field on our destination site is managed using a handler that understands where its value belongs on that particular type of field. We need the Entityreference handler to be smart enough to figure out which value is actually needed on the Destination.

To do that we currently need an EntityReference patch, from http://drupal.org/node/1616680. Once that is applied, EntityReference will analyze each value that is passed into it, look through the Feeds tables to see if the referenced node has already been created. If it has, it will swap in the nid of the node that was created from that value. If it can’t find information that tells it that node has already been created, it will wipe out the value, because a reference to the wrong or a non-existing node would not work correctly.

This process will work best if the referenced material is a different content type. That makes it possible to pull the referenced material before the content that references it, so that all the referenced material exists, to keep the reference links working.

If that is not possible, the alternative is to run the migration twice. By the second pass all the nodes will exist and the links can be created. For this second pass to work correctly, you have to choose the option to ‘Update existing nodes’ in the Node processor settings.

Images, Another Special Problem

Another problem with using Feeds and Services this way is finding a way to get images to import correctly from the central server. If the images are available at publicly accessible URLs, the following method will work.

In the view of the image field, set the field up to ‘Display download path instead of file storage URI’. This will create XML that lists the image path like ‘public://field/image/imagefield_O0ivdK.png’. That isn’t quite what we need to access it from the Destination. So on the Destination we need to add one more module to our mix, the Feeds Tamper module (http://drupal.org/project/feeds_tamper).

Feeds Tamper lets us massage our Feed values before trying to do something with them. In this case we want to replace ‘public://’ in the image path with the actual path to the image on the Source, like ‘http://example.com/sites/default/files/’.

Once Feeds Tamper is enabled, you will see a ‘Tamper’ tab on the Feeds importer. It will show us a list of all the fields that have been defined. Select the field that contains the image path, and then choose to use a regular expression to replace the value. It will look something like the following:
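Concretely, the replacement being configured is equivalent to this bit of PHP, where example.com stands in for your Source site:

<?php
// Turn a stream wrapper path from the Source's XML into a full URL that
// the Destination can download the image from.
$path = 'public://field/image/imagefield_O0ivdK.png';
$url = preg_replace('@^public://@', 'http://example.com/sites/default/files/', $path);
// $url: http://example.com/sites/default/files/field/image/imagefield_O0ivdK.png
?>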

Now when you import nodes that have images, the images will be retrieved from the full image url and be properly transformed into images on the Destination site.

Conclusion

This should get you started on creating a simple, no-code, content syndication system. I want to thank the team from Highlights for Children for their willingness to share this information with the Drupal community. As noted above, there are other ways to solve this problem, in particular by using the Migrate or Deploy modules, but this is the approach that seemed to fit their needs the best.

If you haven’t used these modules before and have been wondering how they can work together, you may want to try this recipe out.

Oct 09 2012
Oct 09

In the first episode of Insert Content Here, Jeff Robbins and I chatted about the temptation and the danger of the "Dreamweaver Field" content model. When content types are just wrappers around giant chunks of hand-formatted HTML, editors have lots of flexibility but it's all but impossible to repurpose the hard-coded content for new designs and publishing channels.

In presentations, articles, and her upcoming book, Content Strategy for Mobile, Karen McGrane describes the problem as a war of "Blobs" versus "Chunks." The challenge is figuring out how to decompose a site full of inflexible HTML blobs into discrete, bite-sized fields. There's no magic bullet (a model that works for one project can fail miserably for another), but over the past several years we've accumulated a few useful rules of thumb for "deblobbing" a site's content.

The Basics

Don't skimp on the content inventory and auditing process. Figure out what's there, what's going to be tossed, and what you want to have on the site. This is step zero, really: the modeling process is infinitely harder if you're dragging around piles of HTML that don't match what you're trying to build.

Clump similar content. If your existing site doesn't have discrete content types, figuring out which pages are similar to each other is the next stage. Product reviews, staff bios, press releases, blog entries, portfolio slideshows… You know the drill. Remember to look for pages and content types that are really composites of other, smaller units of content. Often, some of the most complex pages and content types can be implemented as rule-based or curated collections of smaller, more manageable content types.

Look for common "chunk" types. Once you've grouped your blobby content types into similar pools, zoom in and look for patterns that are unique to each content type. These are potential candidates for dedicated fields. Some of the common field types we encounter include:

  • Links to related content
  • Links to downloadable files and embedded media that occur at consistent locations
  • Publication or event dates
  • Pull quotes, hand-written taglines, author bios, and summaries
  • Business and event addresses
  • Geographical locations and maps
  • Lists of information like features or rules and requirements
  • Ratings, prices, and product codes

Most CMS's support multi-value fields that can be used to model repeating elements like feature lists or multiple file attachments. Be sure to note which elements occur once, and which ones repeat.

Rinse and Repeat. Once you've broken things into multiple content types and identified the discrete fields on each one, look for overlaps. Are there several content types that share the same list of fields? Consolidating them into a single type might simplify things. Is there one "Godzilla" content type with dozens and dozens of fields? It might really be several types that should be teased apart. The first pass of a content model is a lot like the first draft of an essay: there are always rough edges and awkward parts that need work.

The Tricky Bits

After identifying all of that easy stuff, large and complex sites usually have quite a few ugly blobs that still need to be broken down.

Identify composite content. Sometimes, elements of one content type need to be broken out into their own sub-content-types, with simple parent-child relationships connecting them. Galleries that contain multiple photos, albums that contain multiple songs, and curated pages that include teasers for other content are common examples. If several content types in your model contain the same cluster of fields (like photo, caption, byline, and link), consider splitting out the cluster into its own dedicated content type. Treating those scenarios as relationships between discrete elements can often simplify complex models.

Look for common formatting complexities. If you have wireframes or existing pages, look for complex visual formatting around certain elements, in particular the stuff that requires lots of hand-written HTML to implement in a "content blob." Comparison tables are a common offender here. Breaking these out into dedicated fields whenever possible can help prevent massive pain when a piece of content needs to be displayed differently in new channels.

Watch for design elements that change based on context. If you're building a responsive or adaptive site, or have access to designs for mobile apps or other output channels, keep an eye out for elements that appear differently or conditionally based on breakpoints, target device, and so on. It seems obvious, but controlling small elements is infinitely easier when they're broken out as discrete fields.

Plan for searching and filtering. Try to identify as many different filtered lists of content as possible. Faceted search screens, topical landing pages, author-based blogs, product lists, and so on can't be built efficiently without the right data. If the lists and search indexes that you need don't correspond to fields you've already broken out, remember to add additional ones for the required metadata.

Isolate the crazy. Inevitably, complex designs end up requiring "helper" content that doesn't seem to fit the well-understood content types the site's stakeholders imagine. Slides for promotional rotators, free-floating promotional microcontent for landing pages… These tend to be highly variable and often need the kind of raw-HTML flexibility that we're trying to avoid. Isolating them in their own content types and living with the cordoned-off craziness can help simplify models with overloaded, field-heavy primary types.

Recognize when markup is good enough. Despite all the talk about the dangers of blobs, it is possible to go too far. Replacing every HTML div and span with a dedicated field simply to avoid raw markup is overkill, and can easily result in 'Edit Screens of Doom.' Modern WYSIWYG editors generally support plug-in systems, and developing a button to "insert caption here" or "style paragraph as warning" can be a simpler solution. This is where I repeat the warning: There's no perfect content model, only the one that works for your project.

Test the Model

The long-term impact of a bad model on a site's maintainability can be frustrating, but it's also impossible to predict every future application the content will be used for. Iteratively testing the model against real-world content and potential applications is critical.

Put real content into the model. It seems obvious, but it's easy to go down the structural rabbit hole and forget the existing pool of content. Circle around frequently and ask, "How does the content we have in hand fit into these content types and fields?" Look for odd mismatches, required fields that the existing content will leave unpopulated, and so on. Sometimes, the design and the model have to change for practical reasons. Other times, clients or your team will have to update the content to close the gap.

Plan for three channels. When building a model (or a software API), it's easy to imagine you're creating a reusable system while unintentionally baking in assumptions that make real reuse difficult. If you need content that will adapt to reuse in new channels, be sure that you keep at least three in mind -- think of them as user personas for the model. Desktop web, small-display devices, and rich HTML newsletters are common answers for some businesses. Even if you're only building one of them at first, proposed approaches can be compared against them to ensure you aren't painting yourself into any corners.

Social sharing is a publishing channel, too. Twitter and Facebook can automatically embed headlines, summaries, and preview images when users paste one of your site's links -- if you provide the metadata that they're looking for. If your model doesn't account for those, it will be much tougher.

Let real users work with it. If you're using a web framework that allows rapid creation of a content model before the full site is finalized, or you can produce wireframes of some sample content input and editing screens, get user feedback sooner rather than later. The people who spend their time creating and maintaining the content can often spot problems and inconsistencies that would otherwise remain undiscovered until launch.

No Rules, Just Lessons

None of the above ideas are hard-and-fast rules, obviously. At Lullabot, we've spent years building out complex sites (and the underlying content models) for media publishers, government agencies, corporate intranets, ecommerce sites, and more. And yet, every new client comes with surprises and challenges.

In the coming months, I'll be digging deeper into each of these ideas with follow-up articles and interviews on Insert Content Here. In the meantime, what useful heuristics do you use when breaking down ugly "content blobs" into reusable chunks? Feel free to chime in with comments!

Oct 09 2012
Oct 09

With Drupal's custom content types, custom fields, and the addition of the popular Entity Reference module, site builders can whip up complex content models without a line of code. Editing those complex inter-related content types isn't always easy for content creators, though. The "Create a node, save it, then create its parent, then link the two" workflow is frustrating and error-prone. Fortunately, the Inline Entity Form module can help.

Screenshot of administration screen

Like many of the nifty Drupal tricks these days, Inline Entity Form is a custom field editing widget. On any content type with an Entity Reference field, choose it as the field's editing widget, and the rest is magic. When you create a new piece of content, you'll get the form to create the referenced entity on the same form. It works smoothly with multi-value fields, and can use a simple autocomplete picker to link entities that already exist.

Screenshot of resulting change to site

The challenge of creating and populating complex nested content relationships has always been a tricky one in Drupal. Inline Entity Form was created by the Drupal Commerce team to simplify the management of complex product collections, and it solves the problem quite nicely.

Oct 01 2012
Oct 01

Setting up geo-location data on a Drupal site is startlingly straightforward these days: pop in Geofield or Geolocation module and start adding latitude and longitude data to your nodes. Once you have the data, setting up maps and other complex geographically-aware content listings is easy. Unfortunately, it's that "getting the data" part that can be thorny. If users are willing to type in exact street addresses or latitude/longitude pairs themselves, there are more than a few options. If you need to transform other kinds of data into useful location data, though, Geocoder might just do the trick.

Geocoder Google options

Geocoder provides a custom field editing widget for several popular location storage fields (Geofield, Geolocation, and Location module). Rather than offering a custom input form, however, it lets the site administrator pick another field on the content type that contains some geographical information.

Are you creating nodes whose titles are city names? Tell Geocoder to send that title to Google's geocoding service and store the resulting data in the Geofield. Are your site's users uploading photos with location information from a camera phone? Tell Geocoder to use the image field as its source, and the location data will be extracted automatically. The actual location storage fields -- the ones that hold latitude and longitude values -- are kept hidden from users and editors entirely.

Geocoder EXIF data extraction

Geocoder supports an impressive array of encoding APIs and data extraction mechanisms, from uploaded .KML files to Mapquest-parsed street addresses. It's an excellent swiss army knife for sites that need user-entered geo data, and a great addition to any site builder's arsenal.

Sep 28 2012
Sep 28

We've launched a great new series by Jen Lampton, Building Websites in Drupal 7 Using Panels. The Panels module is a very powerful tool for site builders. Jen covers everything from the Panels interface to using variants for multi-scenario layouts. The first two videos are available for free, and cover getting started with Panels, creating a multi-column home page, and working with Panels panes.

Setting up a Multi-column Home Page

Adjusting the Settings for Each Panel Pane

Sep 27 2012
Sep 27

Lullabot's Drupal training site turns 2

It's been almost 2 years since we launched Drupalize.Me and I'd like to take a moment to appreciate some of the site's recent accomplishments.

Over 600 Videos

A few weeks ago, Drupalize.Me product manager Addison Berry announced the 600th video posted to Drupalize.Me! Members now get access to over 233 hours of content on an immense range of Drupal-oriented topics from simple content creation to site building tricks and database server performance optimization. Drupalize.Me's most popular videos cover coding for and using Views, using the Calendar and Date modules, configuring WYSIWYG editors, Display Suite, Organic Groups, Drupal 7 module development, and more. The Drupalize.Me team has been really amazing – posting new videos every week and paying close attention to member requests for new topics.

Over 2,000 Subscribers

Word about Drupalize.Me has spread and I often see people on Twitter telling one another that for them, Drupalize.Me has become the way to learn and keep up with Drupal techniques. Drupalize.Me has iPhone/iPad, Android, and Roku apps so members can watch the videos on their mobile devices or televisions. Drupalize.Me has also partnered with Acquia to offer discounted memberships to Acquia Network subscribers.

As word has been getting around about Drupalize.Me, subscriber numbers have been growing and recently crossed 2,000 simultaneous subscribers. We're reaching more people on a monthly basis than most large-scale Drupal events. We couldn't be more excited about the response!

Drupalize.Me now has a staff of 3 full-time people creating videos, maintaining the site, adding features and handling customer support. This team is augmented by others at Lullabot who step in to help with expert video training, development, design and support. Drupalize.Me now represents more than 15% of Lullabot's budget and has become a great outlet for the Lullabot team to share the knowledge that we've gained building the high-profile websites that represent the majority of our work.

New Features

The Drupalize.Me team has been listening closely to subscriber feature requests and we've gotten lots of new features over the past 2 years. They've arranged videos into "series" collections and allow users to watch them consecutively. They've also added curated "guides" collecting videos into linear curriculum for different types of members. They've also greatly improved the user dashboard pages allowing users to manage their queue as well as see the listing of recently watched videos and even displaying a line graph so members can see their progress within the videos they've watched. The team also added the ability to store pause points and allow users to resume from exactly where they left off - even if they're resuming on a different device such as their phone or connected television.

And speaking of connected televisions, we've got apps! We've got an iOS app for your iPhone/iTouch/iPad which can AirPlay to your AppleTV. We've also got an Android app and an app for the Roku Streaming Player box. You can pick up a Roku box for as little as $50, hook it up to your television, and watch all of the Drupalize.Me videos from your couch. It's a great way to learn.

Group Memberships

Drupalize.Me also offers group memberships. If you want Drupal training for your entire group, department, company, or institution, we offer that too. Group accounts greatly reduce the price of individual accounts while still allowing each group member to manage their own video queue, resume videos, and see their own history. We offer both managed group plans and IP-range plans to allow access to all devices at a physical location such as a library or campus.

Subtitles, Transcripts & Translations

Perhaps the greatest new feature at Drupalize.Me is the ability to offer subtitles, transcripts, and translations for the videos. We have many international subscribers who, while they speak and understand English, sometimes can't keep up with the rapid-fire technical information in the videos. They've been asking for English-language subtitles and transcripts so they can follow along better. We're proud to say that we've added this functionality as well as functionality to provide complete translations with subtitles in other languages! 60 of our recent videos as well as the complete Introduction to Drupal guide already have transcripts and subtitles. And all new videos published on Drupalize.Me in the future will have transcripts and subtitles.

We're currently looking for volunteers to do foreign language translation for Drupalize.Me. If you're a bi-lingual Drupalist and you'd like to help bring these Drupal training videos to the world, please contact us!

Drupalize.Me & Videola

One of the Drupalize.Me team's biggest accomplishments is building the Drupalize.Me site itself. The team has built a great Drupal-based platform which manages both the permissions and delivery of adaptive bitrate streaming video; recurring subscription billing and administration; video content categorization, listing, and organization; mobile app and IPTV delivery with seamless pause-on-one-device-resume-on-another functionality; and now even multi-language subtitles and transcripts.

As we've been building Drupalize.Me, we've been funneling this work and knowledge into Videola, a platform to provide this functionality to others wanting to build subscription-based or IPTV-oriented video sites. In short, Videola is a framework for building sites like Drupalize.Me... or like Netflix, Hulu, or Amazon video-on-demand. Videola can do everything that Drupalize.Me can do and more. If you'd like to build a site like this, please contact us and we can talk to you about getting you set up with a Videola site of your own.

Onward

Addi and the rest of the Drupalize.Me team have been doing a lot of training at DrupalCons, DrupalCamps, and other events. They've been very involved in the Drupal Ladder project and have posted a series of free videos to help new Drupalers get involved with core development.

Drupalize.Me recently started its own podcast (which picks up where the long-running Lullabot Drupal Podcast left off). Every other Friday, the Drupalize.Me team is posting a new podcast with interviews and discussions to help listeners keep up with the ever-changing world of Drupal. The team is also constantly upgrading and improving the site and they've got lots of great feature ideas for the future.

I couldn't be more proud of the work that's been done by Addi Berry, Joe Shindelar, Kyle Hofmeyer, and everyone who's helped with Drupalize.Me over the past 2 years. The site just keeps getting better and better. At this rate, I fully expect that 2 years from now I'll be bragging about their Drupal training neural implants and interstellar 3D streaming. But for now, I'm really happy with where we are – doing great, having fun, and sharing knowledge with people, empowering them to do great things.

Sep 24 2012
Sep 24

Drupal's built-in support for generating RSS feeds has long been an easy selling point. Do you have content? If so, Drupal will generate a feed of it, easy-peasy! Unfortunately, that RSS support hasn't kept up with the flexibility of Drupal's FieldAPI. The RSS format supports extensions to handle data types like file attachments, location data, and so on. By default, though, Drupal jams all of your custom field data into the body of the RSS feed as simple text. That's where RSS Field Formatters comes in.

Screenshot of administration screen

RSS Field Formatters is a collection of slick formatters for Date, Geo Location, User Reference, Taxonomy, File Upload, and Media fields. Just set up your content type with the fields you'd normally use -- then, for the content type's "RSS" build mode, select the RSS-specific formatters for the fields you'd like to appear in proper feed elements.

Screenshot of the RSS feed

The results aren't flashy, unless you're an XML aficionado, but the improvement is important. If you're using your RSS feed to move content from one site to another, the properly-formatted data fields can be imported more efficiently. In addition, geolocation and media attachments can be parsed and presented to users by many news feed readers: it's always better to pass along the data in a format they can work with.

RSS Field Formatters is smooth, easy to set up, and does a great job of sprucing up your RSS feeds with complex data that would otherwise go to waste.

Sep 21 2012
Sep 21

Listen online: 

Join Addi while she chats with Jen Lampton about her Drupal history, her current work in Drupal 8, and big, scary animals. Jen is leading the charge to get a new theme system into Drupal core, based on the Twig template engine, so we talk quite a bit about what that means, how it would change Drupal, and how everyone can help make it happen.

Podcast notes

Release Date: September 21, 2012 - 10:00am

Album:

Length: 31:59 minutes (18.3 MB)

Format: mono 44kHz 80Kbps (cbr)

Sep 17 2012
Sep 17

Drupal's Form API allows developers to easily integrate complex validation code into the workflow of a user input form. Unfortunately, the actual feedback from that form validation code is often a bit clunky by modern web standards. Fill out a form, submit it, and wait for a new page to load: only then do you discover that your cat's zip code was required. Providing speedy, immediate feedback on the client side is commonplace on modern web sites; fortunately, the Clientside Validation module makes it easy to implement on Drupal sites as well.

Screenshot of administration screen

Clientside Validation serves as a wrapper around the existing jQuery Validate plugin. It allows site administrators to set up the basic parameters for the plugin's work, like when validation actions should be triggered, where they should be displayed, and so on. Most of the display options look a bit clunky out of the box, but with some CSS styling to position them (next to an invalid field, hovering contemptuously above an incorrect zip code, etc.) they can be made to fit most designs.

Once the module is activated, basic FormAPI validations (required fields, for example) appear instantly on the client side without the need to submit the form. Even better, the module integrates with a host of additional modules like FormAPI Validation, Field Validation, and WebForm Validation. Custom validation rules added by those modules can be handled on the client-side, as well.
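For instance, an ordinary FormAPI element like the one below (an invented example form, not code from the module) gets its required-field check enforced in the browser as soon as Clientside Validation is enabled:

<?php
/**
 * An ordinary form builder; nothing Clientside Validation specific here.
 * The #required rule is picked up automatically and enforced in the
 * browser before the form is ever submitted.
 */
function example_feedback_form($form, &$form_state) {
  $form['zip_code'] = array(
    '#type' => 'textfield',
    '#title' => t('Zip code'),
    '#required' => TRUE,
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Send'),
  );
  return $form;
}
?>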

Screenshot of resulting change to site

If you're trying to smooth the rough edges off of complex input forms, providing better validation cues is a great place to start. The Clientside Validation module does a great job of integrating a slick jQuery library with a pile of other validation-related modules. Give it a try!

Sep 10 2012
Sep 10

Drupal's Node Reference and Entity Reference fields do a lot of heavy lifting for complex sites with tangly content models. They allow one piece of content to point at another one, and those relationships can be leveraged when building Views of content, formatting individual pieces of content for display, and so on. Unfortunately, when it comes time to display the output of a reference field, "Print the title of the referenced entity" is usually the only option. What if you want something else -- like its description, or the bio of its author? Sure, you could crack open a text editor, build a custom module, and write your own FieldAPI formatter plugin. Or, of course, install the Token Formatters module.

Screenshot of administration screen

With the module installed on your site, one subtle but important new addition appears: a "Token Formatter" for any FieldAPI node, user, or entity reference fields. Customizing the formatter's settings for a given field lets you use tokens to specify the text that will be printed and (optionally) the URL that the formatted output will link to. Want to print out a node along with the name of its author? Want to display a user reference with a link to the user's home page, instead of their Drupal user profile page? Just use the appropriate tokens, and the module does the rest.
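For example, on a node reference field the referenced node's tokens are available, so the two settings might be filled in with patterns like these ([node:title], [node:author:name], and [node:url] are standard Drupal 7 tokens; the setting labels are paraphrased):

Text to output: [node:title] by [node:author:name]
Link to: [node:url]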

Screenshot of resulting change to site

The only serious downside to the module is the difficulty of extracting complex field values (like formatted images) using the token system. While it's possible, the module is better suited for textual values and links. In addition, there's no token reference for site builders to look at when filling out the formatter's settings. Because there are so many tokens, it's a real challenge to guess the right ones. Token module provides a standalone page with just such a list, but if you're not familiar with it, this module's configuration screen can be a bit of a stumper. Those two issues aside, Token Formatters is a quick way of tweaking a reference field's output without writing any custom code.

Sep 03 2012
Sep 03

There comes a time in every Drupal site builder's life when a content type must redirect to another page. Perhaps your site contains a collection of links, and you'd like visitors to see the destination site itself rather than the node that describes it. Perhaps your site features small chunks of promotional content that point to other nodes, and shouldn't be treated as primary content themselves. Wouldn't it be handy to redirect to the destination content automatically when the promotional node is visited? It certainly would -- and that's precisely what the Field Redirection module delivers.

The heart of the module is a well-implemented field formatter. If you have a content type that uses a Link field, a Node or User reference field, or an Entity Reference field, you can assign it the Field Redirection formatter. When the node (or any other entity with fields using this module) is viewed, Field Redirection doesn't render the field as HTML -- instead, it redirects the user's browser to the referenced user profile page, node page, link URL, and so on.

Screenshot of Field Redirection configuration screen

Because Field Redirection sidesteps the normal FieldAPI behavior of 'Building HTML for Display,' there are important caveats. The formatter should only be used on the Full Content build mode for an entity. If it's used for teasers, RSS feeds, Search indexing, Views listings, or other modes where it could be unintentionally triggered, your site will be redirecting itself to new URLs instead of executing important code. That warning aside, the module is an elegant and easily customizable solution to a common problem. If you're building lightweight "linking" content types that point at other elements or other URLs, Field Redirection can make life quite a bit easier.

Aug 27 2012
Aug 27

One of the basic building blocks of a functioning publishing tool is the ability to view, edit, and preview content before it's published. In older versions of Drupal, that was an all-or-nothing affair: either you gave people the dangerously overpowered "Administer Content" permission, or you downloaded a comprehensive access control module and started building a custom permissions system of your own. Drupal 7 improved things: roles can be given permission to view their own unpublished content, or given the ability to bypass all content access control, but it's still a bit clunky for a lot of very common use cases. Thankfully, there's the aptly-named View Unpublished module. It brings juuuuust enough control without requiring a complex permission matrix.

Screenshot of administration screen

Once you've installed View Unpublished, a set of new permissions becomes available. You can grant individual roles the ability to view other users' unpublished content on a per-content-type basis. With that one addition, it becomes much easier to build simple editorial dashboards that give editors or contributors limited access to other writers' work before publishing, or for all the members of a team to see what the site will look like once content goes live.

Normally, we'd include a screenshot of the module 'in action,' but the simplicity of the tool makes that a bit tough. When View Unpublished is working, most users don't see anything at all! There are a few limitations to the module's streamlined approach, of course. Permission to publish and unpublish content is still handled by other tools, and it's up to you or another site builder to construct a functional "dashboard" for unpublished content if it isn't already listed on the front page or another easy-to-find location. For quick and simple access permissions on limited publishing workflows, however, View Unpublished is a great tool.

Aug 20 2012
Aug 20

Managing user roles in Drupal is easy -- at least, technically easy. It's a bit trickier if you have a large user base and need to manage a steady stream of people requesting access to specific privileges, new roles, or additional responsibilities. If you're in that situation, get ready for some quality time with your email client, and set up a regular appointment with Drupal's User Management screen. Fortunately, the Apply For Role module can simplify the process.

Screenshot of Apply for Role management screen

With Apply For Role, users can visit a new tab on their user account page and request access to a new role on the site. The request is queued up for an administrator, who can review, approve, or deny requests from a central management page. Requests can also be deleted -- allowing the original user to re-submit their request later -- or denied, ensuring that they can't send in more requests for the same role.

Screenshot of Apply for Role configuration form

The module allows site builders to set up which roles can be applied for (to prevent users from getting a glimpse at roles they should never have access to), prevent or allow multiple simultaneous role requests, and so on. For site builders who want extra control, the module also provides full Views integration, as well as integration with Trigger module. You can easily build a custom administration screen to manage role applications, complete with notification emails.

There are a few noticeable gaps in Apply For Role's functionality. The application form that users fill out is spartan and lacks any explanatory text; giving administrators a way to add more help text to that page would go a long way. In addition, it would be great to be able to customize the names of the roles that are presented to applicants. Most sites' roles are never shown to normal users, so they're often named for brevity rather than clarity. Both of those oversights can be remedied with some minor hook_form_alter() work in a custom module, but it would be great to see them integrated. Even without those wish-list items, Apply For Role is a slick solution to the problem of processing large numbers of permission-change requests. If your site's user management workflow is a good match, you should definitely check it out.
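As a rough sketch of the first fix, the missing help text can be injected with a few lines in a custom module. The form ID below is a guess rather than something taken from the module's code, so verify it (with the Devel module, for example) before relying on it.

<?php
/**
 * Implements hook_form_alter().
 *
 * Adds explanatory text to the Apply For Role application form.
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'apply_for_role_apply_form') {
    $form['help'] = array(
      '#markup' => '<p>' . t('Choose the role you need. An administrator will review your request and follow up by email.') . '</p>',
      '#weight' => -10,
    );
  }
}
?>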

Feb 29 2012
Tom
Feb 29

Drupal Apps are crazy cool. They make Drupal site building fast and fun for everyone. That’s right, everyone and anyone. Even if you are new to Drupal. They are that simple.

Despite their coolness, apps are new to a lot of people. So we thought we would put together this quick Drupal Apps demo video.

After the demo, go ahead and take apps for a spin. A test drive is the best way to understand what apps can do for you. These instructions will get you started.

