Jun 06 2018

I recently had the privilege of helping PRI.org launch a new React Frontend for their Drupal 7 project. Although I was fairly new to using React, I was able to lean on Four Kitchens’ senior JavaScript engineering team for guidance. I thought I might take the opportunity to share some things I learned along the way in terms of organization, code structuring and packages.


As a lead maintainer of Emulsify, I’m no stranger to component-driven development and building a user interface from minimal, modular components. However, building a library of React components provided me with some new insights worth mentioning.

Component Variations

If a component’s purpose starts to diverge, it may be a good time to split the variations in your component into separate components. A perfect example of this can be found in a button component. On any project of scale, you will likely have a multitude of buttons ranging from actual <button> elements to links or inputs. While these will likely share a number of qualities (e.g., styling), they may also vary not only in the markup they use but in their interactions as well. For instance, here is a simple button component with a couple of variations:

const Button = props => {
  const { url, onClick } = props;
  if (url) {
    return (
      <a href={url}>
        ...
      </a>
    );
  }
  return (
    <button type="button" onClick={onClick}>
      ...
    </button>
  );
};

Even with an example this simple, why not separate it into two distinct components? You could even change this component to handle that fork:

function Button(props) {
  ...
  return url ? <LinkBtn {...props} /> : <ButtonBtn {...props} />;
}

React makes this separation so easy that it really is worth the few minutes it takes to define components that are distinct in purpose. Testing against each one also becomes a lot easier.
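As a sketch of that testing point, the fork itself can be reduced to a plain function and asserted against directly (the names below are illustrative, not from the actual codebase):

```javascript
// Illustrative sketch: extract the Button fork into a plain function
// so each variant choice can be asserted in isolation.
function buttonVariant({ url }) {
  // A url prop means the component should render as a link.
  return url ? 'LinkBtn' : 'ButtonBtn';
}

console.log(buttonVariant({ url: '/about' })); // → 'LinkBtn'
console.log(buttonVariant({}));                // → 'ButtonBtn'
```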

Reuse Components

While the above might help with encapsulation, one of the main goals of component-driven development is reusability. Once you have built and tested something well, building something nearly identical is not only a waste of time and resources; it also opens you up to new and unnecessary points of failure. A good example from our project is a couple of different types of toggles. For accessible, standardized dropdowns, we introduced Downshift, a well-supported external library:

In a separate part of the UI, we needed to build an accordion menu:

Initially, this struck me as two different UI elements, and so we built them as such. But in reality, this was a missed opportunity to reuse the well-built and tested Downshift library (and in fact, we have a ticket in the backlog to do that very thing). This is a simple example, but as the complexity of a component (or a project) increases, you can see where reuse becomes critical.


And speaking of dropdowns, React components lend themselves to a great deal of flexibility. We knew the “drawer” part of the dropdown would need to contain anything from an individual item to a list of items to a form element. Because of this, it made sense to make the drawer contents as flexible as possible. By using the open-ended children prop, the dropdown container can concern itself solely with container-level styling and the toggling of the drawer. See below for a simplified version of the container code (using Downshift):

export default class Dropdown extends Component {
  static propTypes = {
    children: PropTypes.node
  };

  static defaultProps = {
    children: []
  };

  render() {
    const { children } = this.props;
    return (
      <Downshift>
        {({ isOpen }) => (
          <div className="dropdown">
            <Button className="btn" aria-label="Open Dropdown" />
            {isOpen && <div className="drawer">{children}</div>}
          </div>
        )}
      </Downshift>
    );
  }
}

This means we can put anything we want inside of the container:

<Dropdown>
  <ComponentOne />
  <ComponentTwo />
  <span>Whatever</span>
</Dropdown>

This kind of maximum flexibility with minimal code is definitely a win in situations like this.


The Right Component for the Job

Even though the React documentation spells it out, it is still easy to forget that sometimes you don’t need the whole React toolbox for a component. In fact, there’s more than simplicity at stake: writing stateless components may in some instances be more performant than stateful ones. Here’s an example of a hero component that doesn’t need state, following Airbnb’s React/JSX style guide:

const Hero = ({ title, imgSrc, imgAlt }) => (
  <div className="hero">
    <img data-src={imgSrc} alt={imgAlt} />
    <h2>{title}</h2>
  </div>
);

export default Hero;

When you actually do need a class component, there are some optimizations you can make to at least write cleaner (and less) code. Take this Header component example:

import React from 'react';

class Header extends React.Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

In this snippet, we can start by simplifying the React.Component extension:

import React, { Component } from 'react';

class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

export default Header;

Next, we can export the component on the same line, so we don’t need the separate export at the end:

import React, { Component } from 'react';

export default class Header extends Component {
  constructor(props) {
    super(props);
    this.state = { isMenuOpen: false };
    this.toggleOpen = this.toggleOpen.bind(this);
  }

  toggleOpen() {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  }

  render() {
    // JSX
  }
}

Finally, if we make the toggleOpen() function into an arrow function, we don’t need the binding in the constructor. And because our constructor was really only necessary for the binding, we can now get rid of it completely!

export default class Header extends Component {
  state = { isMenuOpen: false };

  toggleOpen = () => {
    this.setState(prevState => ({
      isMenuOpen: !prevState.isMenuOpen
    }));
  };

  render() {
    // JSX
  }
}
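It’s worth noting why the updater-function form of setState is used here: React may batch state updates, and computing the next value from prevState avoids reading stale state. The toggle itself is just a pure function over the previous state, as this sketch (outside of React) shows:

```javascript
// The state transition inside toggleOpen, isolated as a pure function:
// given the previous state, return the next state.
const toggleMenu = prevState => ({ isMenuOpen: !prevState.isMenuOpen });

console.log(toggleMenu({ isMenuOpen: false })); // → { isMenuOpen: true }
console.log(toggleMenu({ isMenuOpen: true }));  // → { isMenuOpen: false }
```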


React has some quick wins for catching bugs with built-in typechecking via propTypes (now provided by the separate prop-types package). When using a class component, you can also move your propTypes inside the component as static propTypes. So, instead of:

export default class DropdownItem extends Component { ... }

DropdownItem.propTypes = {
  // propTypes
};

DropdownItem.defaultProps = {
  // default propTypes
};

You can instead have:

export default class DropdownItem extends Component {
  static propTypes = {
    // propTypes
  };

  static defaultProps = {
    // default propTypes
  };

  render() { ... }
}

Also, if you want to limit a prop to a fixed set of values or a set of allowed types, you can use PropTypes.oneOf and PropTypes.oneOfType respectively (documentation).
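To illustrate the idea behind PropTypes.oneOf, it is essentially a membership check over an allowed set of values. Here is a simplified sketch of that behavior (this is not the real prop-types implementation, just the concept):

```javascript
// Simplified sketch of the idea behind PropTypes.oneOf: a validator
// factory that passes (returns null) only when the value is in the
// allowed set. NOT the real prop-types implementation.
function oneOf(allowedValues) {
  return value =>
    allowedValues.includes(value)
      ? null
      : new Error(`Expected one of [${allowedValues.join(', ')}], got "${value}"`);
}

const validateSize = oneOf(['small', 'medium', 'large']);
console.log(validateSize('medium')); // → null (valid)
```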

And finally, another way to simplify code is to destructure the props object directly in the function parameter definition. Here’s a component before this has been done:

const SvgLogo = props => {
  const { title, inline, height, width, version, viewBox } = props;
  return (
    // JSX
  );
};

And here’s the same component after:

const SvgLogo = ({ title, inline, height, width, version, viewBox }) => (
  // JSX
);


Finally, a word on packages. React’s popularity lends itself to a plethora of packages available. One of our senior JavaScript engineers passed on some sage advice to me that is worth mentioning here: every package you add to your project is another dependency to support. This doesn’t mean that you should never use packages, merely that it should be done judiciously, ideally with awareness of the package’s support, weight and dependencies. That said, here are a couple of packages (besides Downshift) that we found useful enough to include on this project:


If you find yourself doing a lot of classname manipulation in your components, the classnames utility is a package that helps with readability. Here’s an example before we applied the classnames utility:

<div className={`element ${this.state.revealed === true ? 'revealed' : ''}`}>

With classnames you can make this much more readable by separating the logic:

import classNames from 'classnames';

const elementClasses = classNames({
  element: true,
  revealed: this.state.revealed === true
});

<div className={elementClasses}>
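Under the hood, the object form of classnames essentially joins the keys whose values are truthy. A minimal sketch of that behavior (the real package also accepts strings, arrays and mixed arguments):

```javascript
// Minimal sketch of the classnames object form: keep the keys whose
// values are truthy and join them with spaces. The real classnames
// package handles many more input shapes than this.
function classNamesSketch(map) {
  return Object.keys(map)
    .filter(key => Boolean(map[key]))
    .join(' ');
}

console.log(classNamesSketch({ element: true, revealed: false })); // → 'element'
console.log(classNamesSketch({ element: true, revealed: true }));  // → 'element revealed'
```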

React Intersection Observer (Lazy Loading)

IntersectionObserver is an API that lets browsers asynchronously detect when an element intersects the viewport. Support is gaining traction and a polyfill is available for fallback. This API can serve a number of purposes, not the least of which is the popular technique of lazy loading to defer loading of assets not visible to the user. While we could in theory have written our own component using this API, we chose to use the React Intersection Observer package because it takes care of the bookkeeping and provides a standardized React component that makes it simple to pass in options and detect events.
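For a sense of the API’s shape, a lazy-loading observer callback might look like the sketch below. The callback is a plain function over the observed entries, which is part of what makes the API easy to wrap in a React component. (This is an illustrative sketch of the browser API, not the internals of the React Intersection Observer package.)

```javascript
// Illustrative lazy-loading callback for IntersectionObserver: when an
// image enters the viewport, promote its data-src to src and stop
// observing it.
function onIntersect(entries, observer) {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      observer.unobserve(entry.target);
    }
  });
}

// In the browser you would wire it up roughly like:
//   const observer = new IntersectionObserver(onIntersect, { rootMargin: '200px' });
//   document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
```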


I hope passing on some of the knowledge I gained along the way is helpful for someone else. If nothing else, I learned that there are some great starting points out there in the community worth studying. The first is the excellent React documentation. Up to date and extensive, this documentation was my lifeline throughout the project. The next is Create React App, which is actually a great starting point for any size application and is also extremely well documented with best practices for a beginner to start writing code.

Apr 28 2014

Sometimes you want to license files without people needing to purchase them. Even using coupon codes to make products free still requires them to be purchased through the Commerce Checkout system.

This is fine for physical products where you still want email and address details of potential future clients.

However, when it comes to files, users require an account to access their files, so chances are you already have all their details. And there is no shipping required, so why make them go through the checkout process just to get a license for a free file? (Seriously, if you have reasons, comment!)

Here is a snippet of how to generate a file license for a user:


Grammar Lesson:

Today I learnt the difference between 'license' and 'licence'. Unless you are American (in which case just ignore the existence of 'licence') read this.

Aug 20 2013

I’m using Vagrant to hand off complete copies of my local development environment to other members of my team. This is a great way to lower setup time, isolate dependencies and eliminate inconsistencies. Frontend developers are able to work against a full local environment without wasting time on backend configuration. The following describes how Vagrant can make this possible without any additional provisioning tools. Those tools are powerful and offer even more efficiencies, but we’re leaving them out in the interest of simplicity.

A sample environment

On my MacBook Pro, I built a VM that runs the entire technology stack for a web application I’m currently working on called Jude. It’s a VirtualBox VM with things like Linux, Apache Web Server, MySQL, PHP, Memcache, APC, Drush, and Apache Solr installed and configured to work together. The codebase is checked out from a remote SVN code repository to a local directory on my Mac that’s also shared to the VM. I can use my regular Mac text editor (NetBeans) to edit code locally, and the changes are immediately available in the running VM. I can also use the command line (SSH), a database explorer (Sequel Pro), and a breakpoint debugger (NetBeans) to inspect the running web app.


Vagrant, Chef, and the Drupal Vagrant project made most of this configuration automatic, but manual configuration would have worked just as well. The point is that it doesn’t matter how the initial VM gets created or what the technology stack is. It just matters that we set it up once and we want an easy way to copy it to another machine.

Sample workflow for spinning up a new VM copy

Step 1: Package the source VM

First we need to package up the initial VM from the source machine and make it available for download. The following command packs up the VirtualBox VM called vagrant_1374870184 and creates a file on the source machine called jude.box.

vagrant package --base vagrant_1374870184 --output jude.box --vagrantfile box.Vagrantfile

The box file then needs to be copied to the target machine or uploaded to a publicly accessible URL.

Step 2: Install the target VM

On the target machine we need to install VirtualBox and Vagrant, then open up a terminal window and run the following commands.

mkdir <project-directory>
cd <project-directory>
svn co https://path/to/repo/trunk/htdocs public/dev-site.vbox.local/www/
echo -e '\tdev-site.vbox.local' | sudo tee -a /etc/hosts
vagrant init jude https://path/to/file/jude.box

The first three lines create the project directory and checkout the codebase from a remote SVN repository. The public directory in the checkout location is the directory that will be mounted to the VM via NFS. The dev-site.vbox.local/www directory represents the web root of an Apache vhost on our VM.

Line four adds the site’s domain alias to our local hosts file. The IP address is the one we defined in the Vagrantfile on our source VM, and dev-site.vbox.local is the vhost we defined in the source VM’s Apache conf.

Line five uses the source box file we packaged in the first step to initialize the target VM configuration. We now have a file called Vagrantfile in our project directory where we could override environment settings if we needed.

Step 3: Start the target VM

Now we’re ready to start up the new VM by running:

vagrant up

The first start up may take a few minutes, especially if the box file is remote and has not been downloaded yet. Start ups in the future will be much faster. Once this is complete we can access our new local copy of Jude at http://dev-site.vbox.local.

Further possibilities

At my company, we use a similar technology stack across many of our projects. Vagrant can be used to manage reusable VM components across all these projects. In addition to developer workstation installations, these could be used to spin up identical development, testing and production environments. Say goodbye to “works on my machine” bugs.

If you’re interested in using this approach on your next project and you’re using Drupal, be sure to check out Drupal Vagrant. It made the setup of my initial VM really simple. The only piece that needed to be manually configured was Apache Solr.

May 20 2013


This is a bit of a follow-up to Mike Bell's introductory article on using Codeception to create Drupal test suites. He concludes by stating he "need[s] to figure out a way of creating a Codeception module which allows you to plug in a Drupal testing user (ideally multiple so you can test each role) and then all the you have to do is call a function which executes the above steps to confirm your logged in before testing authenticated behaviour."

"Something along the lines of:


So, after skimming through Codeception and Mink documentation, I've tinkered with two potential ways of achieving this... for acceptance testing at least.

A crude toolbox

The first method is to use two custom classes to provide details of (a) a general Drupal site and (b) the specific site to be tested. This idea stemmed from this article which suggests that including literals - such as account credentials, paths and even form labels - in tests is bad practice. What if the login button label changes? etc.

Anyway, this is currently set up as follows. In the tests/_helpers directory, we include a new file providing an abstract class, DrupalSite:

abstract class DrupalSite {
  // Site structure: login and registration.
  public $loginPage = 'user/login';
  public $usernameField = 'Username';
  public $passwordField = 'Password';
  public $loginSubmitField = 'edit-submit';

  // Site data: user accounts.
  public $adminUsername;
  public $adminPassword;
}

It contains some defaults (the usual path to the login page, the default labels for Username & Password fields and the Login submit button) and two member variables to hold a test admin user's credentials. Then, in order to provide some values specific to the site we're testing, we extend that class to provide some of the missing information:

class MySite extends DrupalSite {
  // Site data: user accounts.
  public $adminUsername = 'admin';
  public $adminPassword = 'test';
}

Assuming we have such a system in place (it's proving useful in other areas already, such as managing HTTP authentication on testing and staging environments and dealing with Drupal's clean URLs), we can also use the DrupalSite class to provide Drupal- or site-specific routines for, e.g., logging in:

abstract class DrupalSite {
  // Site structure: meta data.
  ...

  // Site structure: login and registration.
  public $loginPage = 'user/login';
  public $usernameField = 'Username';
  public $passwordField = 'Password';
  public $loginSubmitField = 'edit-submit';

  // Site data: user accounts.
  public $adminUsername;
  public $adminPassword;
  public $testUsername;
  public $testPassword;

  /**
   * Acceptance helper to log in an (admin) user.
   */
  public function logInAsAdminUser($I) {
    $this->logIn($I, $this->adminUsername, $this->adminPassword);
  }

  /**
   * Acceptance helper to log in a test user.
   */
  public function logInAsTestUser($I) {
    $this->logIn($I, $this->testUsername, $this->testPassword);
  }

  /**
   * Acceptance helper to log in a user with given credentials.
   *
   * @param $I
   * @param $username
   * @param $password
   */
  protected function logIn($I, $username, $password) {
    $I->amOnPage($this->getSiteUrl($this->loginPage));
    $I->see('User account');
    $I->see('Enter your [$site_name] username.');
    $I->amGoingTo('fill and submit the login form');
    $I->fillField($this->usernameField, $username);
    $I->fillField($this->passwordField, $password);
    $I->click($this->loginSubmitField);
    $I->expect('to be logged in');
    // @todo You'll probably have a much better way of verifying
    // whether we've successfully logged in.
    $I->see('My account');
    $I->see('Log out');
  }
}

Of course we must set the site-specific user credentials in MySite.php. We can also override the method in the subclass to provide an alternative method of logging in, if the site provides alternate or customised login methods (such as a single sign-on implementation). In addition, we can benefit from building on these classes to provide, for example, a better structure for managing site roles and corresponding test user accounts or other helper functionality such as passing HTTP authentication or managing clean URLs for paths used in tests.

To actually put this into practice and use it in a test, we must first include the subclass in the acceptance suite's _bootstrap.php:

require_once 'tests/_helpers/MySite.php';

then instantiate an object of the MySite class in our test:

$I = new WebGuy($scenario);
$S = new MySite($I);
$I->wantTo('log in as an admin');
$S->logInAsAdminUser($I);
// Verify login steps...

Using WebHelper

The second method is perhaps more in line with Mike's idea of using Codeception's helpers to build new methods into the WebGuy object $I:


Codeception achieves this by "emulat[ing] multiple inheritance for Guy classes (CodeGuy, TestGuy, WebGuy, etc)". Custom actions "can be defined in helper classes". Basically, the Guy classes have their methods defined in modules: they don't truly contain any of them, but act as a proxy for them. Codeception provides modules to emulate web requests, access data, interact with popular PHP libraries, and so on. On top of this, we can provide additional methods using the Helper class corresponding to the relevant Guy class.

To 'enable' all of the gathered Guy methods, you use the build command. It generates the definition of the Guy class by copying the signatures from the configured modules:

$ codecept.phar build

For more about this, see the Codeception guide to Modules and Helpers.

Phew. With all that in mind, we can dive into editing the empty WebHelper class Codeception provides. We have to dig a little deeper to implement this: trying to use the test 'sub-routine' idea from above (i.e. implementing the login procedure as a series of $I scenario steps) doesn't really fit, but we can kludge it like so:

<?php
namespace Codeception\Module;

// here you can define custom functions for WebGuy
class WebHelper extends \Codeception\Module {

  /**
   * Helper function to log in WebGuy with given credentials. We pass
   * in $I, resulting in a call like:
   *
   * $I->loginToDrupal($I, $name, $pass);
   *
   * This is horrible, and I'm probably missing something.
   *
   * @param $I
   * @param $name
   * @param $pass
   */
  function loginToDrupal($I, $name, $pass) {
    $I->amOnPage('user/login');
    $I->see('User account');
    $I->see('Enter your [$site_name] username.');
    $I->amGoingTo('fill and submit the login form');
    $I->fillField('name', $name);
    $I->fillField('pass', $pass);
    $I->click('Log in');
    $I->expect('to be logged in');
    // @todo Provide the verification steps after successfully
    // being logged in here.
    $I->see(...);
  }
}

but boy, does that seem nasty. If we're going down this route, it would be best left to using custom classes as above.

Shminky pinky (Chris Waddle)

Codeception by default uses the phpBrowser module for acceptance tests, and Mink to control it. The Mink Acceptance Testing documentation was a great place to start looking deeper - and of course, as with any OO framework, the Session class API documentation also proved useful.

So, we can directly manipulate the browser session using Mink from within what will eventually be our new WebGuy method. I ended up with something like this:

<?php
namespace Codeception\Module;

// here you can define custom functions for WebGuy
class WebHelper extends \Codeception\Module {

  // First attempt at a custom login function...
  public function login() {
    $username = 'admin';
    $password = 'test';
    $session = $this->getModule('PhpBrowser')->session;
    $login_url = $session->getCurrentUrl() . '/user/login';
    $session->visit($login_url);

    // Fail the test step if we cannot access the login page.
    $this->assertTrue(
      $session->getStatusCode() == 200,
      'could not access login page'
    );

    // Get login form elements from $page.
    $page = $session->getPage();
    $loginForm = $page->findById('user-login');
    $usernameField = $loginForm->findField('edit-name');
    $passwordField = $loginForm->findField('edit-pass');
    $submitButton = $loginForm->findButton('edit-submit');

    // Enter credentials and submit the form.
    $usernameField->setValue($username);
    $passwordField->setValue($password);
    $submitButton->click();
  }
}

and the corresponding test:

$I = new WebGuy($scenario);
$I->wantTo('log in as an admin');
$I->login();
// Verify log in, if necessary.
// Continue additional test steps.

Of course we can be a bit cleverer in passing in the test user's role and/or account credentials by using our custom class MySite - getting the best of both worlds. We use the custom classes to provide information about the structure of the site we're testing and WebHelper to add a 'proper' new method for WebGuy objects. Note there are still site 'component' literals included in the login() method, such as the ids for the form elements and the path to the login page, but hey - WIP etc.

Caveat - login sessions and database refreshes

At this point I noticed some of my tests started failing. I realised that, when running multiple tests, subsequent session visits were already logged in, resulting in a 403 HTTP code being returned when visiting the user login page. As you might have noticed in the code above, there is a slightly crappy assertTrue statement to check that the page response is a 200. It's not, so the test fails. So our login/session issue here is mostly down to checks in the login method that could be improved somewhat.

Anyway, we might get away with it - tests should ideally be run on a clean, stable version of the site database, which should be cleaned up or refreshed before any test is run. One test should never affect another - it's likely that some of our tests will write to the database (testing creating a new node, creating a user, etc.) so we should really use the Db module's cleanup configuration option. To set up database refreshes, do the following:

  1. place a clean SQL dump of the site's database in tests/_data/ using, eg, drush sql-dump --result-file=/path/to/suite/tests/_data/project_db_clean.sql
  2. edit the acceptance.suite.yml file to include the Db module and add configuration for your MySQL server:

# Codeception Test Suite Configuration

class_name: WebGuy
modules:
    enabled:
        - PhpBrowser
        - WebHelper
        - Db
    config:
        PhpBrowser:
            url: 'http://project.drupal.dev:8080'
        Db:
            dsn: 'mysql:host=localhost;dbname=project_db'
            user: 'db_user'
            password: 'db_pass'
            dump: tests/_data/project_db_clean.sql
            populate: false
            cleanup: true
switching in appropriate values for dsn, user and password. Also ensure that the dump option points to the correct path within your test suite where the SQL dump is stored. Read more about this in the Cleaning Up section of the Codeception Acceptance Tests documentation.

So, which method shall we use for a login procedure?

Custom classes

  • With this method, the login procedure still effectively runs as a 'sub routine' of a test, i.e. it can (and does) contain wantTo, expect, see or other WebGuy method calls.
  • We can build on these classes to provide the roles (from stories or Drupal roles) and test user credentials for each.
  • Can be overridden in MySite.php if a site uses an alternate or customised login method.

Using WebHelper

  • No longer a test or 'subroutine' of a test, but effectively now a single step in a test scenario.
  • No longer site-specific.
  • Nicer integration with Codeception's framework.
  • Nicer syntax, e.g. $I->login('admin')

By introducing a new method to the WebGuy class, we effectively condense the login procedure into one, atomic test or scenario step. We can of course precede and follow this one step with wantTo, amGoingTo and see steps in our tests themselves. The step can also fail 'internally' and thus fail the calling test (for example if the session cannot access the login page).

However, we should realise that we have also removed the finer-grained steps of the original 'can log in' test. So, perhaps we should always use the WebHelper method, provided we include a single test dedicated solely to testing the individual steps to log in. Technically this could be a standard test or a 'subroutine' test as described in A crude toolbox above. However, the subroutine loses its value if we're only going to call it once.

With that in mind, two of our tests might end up looking something like this:

AdminCanLoginCept.php - full

$I = new WebGuy($scenario);
// Used here for site structure/user credentials:
$S = new MySite($I);
$username = 'admin';
$password = 'test';
$I->wantTo('log in as an admin');
$I->amOnPage($S->loginPage);
$I->see('User account');
$I->see('Enter your [$site_name] username.');
$I->amGoingTo('fill and submit the login form');
$I->fillField($S->usernameField, $username);
$I->fillField($S->passwordField, $password);
$I->click($S->loginSubmitField);
$I->expect('to be logged in');
// Verify login steps...

AdminCanLoginCept.php - optionally using custom classes and 'subroutine'

$I = new WebGuy($scenario);
$S = new MySite($I);
$I->wantTo('log in as an admin');
$S->logInAsAdminUser($I);
// Verify login steps...

AdminCanPostArticle.php (and all other tests requiring login)

$I = new WebGuy($scenario);
$I->wantTo('post an article');
$I->amGoingTo('login as an admin');
$I->login('admin');
$I->amGoingTo('post an article');
$I->amOnPage('node/add/article');
$I->fillField(...);
...

Where to go from here?

This brain-fart only really involves acceptance testing and of course has been delivered from an addled brain who has only just started looking into testing - and Codeception in particular. Once we've got some acceptance suites under our belts, the most sensible place to start looking next would be functional tests - for which we can provide Framework Helpers:

  <?php
  namespace Codeception\Module;

  class DrupalHelper extends \Codeception\Util\Framework {
    public function _initialize() {
      $this->client = new \Codeception\Util\Connector\Universal();
      // or any other connector you implement
      // we need to specify the path to the index file
      $this->client->setIndex('index.php');
    }
  }

Following that? A fully-blown Drupal module as part of the Codeception framework? Codeception suggests that "if you have written a module that may be useful to others, share it. Fork the Codeception repository, put the module into the src/Codeception/Module directory, and send a pull request."

Back at you, Mike ;)

Addendum: This article was written on my Nexus 7 - it was only when coming to post it here that I realised just how overdue some loving is for my site... I also had a bit of a re-write when a network/sync issue on my tablet (and the subsequent accessing of the article via the web interface at evernote.com) led to a loss of most of the latter half...

Apr 05 2013


jquery.modalize.js is a lightweight, pure-javascript approach to automatically turn part of any web page into a modal overlay.

I originally wrote it as a simple alternative for associating file upload fields to WYSIWYGs in Drupal, but it can be used to modalize any chunk of HTML and significantly clean up overloaded pages (Drupal or non-Drupal).

The original dilemma

I’ve never loved any of the solutions for associating image and file uploads to WYSIWYGs in Drupal and yet I’ve had to do it on almost every project I’ve worked on. The Media module and the Wysiwyg Fields module are both ambitious projects that attempt this feature (and other things as well). Unfortunately I’ve run into issues with both. Being complex modules they are difficult to troubleshoot and hard to move away from if they don’t work out.

My usual solution

I normally end up sticking a multi-value image field and a multi-value file field underneath the WYSIWYG and then using the excellent Insert module to allow content editors to “send” HTML for the uploaded files to the WYSIWYG.


This is reliable and works nicely, but has a few drawbacks:

  1. It takes up a lot of screen real estate - especially if you upload many files.
  2. The WYSIWYG associations aren’t immediately intuitive to users - sometimes they have to scroll to see the whole picture.
  3. If you have more than one WYSIWYG on a page it’s even harder to infer the associations.

The new modally-powered solution

With jquery.modalize.js, I start with my usual solution (as described above), add a single line of jQuery to turn my file fields into modals, and attach them to every WYSIWYG on the page.



To use it you just need jQuery, jquery.modalize.js, and a line of code like this:

$.modalize('#edit-field-image-attachments', '+ Attach images', '.field-type-text-with-summary');

This turns the #edit-field-image-attachments element into a hidden modal, replaces it with a link labeled + Attach images and prepends the link to all elements with the .field-type-text-with-summary class (covers WYSIWYG fields in Drupal). Clicking on the link will open the modal as a page overlay. The third argument is optional and if not provided, the modal link will be attached in the original element’s DOM location.

As a bonus, Modalize is Insert module aware, so clicking the Insert button will automatically close the modal and show the user the WYSIWYG it was inserted into.

Mar 30 2013


We recently built a community app in Drupal. It has:

  1. a fully abstracted (no Drupal), single page (no reloads) frontend
  2. a web service enabled Drupal backend
  3. an integrated Drupal overlay for edit and admin pages

Here’s how we did it:

Setting up Drupal web services

The Drupal Services module provides a standard framework for defining web services. This was our foundation for making Drupal data available to an external application. Out of the box it supports multiple interfaces like REST, XMLRPC, JSON, JSON-RPC, SOAP, AMF and provides functional CRUD services for core Drupal entities like nodes, users, and taxonomy terms. The response data is structured much like the Drupal entity objects you’re used to seeing in PHP and provides all the same data. We added our own group of “UI” services to clean up these objects and strip out some of the data that isn’t relevant to the UI layer (lots of [‘und’] arrays). I’m hoping to make this into a contrib module sometime soon.

A response to the standard Services resource /rest/user/<uid>.json looks something like this:

{
  uid: "222",
  name: "tway",
  field_first_name: {
    und: [
      {
        value: "Todd",
        format: null,
        safe_value: "Todd"
      }
    ]
  },
  field_last_name: {
    und: [
      {
        value: "Way",
        format: null,
        safe_value: "Way"
      }
    ]
  },
  field_location: {
    und: [
      { tid: "604" }
    ]
  },
  field_department: {
    und: [
      { tid: "614" }
    ]
  },
  ...
}

And a response to our UI resource /rest/user/<uid>/ui.json looks like this:

{
  uid: "222",
  name: "tway",
  field_first_name: {
    label: "Todd"
  },
  field_last_name: {
    label: "Way"
  },
  field_location: {
    id: "604",
    label: "KC-Airport",
    type: "taxonomy_term"
  },
  field_department: {
    id: "614",
    label: "Technology",
    type: "taxonomy_term"
  },
  edit_link: "user/222/edit",
  display_name: "Todd Way",
  ...
}
Making authenticated web service requests from your frontend app

The Services module comes with support for session-based authentication. This was essential for our app because we did not want any of our content or user data to be publicly available. Each request had to be associated with an authorized user. So basically, if a valid session key is set (as a request header cookie) on a service request, Drupal will load the user associated with that session key - just like any standard Drupal page request. There are two ways to accomplish this with an external frontend.

Option 1: Run your app on the same domain as Drupal

If your app can run on the same web domain as the Drupal services backend, you can use the built-in Drupal login form to handle authentication for you. It will automatically set the session key cookie and pass it on any service requests from the browser on that domain. So for example if your Drupal site is at http://mysite.com and your Drupal login is at http://mysite.com/user, your UI app will be at something like http://mysite.com/my-ui-path (more on how to set this up later).

To make a jQuery-based service request for a user object you would simply need to do this:

(function ($) {
  $.getJSON('/rest/user/1.json', function (data) {
    console.log(data);
  });
})(jQuery);

The response data, if the request was properly authenticated, would be:

{
  "mail": "[email protected]",
  ...
}

and if unauthenticated would be:

[
  "Access denied for user anonymous"
]

Option 2: Run your app on a separate domain

If your app will be on a separate domain it will need its own server (e.g. node.js, etc.) to proxy all authenticated service requests. One reason for this is that web browsers do not allow the Cookie header to be set on XMLHttpRequest from the browser (see the W3C Spec). You can get around this on GET requests with JSONP if you do something like this:

$.ajax({
  type: 'GET',
  url: 'http://mysite.com/rest/uiglobals.jsonp?callback=jsonpCallback',
  async: false,
  jsonpCallback: 'jsonpCallback',
  contentType: "application/json",
  dataType: 'jsonp',
  success: function(json) { console.log(json); },
  error: function(e) { console.log(e.message); }
});

However JSONP does not allow POST requests, so this is not a complete solution. For more details, check out this article.

Your proxy server will need to call an initial login service request (already part of the services module) on behalf of the client browser that takes a username and password and, if valid, returns a session key. The server then needs to pass the session key in the Cookie header on all service requests. If you were using a second Drupal site for your proxy, the PHP would look something like this:

function example_service_request() {
  $server = 'http://example.com/rest/';

  //login request - we need to make an initial authentication request before requesting protected data
  $username = 'username';
  $password = 'password';
  $url = $server . 'user/login.json';
  $options = array(
    'method' => 'POST',
    'headers' => array('Content-Type' => 'application/json'),
    'data' => json_encode(array(
      'username' => $username,
      'password' => $password,
    )),
  );

  $result = drupal_http_request($url, $options['headers'], $options['method'], $options['data']);  //d6
  //$result = drupal_http_request($url, $options); //d7

  if ($result->code != 200) {
    drupal_set_message(t('Authentication error: ') . $result->status_message, 'error');
  }
  $login_data = json_decode($result->data);

  //build the session cookie from our login response so we can pass it on subsequent requests
  $cookie = $login_data->session_name . "=" . $login_data->sessid . ";";

  //user search request
  //$url = $server . 'search/user_index/ui.json';
  $url = $server . 'search/user_index/ui.json?keys=joe&sort=field_anniversary:DESC';

  $options = array(
    'method' => 'GET',
    'headers' => array(
      'Content-Type' => 'application/json',
      'Cookie' => $cookie, //add our auth cookie to the header
    ),
  );
  $result = drupal_http_request($url, $options['headers'], $options['method'], $options['data']);  //d6
  //$result = drupal_http_request($url, $options); //d7
  dpm(json_decode($result->data), 'result data');

  //Log out request, since we are done now.
  $url = $server . 'user/logout.json';
  $options = array(
    'method' => 'POST',
    'headers' => array('Cookie' => $cookie),
  );
  $result = drupal_http_request($url, $options['headers'], $options['method'], $options['data']);  //d6
  //$result = drupal_http_request($url, $options); //d7
}

We didn’t use this code for our UI app, but it came in handy for testing and we eventually used it to interact with data from another backend system.
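The core of the proxy logic — capturing the session from the login response and replaying it as a Cookie header — is small enough to sketch in JavaScript too. sessionCookie is a hypothetical helper and the login values are made up; session_name and sessid are the fields the Services login response returns:

```javascript
// Build the Cookie header value from a Services user/login.json response,
// mirroring the cookie-building line in the PHP example above.
function sessionCookie(loginData) {
  return loginData.session_name + '=' + loginData.sessid + ';';
}

// Hypothetical login response values:
var login = { session_name: 'SESS1234abcd', sessid: 'xyz789' };
console.log(sessionCookie(login)); // SESS1234abcd=xyz789;

// A proxy would then attach this to each request it forwards, e.g.:
// request.setHeader('Cookie', sessionCookie(login));
```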

For our UI app, we used option 1 for two main reasons:
1. No need for a separate frontend server or custom authentication handling.
2. Better integration with the Drupal overlay (more on this later).

Hosting the frontend app for local development

We didn’t want our frontend developers to need any Drupal knowledge or even a local Drupal install in order to develop the app. We set up a web proxy on our shared Drupal development environment so frontend developers could build locally against it while appearing to be on the same domain (to maintain the cookie-based authentication). We used a simplified version of PHP Simple Proxy for this and added it to the Drupal webroot, but Apache can be configured to handle this as well. I wouldn’t recommend using a Drupal-based proxy since each request would perform unnecessary database calls during the Drupal bootstrap.

Our frontend developers used node.js and localtunnel, but other local dev tools could be used for this. As long as the Drupal development server can make requests to the frontend developer’s machine, the web proxy will work. Using this setup, the URL for frontend development looks something like this…


…where mysite.devserver.com is the domain alias of the dev server, proxy.php is the name of the PHP proxy script, and myfrontend.localtunnel.com is the domain alias for a frontend developer’s machine.
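As a sketch (not the actual proxy script), composing that development URL might look like this. The 'url' query parameter name is an assumption — it depends on how your proxy script is configured — and devUrl is a hypothetical helper:

```javascript
// Hypothetical helper: build the development URL from the three pieces
// named above. The query parameter name ('url') is an assumption.
function devUrl(devServer, proxyScript, frontendHost) {
  return 'http://' + devServer + '/' + proxyScript +
    '?url=' + encodeURIComponent('http://' + frontendHost);
}

console.log(devUrl('mysite.devserver.com', 'proxy.php', 'myfrontend.localtunnel.com'));
// http://mysite.devserver.com/proxy.php?url=http%3A%2F%2Fmyfrontend.localtunnel.com
```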

Hosting the frontend app in Drupal

To make the frontend app easy to deploy along with the Drupal backend, we set up a simple custom Drupal module to host it. Since the app is just a single HTML page (and some JS and CSS files), we define one custom menu item and point it to a custom TPL.

Here are the essential pieces for our judeui.module file:

/**
 * Implements hook_menu().
 */
function judeui_menu() {
  //define our empty ui menu item
  $items['ui'] = array(
    'page callback' => 'trim', //shortcut for empty menu callback
    'page arguments' => array(''),
    'access callback' => TRUE,
  );
  return $items;
}

/**
 * Implements hook_theme().
 */
function judeui_theme() {
  //point to our custom UI TPL for the 'ui' menu item
  return array(
    'html__ui' => array(
      'render element' => 'page',
      'template' => 'html__ui',
    ),
  );
}

/**
 * Implements hook_preprocess_html().
 *
 * @param type $vars
 */
function judeui_preprocess_html(&$vars) {
  //if we're serving the ui page, add some extra ui variables for the tpl to use
  $item = menu_get_item();
  if ($item['path'] == 'ui') {
    $vars['judeui_path'] = url(drupal_get_path('module', 'judeui'));
    $vars['site_name'] = variable_get('site_name', '');

    //add js to a custom scope (judeui_scripts) so we can inject global settings into the UI TPL
    drupal_add_js(
      "var uiglobals = " . drupal_json_encode(_get_uiglobals()),
      array('type' => 'inline', 'scope' => 'judeui_scripts')
    );
    $vars['judeui_scripts'] = drupal_get_js('judeui_scripts');
  }
}

The hook_menu function defines our ui page, the hook_theme function points it at our custom TPL, and the hook_preprocess_html lets us add a few custom variables to the TPL. We use the judeui_scripts variable to get global settings from Drupal into the page - much like the Drupal.settings variable on a standard Drupal page. We also have a web service that the ui app could use for this, but adding this directly to the page saves an extra request when initially building the page. More on ui globals in the next section.

And here is our custom html__ui.tpl.php file:

<!DOCTYPE html>
<html>
  <head>
    <title><?php echo $site_name ?></title>
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
    <?php echo $judeui_scripts ?>
    <script src="<?php echo $judeui_path ?>/app.js"></script>
    <link href="<?php echo $judeui_path ?>/app.css" rel="stylesheet"/>
  </head>
  <body></body>
</html>

It contains the few very basic PHP variables that we set in hook_preprocess_html and a small amount of HTML to set the page up. Frontend developers can build and deploy app updates simply by committing new app.js and app.css files to the module folder. Drupal serves the page at http://mysite.com/ui.

Global settings for the UI

We added a custom web service to pass global settings to the UI app. The frontend app can call http://mysite.com/rest/uiglobals.json to load this or use the uiglobals variable we added to the UI TPL in the section above. Both of these methods use a function that returns an array of settings that are useful to the UI app.

function _get_uiglobals() {
  return array(
    'basePath' => base_path(),
    'site-name' => variable_get('site_name', ''),
    'site-slogan' => variable_get('site_slogan', ''),
    'publicFilePath' => file_stream_wrapper_get_instance_by_uri('public://')->getDirectoryPath(),
    'privateFilePath' => 'system',
    'main-menu' => _get_uimenu('main-menu'),
    'user-menu' => _get_uimenu('user-menu'),
    'image-styles' => array_keys(image_styles()),
    'user' => $GLOBALS['user']->uid,
    'messages' => drupal_get_messages(NULL, TRUE),
  );
}

You can see it contains global data like base path, site name, currently logged in user, public file path, image styles, messages, etc. This is a handy way for the frontend to access data that shouldn't change during the browser session.
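As one hypothetical example of how the frontend might consume these globals, here is a sketch that builds a public file URL from basePath and publicFilePath. buildFileUrl is not part of the app, and the values are illustrative:

```javascript
// Illustrative uiglobals values like those returned by _get_uiglobals().
var uiglobals = {
  basePath: '/',
  publicFilePath: 'sites/default/files',
  user: '222'
};

// Hypothetical helper: build a URL to a public file.
function buildFileUrl(globals, filename) {
  return globals.basePath + globals.publicFilePath + '/' + filename;
}

console.log(buildFileUrl(uiglobals, 'logo.png')); // /sites/default/files/logo.png
```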

Integrating standard Drupal pages/URLs

In the early stages of frontend development it was quite useful to model the pages in a standard Drupal theme. For a while we thought we might still want some parts of the site to just be standard Drupal pages. Handling this incrementally was fairly simple.

We established some conventions for URL alias patterns in both Drupal and the frontend app. For example, one of our content types is post. The URL alias pattern for posts is post/[node:nid]. So we had a Drupal-themed URL for the post at http://mysite.com/post/123 and a frontend URL at http://mysite.com/ui#post/123.

Once the frontend app was ready to start handling posts, we used hook_url_inbound_alter to redirect http://mysite.com/post/123 to http://mysite.com/ui#post/123.

/**
 * Implements hook_url_inbound_alter().
 */
function judeui_url_inbound_alter(&$path, $original_path, $path_language) {
  //dpm($path, $original_path);
  if (variable_get('site_frontpage', 'node') == 'ui') {
    $oargs = explode('/', $original_path);
    if (in_array($oargs[0], array('post', 'user', 'group', 'tool'))
        && !isset($oargs[2]) && isset($oargs[1]) && is_numeric($oargs[1])) {
      drupal_goto('ui/' . $original_path);
    }
    if (strpos($original_path, 'user/edit') === 0) {
      $frag = 'modal/' . str_replace('user/edit', 'user/' . $GLOBALS['user']->uid . '/edit', $original_path);
      drupal_goto('ui/' . $frag);
    }
  }
}

This is incredibly handy for redirecting preconfigured links in Drupal views or email notifications to our abstracted UI URLs. And hook_url_inbound_alter can be expanded as more of the app moves to the abstracted UI.
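The same alias convention can be mirrored client-side when the frontend generates links. This is an illustrative sketch, not our production code; toFrontendPath is a hypothetical helper:

```javascript
// Map a Drupal path like 'post/123' to its frontend hash URL 'ui#post/123',
// for the same content types handled by hook_url_inbound_alter above.
var UI_TYPES = ['post', 'user', 'group', 'tool'];

function toFrontendPath(drupalPath) {
  var args = drupalPath.split('/');
  if (UI_TYPES.indexOf(args[0]) !== -1 && args.length === 2 && /^\d+$/.test(args[1])) {
    return 'ui#' + drupalPath;
  }
  return drupalPath; // leave everything else to Drupal
}

console.log(toFrontendPath('post/123'));         // ui#post/123
console.log(toFrontendPath('node/add/article')); // node/add/article
```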

Integrating the Drupal overlay

We wanted to use the standard content editing and admin pages that Drupal provides and have those pages open in an overlay just like any other Drupal 7+ site. To make it appear like part of the abstracted frontend, links to Drupal-rendered pages open in an iframe with 100% height and 100% width (just like the Drupal 7 admin overlay), and we made some minor CSS tweaks to the Drupal theme so that the page appears to be a modal window in front of the abstracted UI. Now edit and create links throughout our abstracted frontend can open the iframe overlay and display pure Drupal admin pages.


In addition we needed to facilitate closing the modal when appropriate. Setting up a close link in the top right corner of the modal was a pretty straightforward javascript exercise, but we also wanted to close it automatically when a user completed a task in the modal (for example, when a user clicks save after editing a content item, we want the modal to close on its own). Drupal already has a way to handle a redirect after a task (usually a form submit) is complete - the destination query string parameter. So in our frontend app, we add a destination query parameter to all of the edit and create links. The frontend app listens to the onload event of the iframe, and if it redirects to a non-modal page (e.g. /ui ), it closes the modal.
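The close-on-redirect decision can be sketched as a small predicate. shouldCloseOverlay is a hypothetical helper for illustration — in practice the check runs in the iframe's onload handler:

```javascript
// Decide whether the overlay iframe has navigated back to a non-modal
// frontend page (e.g. '/ui'), in which case the modal should close.
function shouldCloseOverlay(iframePath) {
  return iframePath === '/ui' || iframePath.indexOf('/ui?') === 0;
}

console.log(shouldCloseOverlay('/ui'));            // true
console.log(shouldCloseOverlay('/node/123/edit')); // false
```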

Finally, we want to pass Drupal messages back to the abstracted UI so the user can still see them even if the modal closes. Since the modal is redirecting to the ui callback when it closes, the messages variable of the uiglobals array will contain any related messages that should be displayed to the user.

Final thoughts

This was our first attempt at using this kind of site architecture with Drupal. Although there were new development challenges inherent to any single page web application (and best saved for another post), the integration with Drupal as a backend and as an administrative frontend was surprisingly smooth. Our community site incorporates other contrib modules like Organic Groups, Search API Solr, Message Notify, and CAS without issue. Here are some additional benefits we discovered:

  1. Full suite of reusable UI web services for other client apps (Android, iOS, etc).
  2. Free to use a frontend development team with no Drupal knowledge.
  3. Avoided many of the usual Drupal theming struggles and limitations.
  4. Relatively seamless integration of Drupal UI and abstracted UI.
  5. Progressive integration (You don’t have to build the entire UI outside of Drupal - convert it later, if desired)
Mar 19 2013

By default, a Drupal site allows undefined arguments on any system path instead of returning 404 Not Found (e.g. http://drupal.org/node/1/asdlfkjadlfkdjaflakjsldkfa).  I created the Strict 404 module to override this behavior and enforce 404 responses for paths that are not actually defined in the menu system.

Some contrib modules rely on the undefined argument behavior (e.g. for admin pages) so you may want to enable Strict 404 only for certain path patterns (usually visitor-facing pages). The module provides an admin page for selectively enabling Strict 404 responses.
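The kind of selective path matching described above can be sketched with a simple wildcard matcher. This illustrates the idea only — the module's actual matching happens in PHP, and matchPathPattern is a hypothetical helper:

```javascript
// Match a path against a wildcard pattern like 'node/*'.
function matchPathPattern(pattern, path) {
  // Escape regex metacharacters, then turn '*' into '.*'.
  var re = new RegExp('^' + pattern
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&')
    .replace(/\*/g, '.*') + '$');
  return re.test(path);
}

console.log(matchPathPattern('node/*', 'node/1/asdlfkjadlfkdjaflakjsldkfa')); // true
console.log(matchPathPattern('node/*', 'admin/config'));                     // false
```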

Jan 25 2013

Listen online: 

About two months ago we got a comment from a listener, KeyboardCowboy, about questions they had around contributing code to Drupal. Join Addison Berry, Karen Stevenson, Andrew Berry, Kyle Hofmeyer, Joe Shindelar, and Juampy Novillo Requena as we discuss those questions, and chat about how we got involved with contributing code, the challenges we face, and list out things that can be confusing or trip people up as they begin learning how the Drupal community works on code together.

Here is the text of the original comment that this podcast is based on (along with some handy links we've added):

Technical Newbs Podcast?

(asked by KeyboardCowboy)
I've been developing in Drupal since 5.2 but only within the last couple of years have really gotten involved in contributing and trying to be more involved in the community. I know the docs and resources out there on this are plentiful, but I would love to hear some Drupal experts talk about some of the finer points of collaboration and contributing, such as how they got started and their current process now.

I don't have much free time, but I want to help with D8 and Drupal is the first collaborative system I've worked in, so removing the grey area around these points could be the push I need to dive in more quickly.

1. What's your process to create a patch? Submit a patch? Test a patch?

2. How/Does this process differ between Contrib and Core?

3. How big can patches get and how do you handle the big ones?

4. Can more than one person work on the same patch? If so, how do you handle conflicts?

  • Interdiff: Show a diff between two patches so that you can see what's changed

5. What, exactly, do each of those statuses mean in the issue queue and who is responsible for changing the status of an issue?

6. What was/is your biggest challenge in collaborating on Drupal projects/issues/bugs/features?

7. How do you decide on a release state for a project (alpha, beta, rc)

And I'm sure I could think of others. Just thought I would pose that as an eager developer with limited time. Thanks again for keeping these podcasts going.

Ask away!

If you want to suggest your own ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Contact us page

Release Date: January 25, 2013 - 9:17am


Length: 52:02 minutes (30.2 MB)

Format: mono 44kHz 81Kbps (vbr)

Aug 29 2012

Posted Aug 29, 2012

Meta tags are one way that content authors can add extra information to a webpage, typically for the benefit of machines (like search engines) to learn more about the purpose and meaning of a webpage. You may recall that once upon a time it was a “search engine optimization” technique to fill the “keywords” meta tag with long lists of words to try to bump up your placement in search sites like Google. The “keywords” meta tag won’t help you much in Google anymore, but that doesn’t mean that meta tags have no use. Perhaps you’d like to provide Open Graph tags for Facebook, or perhaps you have your own custom set of meta tags for use in an enterprise Google Search Appliance or other tool.

The Meta Tags module is your answer in Drupal 7 for adding these meta tags to your website and being able to customize them for individual pages. The Meta Tags module provides some of the traditional meta tags like “keywords” and “description” out-of-the-box, and has some plugins for Open Graph, and also has a fairly simple API for integrating your own custom meta tags.

To declare your own custom meta tags, you need to declare them in a custom module.

To get started, create your custom module directory my_metatags and create the following files:


name = My Metatags
description = Provides my custom Metatags.
core = 7.x
version = 7.x-1.x
dependencies[] = metatag
files[] = my_metatags.metatag.inc


/**
 * Implements hook_ctools_plugin_api().
 */
function my_metatags_ctools_plugin_api($owner, $api) {
  if ($owner == 'metatag' && $api == 'metatag') {
    return array('version' => 1);
  }
}
What we’ve done here is to create a new custom module called my_metatags and we’ve declared in the .info file that we will be including a file called my_metatags.metatag.inc. In my_metatags.module we’ve implemented hook_ctools_plugin_api to tell CTools where to find our metatag plugin.

Now we need to create my_metatags.metatag.inc:

// Implements hook_metatag_info().
function my_metatags_metatag_info() {
  $info['groups']['my_metatags'] = array(
    'label' => t('My Custom Metatags'),
  );
  $info['tags']['my_custom_metatag'] = array(
    'label' => t('My Custom Meta Tag'),
    'description' => t('This is a custom meta tag'),
    'class' => 'DrupalTextMetaTag',
    'group' => 'my_metatags',
  );
  return $info;
}

// Implements hook_metatag_config_default_alter().
function my_metatags_metatag_config_default_alter(array &$configs) {
  foreach ($configs as &$config) {
    switch ($config->instance) {
      case 'global':
        $config->config += array();
        break;

      case 'global:frontpage':
        $config->config += array();
        break;

      case 'node':
        $config->config += array(
          'my_custom_metatag' => array('value' => 'This is a default value.'),
        );
        break;

      case 'taxonomy_term':
        $config->config += array();
        break;

      case 'user':
        $config->config += array();
        break;
    }
  }
}

In this file we are implementing two hooks provided by Meta Tags, hook_metatag_info() and hook_metatag_config_default_alter().

The code in hook_metatag_info() does two things: 1) creates a new Meta Tags group called “My Custom Metatags” and 2) declares a single custom meta tag, “my_custom_metatag.” By default, this meta tag will get output on a page like:

<meta name="my_custom_metatag" content="This is a default value." />

The code in hook_metatag_config_default_alter() provides default values for our custom meta tag. The defaults can of course be overridden within the Meta Tags administration area and additionally on a per-entity basis (node, taxonomy term, etc.) based upon your configuration of the module.
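For illustration, the rendered tag shown earlier can be reproduced with a tiny helper. renderMetaTag is hypothetical — the Meta Tags module renders this server-side via the DrupalTextMetaTag class:

```javascript
// Render a simple named meta tag, matching the output shown above.
function renderMetaTag(name, content) {
  var esc = function (s) {
    return String(s).replace(/&/g, '&amp;').replace(/"/g, '&quot;');
  };
  return '<meta name="' + esc(name) + '" content="' + esc(content) + '" />';
}

console.log(renderMetaTag('my_custom_metatag', 'This is a default value.'));
// <meta name="my_custom_metatag" content="This is a default value." />
```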

Brian is the foremost authority on all things Mobile at Phase2 Technology. Brian frequently speaks about topics like front-end web performance, jQuery Mobile, and RaphaelJS. He has been working with Drupal since 2005 and has presented at ...

Jun 04 2012

Posted Jun 4, 2012

Perhaps you already know Panelizer, the new-ish module that lets you turn any content type into a Panel, allowing a content editor to customize the layout of individual pieces of content as they are created.

Panelizer is a pretty cool tool that I've only recently started to explore on a new project. One of the tasks this new project needed was for the default Panelizer settings and configuration to be put into code and enabled as part of the site install process.

Panelizer defaults are a combination of CTools exportables and a handful of variables, which means if your site is already using Features and Strongarm, you can just bundle up those defaults into a custom Feature and you should be good to go. On this particular project, however, we aren't using Strongarm, so we needed to do this the "old fashioned" way.

In case you do, too, whether it's because you just love using plain Drupal core functionality or because for whatever reason you aren't or can't use Strongarm and Features, here's how to export your default Panelizer configuration into a simple custom module:

To get started let's create our own custom module for Drupal 7: my_panelizer

Create a my_panelizer.info file and enter the following into the file:

name = My Panelizer Defaults
description = Default settings for Panelizer customizations
dependencies[] = panelizer
package = custom
core = 7.x
files[] = my_panelizer.panelizer.inc

If you notice in the installation instructions for Panelizer it says the following:

"Visit the Page Manager administer pages page and enable the node template system page (node_view) if it is not already enabled. Panelizer won't work without this enabled!"

Well, we don't want to have to have such a manual step when installing these defaults, so let's do this in code instead.

Create my_panelizer.install with the following:

/**
 * Implements hook_install().
 */
function my_panelizer_install() {
  // Set up variables and actions for Panelizer configuration.
  my_panelizer_setup_panelizer();
}

function my_panelizer_setup_panelizer() {
  // Enable the node/%node page manager plugin.
  variable_set('page_manager_node_view_disabled', FALSE);
}

Now, when this module gets installed it will enable the node_view template in Page Manager for us.

Let's move on to the actual Panelizer defaults. After configuring your default Panelizer settings via the admin UI for a content type, we can export those defaults.

Enable the Bulk Export module that is part of CTools, go to its administration page at Administration -> Structure -> Bulk Exporter and select the Panelizer defaults you'd like to export. Enter my_panelizer into the Module Name field and press Export.

You'll now need to create two new files: my_panelizer.module and my_panelizer.panelizer.inc Copy and paste from the textareas on the Bulk Exporter page into those respective files and you've now captured a good portion of your default Panelizer settings.

What's still missing, however, are four variables for each content type for which you've enabled Panelizer. To get these values you can either head into MySQL or use a handy tool like Drush to help you. I prefer Drush so that's what these instructions will use.

The first variable that we need to get is panelizer_defaults_node_<content-type> where <content-type> is the machine name of the content type.

For the remainder of this example, we will assume that your content type's machine name is 'page'

In your command line get to your Drupal docroot and then type:

drush vget panelizer_defaults

You should get back something like this:

panelizer_defaults_node_page: Array
(
    [status] => 1
    [default] => 1
    [choice] =>
)

Now, in your my_panelizer_setup_panelizer function in my_panelizer.install add the following:

  // Enable Panelizer on Basic Pages
  variable_set('panelizer_defaults_node_page', array('status' => TRUE, 'default' => TRUE, 'choice' => FALSE));

At this point, you have enough to install this module and set up this content type with a Panelizer default that will allow all layout choices and all content options. If you don't need to restrict layouts or content options, you can stop here.

If, however, you do need to restrict those things, we have a bit more work to do. There are three more variables that we need to set to restrict layouts and content options. One of these variables, however, stores an Object of type panels_allowed_layouts so setting it with variable_set() is not as easy as it could be.

If you go to your Panelizer default settings and configure the Allowed Layouts to only allow the built in "Flexible" layout and then save the settings, we can find the variable with Drush:

drush vget panelizer_node:page_allowed_layouts

Drush should return something like:

panelizer_node:page_allowed_layouts: "O:22:"panels_allowed_layouts":4:{s:9:"allow_new";b:1;s:11:"module_name";s:19:"panelizer_node:page";s:23:"allowed_layout_settings";a:10:{s:8:"flexible";b:1;s:14:"twocol_stacked";b:0;s:13:"twocol_bricks";b:0;s:6:"twocol";b:0;s:25:"threecol_33_34_33_stacked";b:0;s:17:"threecol_33_34_33";b:0;s:25:"threecol_25_50_25_stacked";b:0;s:17:"threecol_25_50_25";b:0;s:6:"onecol";b:0;s:8:"flexgrid";b:1;}s:10:"form_state";N;}"

This isn't super useful -- it's a serialized object and is hardly legible. Moreover, when I attempted to simply take this serialized string and set it during the install process, it didn't work. The trick is that this is a special type of object from Panels, and luckily there is still a way to create it programmatically.

Add the following into my_panelizer_setup_panelizer():

  ctools_include('common', 'panels');
  $allowed_layouts = new panels_allowed_layouts();
  $allowed_layouts->allow_new = TRUE;
  $allowed_layouts->module_name = 'panelizer_node:page';
  $allowed_layouts->allowed_layout_settings = array(
    'flexible' => TRUE,
    'twocol_stacked' => FALSE,
    'twocol_bricks' => FALSE,
    'twocol' => FALSE,
    'threecol_33_34_33_stacked' => FALSE,
    'threecol_33_34_33' => FALSE,
    'threecol_25_50_25_stacked' => FALSE,
    'threecol_25_50_25' => FALSE,
    'onecol' => FALSE,
    'flexgrid' => FALSE,
  );
  $allowed_layouts->save();

The above code includes a file from Panels that lets us create a new object of type panels_allowed_layouts, set some values on it, and then call its save() method, which does the job of saving this to the variables table for us.

Last, but not least, is how to configure whether your Panelizer will allow all new content options or only specific values. The variable panelizer_node:page_default stores an array of which content option types allow all items added after this configuration is set and you'll set it in my_panelizer_setup_panelizer() like so:

  variable_set('panelizer_node:page_default', array(
    "token" => FALSE,
    "entity_form_field" => FALSE,
    "entity_field" => FALSE,
    "entity_field_extra" => FALSE,
    "custom" => FALSE,
    "block" => FALSE,
    "entity_view" => FALSE,
    "other" => FALSE,
  ));

Then, if you've allowed only specific content options, you need to set panelizer_node:page_allowed_types. This is a very large array of options and you'll set it something like this (array snipped for brevity):

  variable_set('panelizer_node:page_allowed_types', array(
    "node_form_author-node_form_author" => 0,
    "node_form_buttons-node_form_buttons" => 0,
    "node_form_comment-node_form_comment" => 0,
    "node_form_log-node_form_log" => 0,
    "node_form_menu-node_form_menu" => 0,
    "node_form_path-node_form_path" => 0,
    "node_form_publishing-node_form_publishing" => 0,
    "node_form_title-node_form_title" => 0,
    "node_attachments-node_attachments" => 0,
    "node_author-node_author" => 0,
    "node_body-node_body" => "node_body-node_body",
    "node_comment_form-node_comment_form" => 0,
    "node_comments-node_comments" => 0,
    "node_content-node_content" => 0,
    "node_created-node_created" => 0,
    "node_links-node_links" => 0,
    "node_terms-node_terms" => 0,
    "node_title-node_title" => 0,
    "node_type_desc-node_type_desc" => 0,
    "node_updated-node_updated" => 0,
    "node-node" => "node-node",
    "form-form" => 0,
    "panelizer_form_default-panelizer_form_default" => 0,
    // ... remaining options snipped for brevity ...
  ));

And that's it! With these four variables and your exported Panelizers, you should now be able to enable this module (perhaps as part of your install profile) and your content type should be set up with your configured Panelizer defaults.

One final note: I was having some trouble getting Panelizer 7.x-2.0 to read the defaults from code; however, when I updated to Panelizer 7.x-2.x-dev everything started working just fine.

Happy coding!

Brian is the foremost authority on all things Mobile at Phase2 Technology. Brian frequently speaks about topics like front-end web performance, jQuery Mobile, and RaphaelJS. He has been working with Drupal since 2005 and has presented at ...

Feb 16 2012

Drupal 7 introduced the EntityFieldQuery API. This class is a mechanism for quickly building queries of entities and fields – typically nodes, but of course not limited to that.

Drupal 7 features a much richer database abstraction layer than previous versions of Drupal, but in the end you are still more or less building SQL. EntityFieldQuery allows you to construct queries without much knowledge of SQL at all.

Drupal core itself mostly uses EntityFieldQuery as a utility function, in many cases simply to check that a given field has a value somewhere in the system or not. It’s strange that such a powerful mechanism is put to so little use by core, but no matter. Let’s see how we can use it.

Starting Your Query

Starting an entity field query looks like this:

$query = new EntityFieldQuery();

Extremely simple, right? That’s all it takes to get started. Let’s now add some conditions to our query.

$query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', 1)
  ->propertyOrderBy('created', 'DESC');

Let’s take a look at what’s going on here. First of all, notice the lack of semicolons between method calls. Each EntityFieldQuery method returns the same EntityFieldQuery object the method is called on. This allows us to chain method calls together, which speeds up coding and makes it easier to read. This will look familiar to anyone who uses jQuery. This could just as easily be written like so:

$query->entityCondition('entity_type', 'node')->entityCondition('bundle', 'article')->propertyCondition('status', 1)->propertyOrderBy('created', 'DESC');

But, that is really not easy at all to read. I recommend putting each method on its own line. Future generations of coders will thank you.

Now let’s look at each method and see what’s happening.

->entityCondition('entity_type', 'node')

The first method uses an entityCondition to tell the query to look for node entities. It may seem like specifying this is redundant, but as many other things in Drupal 7 are also entities – users, taxonomy terms, etc – you need to restrict your set.

->entityCondition('bundle', 'article')

The second method tells the query which node types to restrict the set to. This can be left out if you would like to query all available nodes, but in most cases you will be looking for specific node types. Note also that the second argument, 'article' in this case, can be either a string or an array; the query will adjust automatically. So if you were looking for both article and page node types, you could rewrite that part of the chain as follows:

->entityCondition('bundle', array('article', 'page'))

As you can see, it’s extremely easy to expand your query.

->propertyCondition('status', 1)

The third method is a propertyCondition. A property in this case is any column in the base table for the given entity type. In the case of nodes, this could be whether it is published, the user that created the node, time of creation, etc. In our example above, we restrict the query to published nodes only.
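Multiple property conditions can be chained together freely. For instance (a purely illustrative sketch — the uid value here is hypothetical), to further restrict the published results to a single author:

```php
// Hypothetical: only published nodes authored by user 1.
$query->propertyCondition('status', 1)
  ->propertyCondition('uid', 1);
```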

->propertyOrderBy('created', 'DESC')

The last method in our example above uses propertyOrderBy to set an order. In this case, we’re simply asking for the results to be returned in reverse chronological order.

Querying Fields and Limiting

Now let's add a query on a field value using a fieldCondition. Let's assume each of the node types in our example has a field 'field_us_state' assigned to it. We're going to find nodes that are associated with the New York Tri-state area, which also includes Connecticut and New Jersey. This would look like this:

$query->fieldCondition('field_us_state', 'value', array('CT', 'NJ', 'NY'));

We can order by field values as well, if that is useful to us. Note that ordering conditions are processed in the order that they are added to the query.
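A sketch of such a field-based ordering (hypothetical — our running example only orders by a property) might look like this, with the two ordering calls applied in the order they are chained:

```php
// Hypothetical: group results by state first, newest first within each state.
// Ordering conditions are processed in the order they are added.
$query->fieldOrderBy('field_us_state', 'value', 'ASC')
  ->propertyOrderBy('created', 'DESC');
```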

Perhaps for our purposes, we want to limit our query to the 10 most recent items. Let's add a range:

$query->range(0, 10);

Et Voilà

Finally, we execute the query and assign that to a variable:

$result = $query->execute();

This returns us an array of entity ids that match the conditions specified for the query. If we are querying nodes, the information will be under $result['node']. In most cases, what you actually want are the nids that have been returned. This is easily accomplished:

$nids = array_keys($result['node']);

Putting that all together, we have:

$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', 1)
  ->propertyOrderBy('created', 'DESC')
  ->fieldCondition('field_us_state', 'value', array('CT', 'NJ', 'NY'))
  ->range(0, 10);
$result = $query->execute();
$nids = array_keys($result['node']);

This is not very much code at all for this result. Moreover, we don't need to know anything about SQL or even about the underlying storage engine being used. We simply write the query in this simple syntax, and the EntityFieldQuery API does the job of translating that into a query that Drupal's database abstraction layer understands, and executing it for us.

What would we do with this list of nids? Pretty much anything. We might load them, for starters:

$nodes = node_load_multiple($nids);

We might want to display these 10 nodes in a teaser style. That’s also easily done. Let’s generate a Drupal render array:

$output = node_view_multiple($nodes);
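From there, the render array can be returned wherever markup is needed — for instance (a sketch, with a hypothetical page callback name) from a menu callback:

```php
// Sketch of a hypothetical page callback returning the rendered teasers.
function mymodule_recent_articles_page() {
  // ... build $nids with EntityFieldQuery as shown above ...
  $nodes = node_load_multiple($nids);
  $output = node_view_multiple($nodes, 'teaser');
  return drupal_render($output);
}
```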

More To Come

This is just the beginning. Look for part 2 of this post, where we will show more concrete examples of putting EntityFieldQuery to use. Also look out for posts from other Treehouse engineers explaining more advanced topics related to EntityFieldQuery.

Nov 15 2011

We regularly need to create workflows that enable anonymous users to add content to one of our Drupal sites. Often it is desirable to make anonymous users confirm the nodes they post by following a generated 'secret' link they have received by email.

Thanks to the indispensable Rules and Flag modules this can be accomplished without ever leaving the administration area of a Drupal site. Now don't misunderstand me. I love to code. But the Rules module often offers a faster and more easily maintainable way to create complex processes. It also keeps Drupal installs relatively clean, since Rules is able to sit in for a whole bunch of contrib modules.

But let's get back to work on the anonymous email confirmation process! I do assume some basic experience with Rules. For a first introduction I can do no better than refer you to the great series of Rules tutorials by Johan Falk at Node One.

My screencasts below are of a Drupal 6 install. I had to recreate the rules for D7 last week; they follow the same logic as the D6 version.* Exports of both the D6 and D7 rules are attached. Both have been exported using a content type called "Complaint" (machine name "complaint") - different from the content type name seen in the screencasts.

A. Prerequisites

1. Download and enable the necessary modules

Download and enable the following modules:

- cck
- email (for the email field)
- flag
- token
- rules
- rules_forms (only necessary for D7)
- pathrules (only necessary for D6)

2. Create the Content Type you want anonymous users to submit

Create the relevant Content Type.

Add a regular text field to the Content Type named "random" (do not hide this field under the "Manage Display" tab - that would mess with the first rule we will create; I will hide the Random form field later on using a form rule).

Add an e-mail field called "email".

Set the permissions for the content type: anonymous users should be able to create this particular content type, and the "random" field should be editable and viewable by anonymous users.

3. Add a new node Flag

Add a new Flag at the flag administration page. Make it of type "nodes", set it to "global" and assign it to the Content Type created in step 2.

B. Rules

It's as easy as one, two, three!

5. First rule: Send e-mail on submit IF anonymous user

The first rule checks whether someone trying to submit the relevant content type is an anonymous visitor of the site. If that is the case, it will generate a random string (borrowing the Drupal function user_password()), add it to the "random" field, save the node, and mail the user (who entered an e-mail address in the form) a link that is a combination of the node's URL and the generated string.

[embedded content]

Relevant code :
return array(0 => array('value' =>  user_password()));

6. Second rule: Flag content as visible when correct URL from mail

The second rule checks if the value contained in the link received by the anonymous user conforms to the hidden saved value of the random field.

[embedded content]

Relevant code:

7. Third rule: If content NOT flagged hide for anonymous user

The third rule is simple: if an anonymous user tries to view content that is not flagged visible, redirect the user to the front page.

[embedded content]

9. The proof of the pudding

If everything went well, an anonymous user should receive the following mail on submitting our specially prepared content type:

Click the link - and our previously hidden content is visible! It worked!

The fourth rule makes sure that content added by logged in users is automatically made visible. The fifth hides the random entry field on the submit page of our content type. These last two rules are complementary, so I did not include screencasts for them. All rules for both D6 and D7 versions of Rules are attached below (remember: content type "Complaint", machine name "complaint").

* Though I had to remember to set the correct user for my flag rule to make it fire for anonymous users - the user "on whose behalf to flag" does not need to be the currently logged in or acting user in D7 Rules. You can specify any existing user that has sufficient rights to flag. Just switch to "direct data selection" and enter the uid of a user with sufficient rights to set a flag.

Preview Attachment Size complaint_rules_export_d7.zip 2.38 KB complaint_rules_export_d6.zip 2.25 KB
Sep 19 2011

Last week I teamed up with Lullabot for a super awesome give-away: five copies of my latest book (which *I* don't even have a copy of yet) were given away via Twitter. The contest is closed, but follow @diwd as I'm pretty sure they've got something else up their sleeve (*hint*hint*).

What does this have to do with a make-over? Well. Some time ago I whipped together a quick (and really dirty) theme for my personal site. I was trying to separate my "tech" writing from my "human" (craft/cooking/gardening) writing for various reasons that made a lot of sense at the time. And then I got really busy doing a lot of other things and pretty much stopped blogging. (Sound familiar?) I'd been trying to think of a way to solve the front page, but it was just never really a priority.

Until last Friday. Sweet mother of a cow, the twitter contest for my new book was pointing to my really awful home page! So I started trawling through free Drupal themes and static templates and I may have even started looking at WP themes that I could convert to Drupal. None of them were quite right. I was sad. My personal home page was still ugly and I didn't know how to make it suck less *immediately*.

Google Alerts to the rescue. Today I got an alert for "Drupal theming" letting me know that Drupal Style had updated their site. I clicked through to see what they were up to and found the PERFECT theme for my needs.

Here are the modifications I made:

  • Created custom images for the featured blocks (one for each of the books, one for twitter and one for my very neglected blog). In case you're curious, the twitter bird is from here; and the RSS icon is from there (a great set which Eaton told me about).
  • Custom block template files to put the images in the right spots for each of the featured blocks in the footer.
  • Customized the template page-front.tpl.php to remove the content-related variables and update the page title to use a bookmark-friendly title.
  • Customized the template page.tpl.php to move the "featured blocks" region to the bottom of the page, and completely removed the banner from inside pages. (This means I don't have to worry about customizing the blocks to only show on some pages.)
  • Updated the CSS to make the content appear as dark text on a light background (matches the mostly white front page) and adjusted the height of the featured footer blocks.

I'm sure I'll continue to make the odd tweak here and there, but that's basically it.

Total time from finding the theme to relaunch: about three hours. Total time to find a starting theme that "clicked" for me: at least three months.

Have you got a similar story about finding the perfect theme? I'd love to hear it--leave your story in the comments (don't forget to link to your site and the theme you used).

Jul 10 2011

Although the following code (a video embed code to be used by visitors of a video site) is discussed in the light of a MediaMosa related site, the implementation can easily be generalized to other video solutions.
It has been a pleasure to work with the MediaMosa framework, a tried and tested Dutch open source software solution enabling you to easily build a full featured, webservice oriented media management and distribution platform - sort of a Drupal-based YouTube in a box, including both client and server solutions.

The client site we developed is based on the MediaMosa CK module, which adds support for MediaMosa videos to the Embedded Media Field module.

Almost everything we needed worked out of the box. There was just one major video-related feature in our spec that we had to cater for ourselves: the embedding of our videos in other sites. Because the MediaMosa server provides temporary tickets for the within-site embed code, any embed solution has to retrieve a new ticket (URI) from the remote site.

Taking a cue from YouTube we decided the most efficient way to do this might be to simply provide the video in an iFrame.

To do this, we needed to implement two features:

- a special URL, showing only the video
- the iFrame link, easy for a user to copy and paste

As usual there are several ways to implement this kind of functionality in Drupal. We could, for example, have created a simple module. Yet in this case we decided to make use of modules already installed.

Basic Video URL

For the first step, the special URL, we used the ThemeKey module. ThemeKey lets you define theme-switching rules that automatically select a theme depending on, among other things, query parameters. That way we can conditionally select an empty, clean theme showing only the video, just by adding a specific query parameter to the default URL.

In order to get this to work, first download and enable the ThemeKey (themekey, themekey_ui) module, and on D6, ThemeKey Properties (already incorporated in ThemeKey for D7). Next, create a blank theme, containing almost no CSS or markup (like this one, for example) and enable this blank theme as well.

Now go to admin/settings/themekey and add the following:

Adding "?embedded" to any URL will from now on serve the page using our blank theme.

To really only show the video, we do still have to remove all the other fields that are part of the mediamosa_videocontent content type. To do this, we can add a custom node-mediamosa_videocontent.tpl.php (don't forget to copy a default node.tpl.php to the theme dir as well to enable theming content types this way) to our blank theme directory, which only shows the relevant video-field:

print $node->field_mediamosa_videofile[0]["view"];

Don't forget to clear the cache after making changes in your tpl files, go to a video page, add "?embedded" and there you are: a clean, embeddable video.

The Embed Code

Step two is even easier than step one: all we have to do is make an iFrame available containing the embeddable link to the video we just created. Let's test this by hand-typing an iFrame in my blog (different from the MediaMosa site) first:

<iframe width="425" height="375" src="http://site.com/content/video_page?embedded" frameborder="0" allowfullscreen></iframe>

If width and height are correct, the video should appear perfectly:

Now all that is left to do is to automate the generation of the embed links, making them available to the visitors of the site. Since this field is just a variation upon the URL, it can easily be "computed" from values already available within this node - clearly something that can be solved using the Computed Field module.

As usual, first download and enable the computed_field module. Now add a Computed CCK field to the Video node, and use the following for the "Computed Code" section:

$node_field[0]['value'] = '<textarea class="share-embed-code"><iframe width="425" height="375" src="' . url('node/' . $node->nid, array('query' => 'embedded', 'absolute' => TRUE)) . '" frameborder="0" allowfullscreen></iframe></textarea>';

After saving a video-node containing this computed field, a textarea with the embed code will appear, enabling any visitor to embed the video on any other site:

Preview Attachment Size blank.zip 7.94 KB
Jun 19 2011

I often need to integrate Flash AS3 elements into Drupal projects, something made very easy by the great AS3 Drupal Proxy and Drupal AMFServer module from the guys at DPDK. Of the structures returned by the default Services module, the taxonomy tree needs a little more processing than most.

In order to parse the taxonomy tree in my AS3 frontend code I use the recursive function shown below (here on its own, stripped a bit - by default it is part of a DrupalUtils helper class and returns an as3ds tree).

As an aside: because ActionScript 3.0 primitive types like ints are immutable (in effect passed by value), I wrap the variable that keeps track of the callee level in a separate Reference class (enabling passing by reference) to forgo global variables, a no-no in recursive functions.

package org.drupal.amfserver
{
    public class DrupalNewsServer extends MovieClip
    {
        public function DrupalNewsServer()
        {
            proxy = new DrupalProxy(endpoint, DrupalProxy.VERSION_7);
            proxy.setHandler("taxonomy_vocabulary", "getTree", onTreeResult, onStatus);
            sequence.add(new CallBackTask(proxy.setRemoteCallId, "taxonomy_vocabulary.getTree(2)"));
            sequence.add(new DrupalInvokeTask(proxy, "taxonomy_vocabulary", "getTree", 2));
        }

        private function onTreeResult(data : DrupalData) : void
        {
            // hand the returned tree data to the recursive parser below
        }

        /* recursive parser */
        public function parseDrupalTree(obj : *, level : int = 0, parentTid : int = 0,
            main_level : Reference = null) : void
        {
            if (!main_level)
            {
                main_level = new Reference();
                main_level.value = 0;
            }
            // tabs for testing by trace
            var tabs : String = "";
            for (var i : int = 0; i < level; i++)
                tabs += "\t\t";

            var level_array : Array = new Array();
            for (var prop : String in obj)
            {
                if (obj[prop].depth == level
                    && obj[prop].parents.indexOf(String(parentTid)) != -1
                    && level_array.indexOf(obj[prop].tid) == -1)
                {
                    if (level > main_level.value)
                    {
                        // do things on down tree
                    }
                    if (level < main_level.value)
                    {
                        // do things on up tree
                    }
                    // trace
                    trace(tabs + "[" + obj[prop].tid + "] " + obj[prop].name + " level " + level);
                    level_array.push(obj[prop].tid);
                    main_level.value = level;
                    parseDrupalTree(obj, level + 1, obj[prop].tid, main_level);
                }
            }
        }
    }
}

class Reference {
    public var value : *;
}

The result is a nicely indented taxonomy tree:

[1] Milieu level 0
                [7] Subthema 3 level 1
                [8] Subthema 4 level 1
[3] Scholen level 0
                [7] Subthema 3 level 1
                [9] Subthema 5 level 1
[4] Educatie level 0
                [5] Subthema 1 level 1
                                [7] Subthema 3 level 2
                [6] Subthema 2 level 1
[2] Vrouwen level 0
                [10] Subthema 6 level 1

When used with as3ds tree structure this translates for example into the following (AS3 application screenshot):

Jun 10 2011

When moving a Drupal install from one server to the next you often need to replace paths within several database fields and tables. To make this chore a little easier I have started to use the following DB script (thanks, krazyworks).

#!/bin/bash

echo -n "Enter username: " ; read db_user
echo -n "Enter $db_user password: " ; stty -echo ; read db_passwd ; stty echo ; echo ""
echo -n "Enter database name: " ; read db_name
echo -n "Enter host name: " ; read db_host
echo -n "Enter search string: " ; read search_string
echo -n "Enter replacement string: " ; read replacement_string

MYSQL="/usr/bin/mysql --skip-column-names -h${db_host} -u${db_user} -p${db_passwd}"

echo "SHOW TABLES;" | $MYSQL $db_name | while read db_table
do
    echo "SHOW COLUMNS FROM $db_table;" | $MYSQL $db_name | \
    awk -F'\t' '{print $1}' | while read tbl_column
    do
        echo "update $db_table set ${tbl_column} = replace(${tbl_column}, '${search_string}', '${replacement_string}');" | \
        $MYSQL $db_name
    done
done

The DB info needed for the script can be found using Drush:

drush sql-connect

Jun 03 2011

In Drupal 6 you could go into the taxonomy section of the admin area and look at the vocabulary edit URL to find the numerical vocabulary id. In Drupal 7 the URL is no longer as verbose, as it now shows the machine name of the vocabulary, for example admin/structure/taxonomy/my_vocabulary/edit.

If you have access to Drush, there is another way to quickly find the VID though:

drush php-eval '$tax=taxonomy_vocabulary_machine_name_load("main_site_structure"); echo $tax->vid;'
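The same lookup works from module or hook code, of course — a quick sketch using the Drupal 7 API (the vocabulary machine name here is just an example):

```php
// Look up a vocabulary ID (vid) by machine name in Drupal 7.
$vocabulary = taxonomy_vocabulary_machine_name_load('main_site_structure');
if ($vocabulary) {
  $vid = $vocabulary->vid;
}
```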

May 18 2011

Facebook continues to grow at a rapid pace, and many sites have started to integrate Facebook Connect as a single sign on solution. Drupal has two modules available for integration with Facebook Connect: FB Module and Facebook Connect Module. This post uses our recent experiences to show how Facebook Connect can be extended in Drupal to offer a one-click to full profile (including image) solution. The results can look like this:

Stock functionality

As mentioned, there are two relevant modules here:

  • FB Module can be used for single sign on and complete Facebook application building, but it is quite complex.
  • Facebook Connect Module, on the other hand, only has the connect functionality and allows some access to the Facebook API, but it is simpler to use and extend.

Trellon's solution

While much of the functionality we needed was already available with the Facebook Connect Module, we needed a fully-integrated solution which would allow us to generate a full profile in one step. On the site we built, authenticated users can create acts of green, pledge to commit acts of green, create and register for events, and post comments. We wanted to eliminate barriers to participation which, in this case, meant creating an account quickly as part of a larger workflow.

Without Facebook integration, the user had to choose a password, supply a user name, and then upload his/her picture and fill out his/her profile information. This deterred many users. So, the site had many anonymous faces and lost quite a few potential users at the registration/login screens.

Our implementation of Facebook Connect changed this, and the many faces on the site prove this strategy to be a great success!

Now, when a user lands at the registration screen to pledge for an "Act of Green," s/he just needs to click "Quick-Register with Facebook," then allow the site access to his/her general information and email address. The user is then registered, logged in, and has pledged with his/her full name and user image.

This solution also increases site stickiness. When users return to the site and want to perform another action, they can easily login via facebook and be notified with a nice message telling them that they are already registered.

Can you make this any shorter?

Technical Challenges

The technical challenge here was to extend the facebook connect module to allow us to do all we wanted, while keeping our own customizations separate.

The first change we made to the module (which will soon be released as patches) was to get destination= URLs to work. Most of our workflow depended on this functionality.

if (user_is_anonymous()) {
  drupal_goto('act_commitment', array('act_id' => $node->nid));
}

Here act_commitment was showing the Drupal registration form.

Once this was complete, we had to make it possible to add "quick-register" buttons to our customized registration and login forms.

function mymodule_custom_add_fbconnect($type, $text, $desc, &$form) {
  $attr = array("text" => $text);
  if ($type == 'register') {
    $value = fbconnect_render_register_button($attr);
  }
  else if ($type == 'login') {
    $value = fbconnect_render_login_button($attr);
  }
  else {
    $value = fbconnect_render_button($attr);
  }
  $form['fbconnect_mymodule_button'] = array(
    '#type' => 'item',
    '#description' => $desc,
    '#value' => $value,
    '#weight' => -20,
    '#id' => 'fbconnect_edn_button',
    '#suffix' => '<hr />',
  );
}
Those customizations could then be added as easily as:

mymodule_custom_add_fbconnect('register', t('Quick-Register via Facebook'), t('Register using Facebook'), $form);

We also added custom hooks to allow for easier usage of the Facebook API by other modules and by supplying the data used for registration.

$facebook = fbconnect_get_facebook();

Here we can now use the $facebook object, which gives us direct access to the Facebook SDK.
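For instance (a hypothetical sketch — the exact calls depend on the Facebook PHP SDK version bundled with the module), fetching the connected user's profile via the Graph API might look like:

```php
// Hypothetical: use the SDK object to query the Graph API directly.
$facebook = fbconnect_get_facebook();
if ($fbuid = $facebook->getUser()) {
  $profile = $facebook->api('/me');
}
```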

We also changed the quick registration to run the normal submit[] hooks to allow for usage of Content Profile, and even made it possible to get the Facebook data gained via quick registration:

function mymodule_custom_fbconnect_user_data($data) {
  global $user;
  $content_profile = content_profile_load('profile', $user->uid);
  if ($content_profile->nid > 0) {
    $content_profile->field_first_name[0]['value'] = $data['first_name'];
    $content_profile->field_last_name[0]['value'] = $data['last_name'];
    node_save($content_profile);
  }
}

And last, but not least, we added some APIs to easily retrieve the image URL for display.

$image_url = fbconnect_get_user_image_url($user->fbuid, 'big');

Conclusion and Future

All of those changes will be supplied as patches to the main fbconnect module in the following weeks. If you can't possibly wait any longer, you can download our fork below, which is attached to this post both as the module and as an archive of patches.

UPDATE: Since we originally posted this blog entry, some users reported issues with being able to log in using Facebook. We found an issue with how Varnish was configured that prevented some cookies from Facebook from properly being seen within Drupal. This has been resolved, and we adjusted our patches to prevent this error from happening again. The issue can be found here: http://drupal.org/node/1162960.

We also put up a demo site at http://fbconnect-demo.trellon.com/user/register that shows off the single click functionality. This site is not configured with Varnish, so it should be able to work for everyone regardless of how the server is configured.

Apr 25 2011

For a Webform based survey site I needed to create questions offering a choice between some *conditionally shown* answers. Because I had very little time, I decided to make some quick & easy changes to the Webform template, instead of creating a custom module.

My solution was to check for previous entries within the webform-form.tpl.php file and remove unneeded answers with some jQuery magic. Since it is no problem if the user unexpectedly still sees the hidden answers (for example, when JS is turned off), this suffices for now.

The resulting quick-n-easy adapted webform-form.tpl.php is attached below.

<?php
/**
 * @file
 * Customize the display of a complete webform.
 *
 * This file may be renamed "webform-form-[nid].tpl.php" to target a specific
 * webform on your site. Or you can leave it "webform-form.tpl.php" to affect
 * all webforms on your site.
 *
 * Available variables:
 * - $form: The complete form array.
 * - $nid: The node ID of the Webform.
 *
 * The $form array contains two main pieces:
 * - $form['submitted']: The main content of the user-created form.
 * - $form['details']: Internal information stored by Webform.
 */

// Retrieve total pages and current page.
$current_page = $form['details']['page_num']['#value'];
$total_pages = $form['details']['page_count']['#value'];
?>
<div class="webformed-<?php echo $current_page; ?>">
<?php
echo '<div id="page-count">' . $current_page . " / " . $total_pages . '</div>';

// If editing or viewing submissions, display the navigation at the top.
if (isset($form['submission_info']) || isset($form['navigation'])) {
  print drupal_render($form['navigation']);
  print drupal_render($form['submission_info']);
}

// Print out the main part of the form.
// Feel free to break this up and move the pieces within the array.
print drupal_render($form['submitted']);

// Always print out the entire $form. This renders the remaining pieces of the
// form that haven't yet been rendered above.
print drupal_render($form);

// Print out the navigation again at the bottom.
if (isset($form['submission_info']) || isset($form['navigation'])) {
  print drupal_render($form['navigation']);
  print drupal_render($form['submission_info']);
}
?>
</div>
<script type='text/javascript'>

// Check if current page is relevant one.
if ($current_page==15) {

  // Retrieve current submission id.
$sid = $form['#submission']->sid;
// Include webform.submissions.inc.
include_once(drupal_get_path('module', 'webform') .'/includes/webform.submissions.inc');
// Retrieve results
$subm = webform_get_submission($nid, $sid);

  // Remove relevant answers using jQuery - no case statement, loop or function, just some quick if/thens 
if ($subm->data[22]['value'][0]!=4) echo "$('#edit-submitted-question77-1-wrapper').remove();"
  if (
$subm->data[33]['value'][0]!=7) echo "$('#edit-submitted-question77-2-wrapper').remove();";
  if (
$subm->data[64]['value'][0]!=3) echo "$('#edit-submitted-question77-3-wrapper').remove();"
  if (
$subm->data[15]['value'][0]==1) echo "$('#edit-submitted-question77-4-wrapper').remove();"
  if (
$subm->data[12]['value'][0]!=7) echo "$('#edit-submitted-question77-5-wrapper').remove();"
  if (
$subm->data[13]['value'][0]!=3) echo "$('#edit-submitted-question77-6-wrapper').remove();";


Mar 03 2011

In researching more streamlined options for our Aegir based dev/live cycle, I (re)discovered several takes on the staging problem: everything in code methodology, the deploy module, the site_update module, migraine, the patterns module and of course features.

Each seems interesting in its own right, so I have decided to do some testing to see if one or more can help us better our current staging procedure. I will start with the "site update" module, since it seems to offer a light yet efficient take on sharing settings between sites. It might also be easily integrated in our current Aegir based workstream.

So I got to work. Following the spirit of the README.txt I created three new sites in Aegir, a DEV, LIVE and BASE install. Drush downloaded some essential modules (views, cck, rules), of course including site_update-6.x-1.x-dev. Added a test content type, a test view & changed some permissions on the base install. Then enabled the module on all three sites, necessitating the install of the "bad judgement" module(!).

Following the base site configuration I tried to run the database dump script from the sites/basesite/ directory. That didn't work out of the box on our Aegir server, since the database dump script retrieves its database settings from settings.php, which in an Aegir based site is to be found in drushrc.php.

Luckily, all that was needed for Aegir compatibility were some small changes to the site_update_dump script:


$parse_error = false;

// RvE 03-03-2011 added drushrc $options
if (isset($db_url) && preg_match('/^mysqli?:\/\/(.*):(.*)@(.*)\/(.*)$/', $db_url, $matches)) {
  $db_conf->username = $matches[1];
  $db_conf->password = $matches[2];
  $db_conf->hostname = $matches[3];
  $db_conf->database = $matches[4];
}
else if (isset($options) && $options['db_type'] == 'mysqli') {
  $db_conf->password = $options['db_passwd'];
  $db_conf->database = $options['db_name'];
  $db_conf->username = $options['db_user'];
  $db_conf->hostname = $options['db_host'];
}
else {
  $parse_error = true;
}

if (!$parse_error) {
  // ... (rest of the script as before)
}

// RvE 03-03-2011 added -h option
function build_mysqldump_command($db_conf, $options, $ignore_tables, $include_tables) {
  $command = "mysqldump -u $db_conf->username -p$db_conf->password -h$db_conf->hostname $options";
  // ... (rest of the function as before)
}
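The $db_url parsing branch can be sanity-checked in isolation. Here is a minimal standalone sketch; the connection string is a made-up example, not from the original patch:

```php
<?php
// Standalone check of the $db_url regex used in the dump script, against
// a made-up Drupal 6 style connection string.
$db_url = 'mysqli://drupal_user:s3cret@localhost/drupal_db';
$db_conf = new stdClass();
$parse_error = TRUE;
if (preg_match('/^mysqli?:\/\/(.*):(.*)@(.*)\/(.*)$/', $db_url, $matches)) {
  $db_conf->username = $matches[1];
  $db_conf->password = $matches[2];
  $db_conf->hostname = $matches[3];
  $db_conf->database = $matches[4];
  $parse_error = FALSE;
}
print $db_conf->username . '@' . $db_conf->hostname . '/' . $db_conf->database . "\n";
// → drupal_user@localhost/drupal_db
```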


Now you are able to generate sites/basesite/database/site_update.sql by running site_update_dump from the base site root and copying the resulting site_update.sql to the previously created sites/devsite/database and sites/livesite/database directories.

Before running an update, there is one additional step specific to multi-site installs. Site_update by default looks for the SQL file in sites/all/database. In the case of a multi-site install you have to tell every site where its particular site_update.sql resides. Normally you set this path in settings.php. Within an Aegir controlled environment, you have to add the path to a local.settings.php file (since Aegir is allowed to override settings.php).

// Put the following in the site's settings.php or local.settings.php
$conf['site_update_sql_file'] = 'sites/devsite/database/site_update.sql';

Run update.php and all base data is indeed nicely added to the dev and live sites. Still, I had hoped the module might offer me a little more by default. For example creation of content type tables when adding new CCK types. For now I'll keep this module in mind. Next module to be tested: deploy.

// Added 06-03-2011: made a patch available at http://drupal.org/node/1081230#comment-4175934

Dec 21 2010

On a recent project we had to retroactively enable single sign-on for two existing Drupal sites: one a front-end site for the general public, the other an intranet built on Open Atrium. Both were hosted on the same Apache server, administered by Aegir.

Luckily both sites shared their main domain, database and web server. This meant we could make use of Drupal's built-in single sign-on feature, which only necessitates some changes to settings.php, as explained very nicely by Nate Haug at Lullabot.

I could almost follow Nate's suggestions to the letter, but for some small additions.

First, we had to add a local.settings.php to the root of both sites, since Aegir overrides settings.php. The front-end one just containing:

$cookie_domain = '.sitename.com';

Second, we had to create a MySQL user with access to both databases.

And finally we needed to explicitly state the new MySQL user (dbuser) and password (dbpass) in the database connection string, overriding Aegir's settings, and add the database names assigned by Aegir to the front and Open Atrium sites:

$db_url['default'] = "$_SERVER[db_type]://dbuser:dbpass@$_SERVER[db_host]:$_SERVER[db_port]/$_SERVER[db_name]";
$db_prefix = array(
  'default'   => 'databasename_open_atrium_site.',
  'users'     => 'databasename_front_site.',
  'sessions'  => 'databasename_front_site.',
  'authmap'   => 'databasename_front_site.',
);
$cookie_domain = '.sitename.com';

I left out the 'roles' table in the db_prefix array since in our case, each site has its own roles assigned to each user.
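To see why these prefix entries give us single sign-on, here is a simplified, illustrative sketch of what Drupal 6 does with the {table} placeholders in queries. The function name below is my own; the real logic lives in db_prefix_tables() in includes/database.inc:

```php
<?php
// Simplified sketch of Drupal 6's {table} rewriting: tables listed in
// $db_prefix get their configured prefix (here, another database), all
// other tables get the 'default' prefix.
function prefix_tables_sketch($sql, array $db_prefix) {
  $default = isset($db_prefix['default']) ? $db_prefix['default'] : '';
  foreach ($db_prefix as $table => $prefix) {
    if ($table !== 'default') {
      $sql = strtr($sql, array('{' . $table . '}' => $prefix . $table));
    }
  }
  // Remaining placeholders fall through to the default prefix.
  return strtr($sql, array('{' => $default, '}' => ''));
}

$db_prefix = array(
  'default'  => '',
  'users'    => 'databasename_front_site.',
  'sessions' => 'databasename_front_site.',
);
print prefix_tables_sketch('SELECT uid FROM {users} u INNER JOIN {node} n ON n.uid = u.uid', $db_prefix);
// → SELECT uid FROM databasename_front_site.users u INNER JOIN node n ON n.uid = u.uid
```

So queries against {users}, {sessions} and {authmap} transparently hit the front site's database, which is what makes the shared login work.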


Addendum 1 - 2011-5-6:

Since updating to the latest version of Drush/Aegir, the line:

$db_url['default'] = "$_SERVER[db_type]://dbuser:dbpass@$_SERVER[db_host]:$_SERVER[db_port]/$_SERVER[db_name]";

does not get parsed anymore. For now I have fixed this by replacing the $_SERVER values with their correct, fixed values.

Addendum 2 - 2011-5-6:

There seems to be a problem in saving CCK values when you set a value for the default prefix.

This can be fixed by using '' for this value, as in:

$db_prefix = array(
  'default'   => '',
  'users'     => 'databasename_front_site.',
  'sessions'  => 'databasename_front_site.',
  'authmap'   => 'databasename_front_site.',
);

Feb 03 2010

This evening, I uploaded a downloads section at the heartbeat demo site. The modules the site is created with, are downloadable from the site.

The site is created with lots of features to look like a community site. The modules I used for this are flag, friendlist (currently disabled), facebook_status, user_relationships, ds, nd, nd_contrib, cd, ud and of course heartbeat. All heartbeat submodules are enabled.

Heartbeat logs its activity with both rules and custom code, provided in the heartbeat_example submodule and in the custom module at the downloads page. All blocks and pages are displayed on the site, with the specific, current configuration for the demo site shown next to them.

You can test most features by logging in as one of the characters, or by becoming a new user. Here you can see what characters are available.


This entry was posted on Thursday, February 4th, 2010 at 12:39 am and is filed under Drupal, Heartbeat, PHP, Technology. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

Nov 05 2009

This blog post is a by-product of my preparation work for an upcoming talk titled "Why you should be using a distributed version control system (DVCS) for your project" at SAPO Codebits in Lisbon (December 3-5, 2009). Publishing these thoughts prior to the conference serves two purposes: getting some peer review on my findings and acting as a teaser for the actual talk. So please let me know — did I cover the relevant aspects or did I miss anything? What's your take on DVCS vs. the centralized approach? Why do you prefer one over the other? I'm looking forward to your comments!

Even though there are several distributed alternatives available for some years now (with Bazaar, git and Mercurial being the most prominent representatives here), many large and popular Open Source projects still use centralized systems like Subversion or even CVS to maintain their source code. While Subversion has eased some of the pains of CVS (e.g. better remote access, renaming/moving of files and directories, easy branching), the centralized approach by itself poses some disadvantages compared to distributed systems. So what are these? Let me give you a few examples of the limitations that a centralized system like Subversion has and how these affect the possible workflows and development practices.

I highly recommend also reading Jon Arbash Meinel's Bazaar vs Subversion blog post for a more elaborate description of the limitations.

  • Most operations require interaction with the central repository, which usually is located on a remote server. Browsing the revision history of a file, creating a branch or a tag, comparing differences between two versions — all these activities involve communication via the network. Which means they are not available when you're offline and they could be slow, causing a slight disruption of your workflow. And if the central repository is down because of a network or hardware failure, every developer's work gets interrupted.
  • A developer can only checkpoint his work by committing his changes into the central repository, where it becomes immediately visible for everybody else working on that branch. It's not possible to keep track of your ongoing work by committing it locally first, in small steps, until the task is completed. This also means that any local work that is not supposed to be committed into the central repository can only be maintained as patches outside of version control, which makes it very cumbersome to maintain a larger number of modifications. This also affects external developers who want to join the project and work with the code. While they can easily obtain a checkout of the source tree, they are not able to put their own work into version control until they have been granted write access to the central repository. Until then, they have to maintain their work by submitting patches, which puts an additional burden on the project's maintainers, as they have to apply and merge these patches by hand.
  • Tags and branches of a project are created by copying entire directory structures around inside the repository. There are some recommendations and best practices on how to do that and how these directories should be arranged (e.g. by creating toplevel branches and tags directories), but there are several variants and it's not enforced by the system. This makes it difficult to work with projects that use a non-standard way for maintaining their branches and can be rather confusing (depending on the amount of branches and tags that exist).
  • While creating new branches is quick and atomic in Subversion, it's difficult to resolve conflicts when merging or reconciling changes from other branches. Recent versions of Subversion added support for keeping better track of merges, but this functionality is still not up to par with what the distributed tools provide. Merging between branches used to drop the revision history of the merged code, which made it difficult to keep track of the origins of individual changes. This often meant that developers avoided developing new functionality in separate branches and rather worked on the trunk instead. Working this way makes it much harder to keep the code in trunk in a stable state.

Having described some downsides of the centralized approach, I'd now like to mention some of the most notable aspects and highlight a few advantages of using a distributed version control system for maintaining an Open Source project. These are based on my own personal experiences from working with various distributed systems (I've used Bazaar, BitKeeper, Darcs, git, Mercurial and SVK) and from following many other OSS projects that either made the switch from centralized to distributed or have been using a distributed system from the very beginning. For example, MySQL was already using BitKeeper for almost 2 years when I joined the team in 2002. From there, we made the switch to Bazaar in 2008. mylvmbackup, my small MySQL backup project, is also maintained using Bazaar and hosted on Launchpad.

Let me begin with some simple and (by now) well-known technical aspects and benefits of distributed systems before I elaborate on what social and organizational consequences these have.

In contrast to having a central repository on a single server, each working copy of a distributed system is a full-blown backup of the other repository, including the entire revision history. This provides additional security against data loss and it's very easy to promote another repository to become the new master branch. Developers simply point their local repositories to this new location to pull and push all future changes from there, so this usually causes very little disruption.

Disconnected operations allow performing all tasks locally without having to connect to a remote server. Reviewing the history, looking at diffs between arbitrary revisions, applying tags, committing or reverting changes can all be done on the local repository. These operations take place on the same host and don't require establishing a network connection, which also means they are very fast. Changes can later be propagated using push or pull operations - these can be initiated from both sides at any given time. As Ian Clatworthy described it, a distributed VCS decouples the act of snapshotting from the act of publishing.

Because there is no need to configure or set up a dedicated server or separate repository with any of today's popular DVCSes, there is very little overhead and maintenance required to get started. There is no excuse for not putting your work into revision control, even if your project starts as a one-man show or you never intend to publish your code! Simply run "bzr|git|hg init" in an existing directory structure and you're ready to go!

As there is no technical reason to maintain a central repository, the definition of "the code trunk" changes from a technical requirement into a social/conventional one. Most projects still maintain one repository that is considered to be the master source tree. However, forking the code and creating branches of a project change from being an exception into being the norm. The challenge of the project team is to remain the canonical/relevant central hub of the development activities. The ease of forking also makes it much simpler to take over an abandoned project, while preserving the original history. As an example, take a look at the zfs-fuse project, which got both a new project lead and moved from Mercurial to git without losing the revision history or requiring any involvement by the original project maintainer.

Both branching and merging are "cheap" and encouraged operations. The role of a project maintainer changes from being a pure developer and committer to becoming the "merge-master". Selecting and merging changes from external branches into the main line of development becomes an important task of the project leads. Good merge-tracking support is a prerequisite for a distributed system and makes this a painless job. Also, the burden of merging can be shared among the maintainers and contributors. It does not matter on which side of a repository a merge is performed. Depending on the repository relationships and how changes are being propagated between them, some DVCSes like Bazaar or git actually provide several merge algorithms that one can choose from.

Having full commit rights into one's own branch empowers contributors. It encourages experimenting and lowers the barrier for participation. It also creates new ways of collaboration. Small teams of developers can create ad-hoc workgroups to share their modifications by pushing/pulling from a shared private branch or amongst their personal branches. However, it still requires the appropriate privileges to be able to push into the main development branch.

This also helps to improve the stability of the code base. Larger features or other intrusive changes can be developed in parallel to the mainline, kept separate but in sync with the trunk until they have evolved and stabilized sufficiently. With centralized systems, code has to be committed into the trunk first before regression tests can be run. With DVCSes, merging of code can be done in stages, using a "gatekeeper" to review/test all incoming pushes in a staging area before merging it with the mainline code base. This gatekeeper could be a human or an automated build/test system that performs the code propagation into the trunk based on certain criteria, e.g. "it still compiles", "all tests pass", "the new code adheres to the coding standards". While central systems only allow star schemas, a distributed system allows workflows where modifications follow arbitrary directed graphs.

Patches and contributions suffer less from bit rot. A static patch file posted to a mailing list or attached to a bug report may no longer apply cleanly by the time you look into it. The underlying code base has changed and evolved. Instead of posting a patch, a contributor using a DVCS simply provides a pointer to his public branch of the project, which he hopefully keeps in sync with the main line of development. From there, the contribution can be pulled and incorporated at any time. The history of every modification can be tracked in much more detail, as the author's name appears in the revision history (which is not necessarily the case when another developer applies a patch contributed by someone else).

A DVCS allows you to keep track of local changes in the same repository, while still being able to merge bug/security fixes from upstream. Example: your web site might be based on the popular Drupal CMS. While the actual development of Drupal still takes place in (gasp) CVS, it is possible to follow the development using Bazaar. This allows you to stay in sync with the ongoing development (e.g. receiving and applying security fixes for an otherwise stable branch) and keeping your local modifications under version control as well.

I've probably just scratched the surface on what benefits distributed version control systems provide with this blog post. Many of these aspects and their consequences are not fully analyzed and understood yet. In the meanwhile, more and more projects make the switch, gather experiences and establish best practices. If you're still using a centralized system, I strongly encourage you to start exploring the possibilities of distributed version control. And you don't actually have to "flip the switch" immediately — most of the existing systems happily interact with a central Subversion server as well, allowing you to benefit from some of the advantages without you having to convert your entire infrastructure immediately.

Here are some pointers for further reading on that particular subject:

Oct 29 2009

So you're a small startup company, ready to go live with your product, which you intend to distribute under an Open Source License. Congratulations, you made a wise decision! Your developers have been hacking away frantically, getting the code in good shape for the initial launch. Now it's time to look into what else needs to be built and setup, so you're ready to welcome the first members of your new community and to ensure they are coming back!

Keep the following saying in mind, which especially holds true in the Open Source world: "You never get a second chance to make a first impression!". While the most important thing is of course to have a compelling and useful product, this blog post is an attempt to highlight some other aspects about community building and providing the adequate infrastructure. This insight is based on my own experiences and my observations from talking with many people involved in OSS startups and projects.

First of all, realize that your community is diverse. They have different expectations, skills and needs. Pamper your early adopters. They are the multipliers that help you to spread the word, if they are convinced and excited about what you provide. Put some faith and trust in them and listen to their input. In the beginning, you might want to focus on your developer community and the tech-savvy early adopters, but this of course depends on the type of product you provide and on what your target audience looks like. In any case, make sure that you provide the necessary infrastructure to cater the respective needs of these different user bases.

Also remember that you can not overcommunicate with your community. Blog heavily, write documentation/FAQs/HOWTOs, build up Wiki content and structure, create screencasts. Don't rely on the community to create any of this in the early stages. But be prepared to embrace and support any activities, if they arise. Solicit input, provide opportunities and guidelines for participation!

While it's tempting to do: don't establish too many communication channels in the beginning. Keep it simple and reduce the different venues of communication to an absolute minimum at this point. A new forum with many different topics but no comments looks like an art gallery with a lot of rooms, but they are either empty or there's just a single picture hanging at the wall. Nobody wants to visit that, he'd feel lost in the void. At the early stage of a project, I think it's essential to keep the discussions in as few places as possible. This helps you to identify your key community contributors (the "regulars" aka the "alpha geeks") and to build up personal relationships with them (and among themselves).

Consider establishing a forum with only a few topics, start with one or two mailing lists. Also make sure that these are actively being followed (e.g. by yourself or your developers) and that questions are being answered! I personally prefer mailing lists over forums, but I'm probably not representative. Ideally, it would be nice if there would be a unified communication hub that supports both posting via the web site like a forum, or via email or NNTP (similar to Google Groups). This keeps the discussions on one central place (which eases searching for specific keywords/topics) and still allows users to choose their preferred means of communication. Unfortunately, I haven't really found any suitable platform for this approach yet — suggestions are welcome! And once your community grows and people start complaining about too many or off-topic discussions, you can think about further separation of the discussion topics.

Allow your users to submit and comment on issues and feature requests by providing a public bug/feature tracking system. Use this system for your release tracking and planning as well, to give your users a better insight into what they can expect from upcoming versions. Also, make it very clear to your users where bug reports and feature requests should be sent to! Should one use the Forums or the bug tracker for that? A mailing list or forum makes it easier for users to participate in these discussions, but makes it more difficult to keep track of them and to ensure they are being followed up on. For the sake of simplicity, I would actually suggest to remove any separate forums about these topics. Instead, educate your community early about which is the right tool and venue to use for such requests. This saves time and resources on your side and helps to build up an initial core of community members that can then educate others about "the ropes". Otherwise you end up with the burden of keeping track of every feature request or bug report that was posted somewhere, ensuring it has been added to the bug tracker...

If your community infrastructure consists of separate building blocks to provide the required functionality (e.g. forums, bug tracking, wiki), consider setting up a single-sign on (SSO) technology and establish a unified look and feel between these applications. Your users should not be required to log in with more than one username and password, and every application should share the same login and profile data. However, only require a login, if absolutely necessary! Many users feel alienated by having to enter their personal data, even if they only want to lurk around or browse through existing discussions or documentation. As an additional benefit, it helps you to quickly identify your "community stars" in the various sections of your site: Who reports the most bugs? Who is the most helpful person on our Forums? This information could also be published on your community site, giving users the opportunity to build up reputation and karma. Community infrastructure sites like Drupal or Joomla provide an excellent foundation to get you started, while offering enough room for improvement and additional functionality at a later point.

Lower the entrance barrier and make it as easy as possible for people to get started with your application. Don't just throw a source archive at them, hoping that someone else will take care of doing the binary builds. Put some effort into building and providing binary, ready-to-install packages for the most popular platforms that your target audience is likely to use. The three most important platforms to cover are Microsoft Windows, Mac OS X and Linux. While users of the latter usually have the required tools and experience in building stuff from source, Windows and Mac users are usually "spoiled" and don't want to be bothered with having to install a full-fledged development environment before they could eventually evaluate your application.

When it comes to Linux distributions, you should look into building distribution-specific packages. This heavily depends on the requirements for external libraries that your application is using, which might differ on the various flavours of Linux. Depending on the purpose of your application, you may either focus on the more desktop/developer-centric distributions like Mandriva, openSUSE, Ubuntu, or on the distributions commonly used in server environments, e.g. Debian, CentOS, Fedora, RHEL, SLES (Yes, I am aware that most distributions are multi-purpose and serve both tasks equally well, and it's of course possible to use each of them to get the job done — it's a matter of taste and preference). If possible, make use of existing build infrastructure like Fedora's Koji build system, Launchpad's Personal Package Archives (PPA) or the openSUSE Build Service (which even allows you to build RPMs and DEBs for non-SUSE distributions) to automate the building and provisioning of distribution-specific packages for a wide range of Linux flavours. If your application is slightly complicated to install or set up, consider providing a live demo server that people can access via the Internet to give it a try. Alternatively, create ready-to-run images for virtual machines like Parallels, VirtualBox or VMWare. Everything that makes it easier to access, install and test your software should be explored.

In closing, make community involvement a part of your company culture and make sure that you preserve enough time to take care of it. Community engagement has so many different aspects, you don't necessarily have to be a developer or a very technical person to get involved. I'm aware that doing community work can be seen as a distraction and definitely takes away time from other tasks. But community involvement should become a habit and a well-accepted part of everyone's job — this is much easier to establish while you're still small and growing.

Oct 22 2008

Developers who have written a custom node module in Drupal will know that you always have to make a couple of choices at the start of a project. Will I write a custom node module or use CCK to generate a content type? Or when using Views, you might ask yourself whether it is better to write your own view. Or could it be wiser to use it all and take the best of all things into your project? Once I have such questions, I hear a little voice in my head: "Time to test and compare".

The initial project

We want a system with shops, products and presentations where shops are categorized in shop regions.

Fair enough, let's start and build a couple of CCK node types (content types). So go to cck > add content type and prepare shops, products and presentations. In this fictive example a shop will have products, and the shop would like to show presentations at its location to show off its products. We can use the taxonomy "shop groups" to group the shops.
Now we can make some views like "latest presentations" or "shops list per region" and so on. This works very nicely for simple projects, and I get the feeling CCK and Views are more than enough for this. Besides, there is no faster framework or website system that achieves what Drupal does in only ten minutes. With CCK you can add all kinds of fields and use them in the views under the "content" section. Customizable and unbelievably fast.

New request for proposal

The presentations should be shown with a preview button in the list, which shows a Flash movie slideshow with the presentations and thus the product images. After that we would like to clone a presentation if a new one is required that differs little from an existing one.

Damn, I'd say, why didn't I write that presentation custom in my own code so I could change whatever I want. Could we build a list of presentations with Views that shows a preview button to load a movie and a clone button to clone a presentation? Is it then also possible to add code to clone a presentation, while showing the clone button next to preview in the "operations" of the Views list?
The answer is brief and simple: oh yeah, you can.
To add stuff to Views that seems rather custom, we can use the hook_views_data hook to join tables and expose fields. This adds new features to Views, and that's exactly what we want. In views/modules you can peek at the code to see how the fields are built for the modules we know (node, content, taxonomy, user, ...).

Implementation of hook_views_data

The function hook_views_data has no parameters and returns a data array of database tables and their fields. The only thing you need to know about the database schema for presentations is that it is linked with node, has a start date, and has an xml field where generated XML is cached (with product entries). Let's build the body of presentations_views_data.
As you will see, the fields can have field handlers as well as sort and filter handlers. The handlers are class extensions. OOP in Drupal? Indeed, Views 2 is an incredible module written in OOP. In my opinion, merlinofchaos is a leading figure in how to write OOP code for Drupal.

I printed the hook_views_data implementation with the defined handler classes underneath. The comments help to explain what is going on.

function presentations_views_data() {
  $data = array();

  // Presentations table.
  $data['presentations']['table']['group'] = t('Presentations');
  $data['presentations']['table']['base'] = array(
    'field' => 'presentation_id',
    'title' => t('Presentations'),
    'help' => t("Presentations are groups of products."),
  );

  // Join node with presentations on node id.
  $data['presentations']['table']['join'] = array(
    'node' => array(
      'left_field' => 'nid',
      'field' => 'nid',
    ),
  );

  // FIELDS
  // start_date field.
  $data['presentations']['start_date'] = array(
    // The item it appears as on the UI.
    'title' => t('Presentation play date'),
    // The help that appears on the UI.
    'help' => t('The date the presentation will start playing.'),
    'field' => array(
      // Default views handler for date fields.
      'handler' => 'views_handler_field_date',
      'click sortable' => TRUE,
    ),
    'sort' => array(
      // Default views handler for sorting.
      'handler' => 'views_handler_sort_date',
    ),
    'filter' => array(
      // Default views date filter handler.
      'handler' => 'views_handler_filter_date',
    ),
  );

  // Clone a presentation.
  $data['presentations']['clone_node'] = array(
    'field' => array(
      'title' => t('Clone'),
      'help' => t('Provide a link to clone a presentation.'),
      'handler' => 'views_handler_field_presentations_link_clone',
    ),
  );

  // Link to peek/preview a presentation.
  $data['presentations']['preview_node'] = array(
    'field' => array(
      'title' => t('Preview'),
      'help' => t('Provide a simple link to open the presentation on a JavaScript onclick event.'),
      'handler' => 'views_handler_field_presentations_link_preview',
    ),
  );

  return $data;
}

You can see that the start_date array is composed with predefined handlers and there is nothing to it. There is a handler class for each database field type known in Drupal. In my case, I wanted to add a simple link to a custom menu callback function I wrote that clones an existing presentation. And it would be nice to pop up a JavaScript overlay box with a Flash animation showing the presentation.
In the last two field arrays, I defined my own handlers: views_handler_field_presentations_link_clone and views_handler_field_presentations_link_preview.
Let's first take a look at the code for the class that handles the preview link:

/**
 * Field handler to peek/preview a presentation.
 */
class views_handler_field_presentations_link_preview extends views_handler_field_node_link {
  function render($values) {
    global $base_url;
    // Load the extra JavaScript and CSS we need.
    $site_url = $base_url . '/sites/' . SITENAME . '/files';
    drupal_add_js('var files_url = "' . $site_url . '"; var base_url = "' . $base_url . '";', 'inline');
    drupal_add_js(drupal_get_path('module', 'presentations') . '/swfobject.js');
    drupal_add_js(drupal_get_path('module', 'presentations') . '/simplemodal/js/jquery.simplemodal.js');
    drupal_add_css(drupal_get_path('module', 'presentations') . '/simplemodal/css/basic.css');
    drupal_add_js(drupal_get_path('module', 'presentations') . '/presentations.js');
    // Ensure the user has access to edit this node.
    $node = new stdClass();
    $node->nid = $values->{$this->aliases['nid']};
    $node->status = 1; // Unpublished nodes ignore access control.
    $text = !empty($this->options['text']) ? $this->options['text'] : t('Preview');
    $attribs = array(
      'attributes' => array(
        'onclick' => "presentations_peek($node->nid, '$base_url/presentations/peek/$node->nid'); return false;",
      ),
    );
    return l($text, "node/$node->nid/edit", $attribs);
  }
}

The preview handler needs a little more work because we have to include the JavaScript files it depends on. I am not showing the JavaScript files themselves because they are rather simple: I use SimpleModal (a jQuery plugin I came across) for the overlay and SWFObject to render the presentation movie. The clone handler, shown below, is simpler:

/**
 * Field handler to clone a presentation.
 */
class views_handler_field_presentations_link_clone extends views_handler_field_node_link {
  function render($values) {
    // Ensure the user has access to edit this node.
    $node = new stdClass();
    $node->nid = $values->{$this->aliases['nid']};
    $node->status = 1;
    $text = !empty($this->options['text']) ? $this->options['text'] : t('clone');
    return l($text, "presentations/$node->nid/clone", array('query' => drupal_get_destination()));
  }
}

I am not sure whether extending views_handler_field_node_link is really necessary; I will look into this later on. The important thing is that we override the render() method of our class and make the field link look the way we want. If you want to use extra fields from your database table, you can add these fields in the constructor. If you override the constructor, you always have to call the parent's constructor as well:

function construct() {
  parent::construct();
}


To use extra fields in your render() method, you can set them in the constructor like this:

function construct() {
  parent::construct();
  $this->additional_fields['uid'] = 'uid';
}
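As a sketch (not from the original post), once uid has been registered as an additional field, the render() method can read it back through the handler's aliases array; the link target here is purely illustrative:

```php
function render($values) {
  // Views aliases additional fields in the query result; look the value up
  // through $this->aliases rather than hard-coding the column name.
  $uid = $values->{$this->aliases['uid']};
  // Hypothetical use: link the field output to the author's profile page.
  return l(t('View author'), "user/$uid");
}
```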

Later you can place these fields and sort them however you please. If you have not looked into all the features of Views yet, I can assure you that you can do the most marvelous things with this module: add ImageCache presets to show thumbnails within your custom views, for example.
I hope this was interesting for anyone. Please comment on this article if it was one of the reasons you decided to dig into Views. Playtime!

This entry was posted on Wednesday, October 22nd, 2008 at 1:27 pm and is filed under Drupal. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

Oct 21 2008

Draggables and sortables are commonly used in Drupal core: taxonomy, menu, CCK, and so on. The items that are sortable always belong to a parent. If this parent is itself listed as sortable under another parent, you get a cascading system with a maximum number of levels. The typical tree listing, together with draggable handle icons, tells users that they can drag. This belongs-to relation couples a parent with its children, which all have a positional sort-order variable; in Drupal, these are called weights. I wondered whether this could be quickly implemented in custom modules where you have this relationship. Since this is everywhere in Drupal, why not do the test now? I will try to describe how to build sortable nodes in a table view in a custom content type.

Common examples in Drupal that have this behaviour are taxonomy terms belonging to a vocabulary, and menu items belonging to another menu item or a menu. Examples of single-level sortables are taxonomy vocabularies and blocks in regions. Could this be altered so you can use it in your custom modules and content types? I will do this here with a custom content type slideshow that holds slides, which is a custom content type as well. Check "Install cck types, taxonomy, content and roles through multiple AJAX calls" if you want to see an example of how to import an exported CCK type with all its fields.

The example that goes with this is a slideshow application with slides. We hook into the slideshow node form to load a list of slide nodes. The child slides belong to the slideshow through a coupling table slideshow_slides with a common slideshow_id. slideshow_slides also holds a slide_id that relates to the slide_id of the slide table. This way, I can clone slides to be used in other slideshows as well. In the hook_load implementation slideshow_load, I fetch a list of slides and attach it to the node ($node->slides). The hook_form implementation will contain the custom form with a fieldset "slides" and a markup element in it whose value is the themed output of the slides.
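To make the data model concrete, here is a hedged sketch of what the coupling table might look like in a Drupal 5 style hook_install(); only slideshow_id, slide_id and weight are given in the text, the exact column types and the install hook itself are my assumptions:

```php
function slideshow_install() {
  switch ($GLOBALS['db_type']) {
    case 'mysql':
    case 'mysqli':
      // Coupling table: which slides belong to which slideshow, in what order.
      db_query("CREATE TABLE {slideshow_slides} (
        slideshow_id int unsigned NOT NULL,
        slide_id int unsigned NOT NULL,
        weight tinyint NOT NULL default 0,
        PRIMARY KEY (slideshow_id, slide_id)
      ) /*!40100 DEFAULT CHARACTER SET UTF8 */");
      break;
  }
}
```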

First the code to fetch slides from the database.

/**
 * Get the list of slides belonging to a slideshow.
 */
function slideshow_get_slides($slideshow_id, $max = 35) {
  static $slides = array();
  if (empty($slides)) {
    $result = db_query_range("SELECT n.nid, n.title, ss.*
      FROM {node} n
      INNER JOIN {slide} s ON n.nid = s.nid
      INNER JOIN {slideshow_slides} ss ON ss.slide_id = s.slide_id
      WHERE ss.slideshow_id = %d", $slideshow_id, 0, $max);
    while ($row = db_fetch_object($result)) {
      $slides[$row->nid] = $row;
    }
  }
  return $slides;
}

This list of slide nodes will be made sortable in the slideshow node form. The example I looked at to get started was the taxonomy vocabulary listing. The only difference is that I have to include a fieldset in the existing node form instead of building a form specifically for the purpose of sorting and listing. Most modules that use the Drupal sortables use the callback drupal_get_form, whereas our form will go through the normal process of Drupal saving a node.
In the hook_form implementation we get the form fields array for the slides from the function slideshow_overview_slides. After that we theme the fields to make them draggable and sortable within a slideshow; the function theme_slideshow_slides performs this task. The three functions are printed below.

function slideshow_form

function slideshow_form(&$node, $form_state) {
  // ... the node form fields here
  $form['slides'] = array(
    '#type' => 'fieldset',
    '#title' => t('Slides in this slideshow'),
    '#collapsible' => TRUE,
    '#collapsed' => FALSE,
    '#weight' => -4,
    '#tree' => TRUE,
  );
  $subform = slideshow_overview_slides($node->slideshow_id, $node);
  $form['slides']['slides_wrapper'] = theme_slideshow_slides($subform, $form_state);
  // Add a button to add a slide, with the slideshow nid as parameter.
  $form['slides']['slides_wrapper']['add_slide'] = array(
    '#type' => 'button',
    '#value' => t('add slide'),
    '#attributes' => array(
      'onclick' => 'javascript:window.location.href = "/node/add/slide/' . $node->nid . '"; return false;',
    ),
  );
  // ... the rest of the form fields
}

function slideshow_overview_slides

/**
 * Form builder to list and manage slides.
 *
 * @ingroup forms
 * @see slideshow_overview_slides_submit()
 * @see theme_slideshow_slides()
 */
function slideshow_overview_slides($slideshow_id, &$node = NULL) {
  $node->slides = slideshow_get_slides($slideshow_id);
  $form = array('#tree' => TRUE);
  if (!empty($node->slides)) {
    foreach ($node->slides as $slide_nid => $slide) {
      $slide_id = $slide->slide_id;
      $form[$slide_id]['#slide'] = (array) $slide;
      $form[$slide_id]['name'] = array('#value' => check_plain($slide->title));
      $form[$slide_id]['weight'] = array(
        '#type' => 'weight',
        '#name' => 'weight_' . $slide_id,
        '#id' => 'weight_' . $slide_id,
        '#delta' => 10,
        '#default_value' => $slide->weight,
        '#value' => $slide->weight,
      );
      $form[$slide_id]['delete'] = array('#value' => l(t('Delete slide'), 'node/' . $slide_nid . '/delete'));
    }
  }
  // Sorting is only needed for more than one slide.
  $node->num_slides = count($node->slides);
  if ($node->num_slides == 1) {
    unset($form[$slide_id]['weight']);
  }
  return $form;
}


/**
 * Theme the list of slides.
 */
function theme_slideshow_slides($form, &$form_state) {
  $rows = array();
  foreach (element_children($form) as $key) {
    if (isset($form[$key]['name'])) {
      $slide = &$form[$key];
      $row = array();
      //$row[] = drupal_render($slide['name']);
      $updated = $slide['#slide']['updated'] ? ' <span style="color: red;">updated</span>' : '';
      $row[] = l($slide['name']['#value'], 'node/' . $slide['#slide']['nid'] . '/edit') . $updated;
      if (isset($slide['weight'])) {
        $slide['weight']['#attributes']['class'] = 'slides-weight';
        $slide['weight'] = process_weight($slide['weight']);
        $row[] = drupal_render($slide['weight']);
      }
      $row[] = drupal_render($slide['delete']);
      $rows[] = array('data' => $row, 'class' => 'draggable');
    }
  }
  // Start the form.
  $form = array();
  $form['#type'] = 'markup';
  $form['#prefix'] = '<div id="slides_wrapper">';
  $form['#suffix'] = '</div>';
  if (empty($rows)) {
    $form['#value'] = t('No slides available.');
  }
  else {
    $header = array(t('Name'));
    if (count($rows) > 1) {
      $form['save_sortorder'] = array(
        '#type' => 'button',
        '#value' => t('Save sortorder'),
        '#ahah' => array(
          'path' => ahah_helper_path(array('slides', 'slides_wrapper')),
          'wrapper' => 'slides_wrapper',
          'event' => 'click',
        ),
      );
      $header[] = t('Weight');
      drupal_add_tabledrag('slides-overview', 'order', 'sibling', 'slides-weight');
    }
    $header[] = array('data' => t('Operations'));
    $form['#value'] = theme('table', $header, $rows, array('id' => 'slides-overview'));
  }
  return $form;
}

The save sortorder button is only visible when there is more than one item to sort. The same goes for the draggable handles. Once there are enough slides and a user touches the handle of a draggable list item, a message appears telling the user to save the sort order. The save action is performed with an AJAX call through AHAH. I was testing the ahah_helper module, so I used it for this as well; more details on that can be found here.

The slide weights are saved in the table slideshow_slides in this exercise. The Drupal sortables work with hidden select elements, which you can inspect thanks to Firebug :). In my form, I have these fields inside my fieldset slides and a slides_wrapper. The only thing we need is a value pair: the slide with its weight value. I used a quick string replace on the conventionally named field "weight_[slide_id]". This is how I save my sort order:

function slideshow_slides_save_sortorder

/**
 * Updates changes to slide weights.
 *
 * @see slideshow_form()
 * @see theme_slideshow_slides()
 */
function slideshow_slides_save_sortorder($values) {
  $sortables = array();
  $parent_slideshow_id = $values['slideshow_id'];
  if (empty($values) || $parent_slideshow_id <= 0) {
    return t('Sorry, no slides.');
  }
  foreach ($values as $keystring => $weight) {
    $key = str_replace('weight_', '', $keystring);
    if (is_numeric($key) && is_numeric($weight)) {
      $sortables[$key] = $weight;
    }
  }
  foreach ($sortables as $slide_id => $weight) {
    $sql = "UPDATE {slideshow_slides} SET weight = %d
      WHERE slideshow_id = %d AND slide_id = %d";
    db_query($sql, $weight, $parent_slideshow_id, $slide_id);
  }
  return t('Sortorder saved!');
}

Please comment on this article if you could use something from it.

This entry was posted on Tuesday, October 21st, 2008 at 8:39 pm and is filed under Drupal. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

Aug 19 2008

A simple bit of Drupal module code from yesterday: a permission is provided for each node type which can have attachments, giving a more granular permission set based on node type. Then we alter any node add/edit form and set the #access property for the attachments part of the form based on these new permissions.

Check the code out after the break, hope it's of some use somewhere. Does anyone have any thoughts on unsetting form elements like this? Is it a wise thing to do, or is there a better way? Cheers to those who commented, I've modified the snippet accordingly.

/* @file attachments_by_nodetype.module
 *
 * Simple granular permissions for uploading files per node type.
 */

/* Implementation of hook_perm(). */
function attachments_by_nodetype_perm() {
  $permissions = array();
  foreach (node_get_types('names') as $type => $name) {
    $permissions[] = "upload files to $type nodes";
  }
  return $permissions;
}

/* Implementation of hook_form_alter(). */
function attachments_by_nodetype_form_alter($form_id, &$form) {
  if ($form['type']['#value'] . '_node_form' == $form_id) {
    $form['attachments']['#access'] = user_access("upload files to {$form['type']['#value']} nodes");
  }
}

Jul 16 2008

The .htaccess file included with Drupal tells Apache to send all 404 requests to Drupal to handle. While this is great in some cases, the performance degradation can have a huge impact on a site that has millions of users.

When Drupal processes a 404, it has to bootstrap Drupal, which includes Apache loading up the PHP process, gathering all of the Drupal PHP files, connecting to the database, and running some queries. This is quite expensive when Apache can be told to simply say "Page not found" without having to incur any of that overhead.

Now you might say your site doesn't have any broken URLs because you haven't changed any. Well, that's great, but as your site grows, it is going to be a target for spammers and hackers. They are going to start requesting all sorts of files to see if they can find an exploit. Instead of bootstrapping Drupal each time to tell them that a DLL file doesn't exist, it would be much better if Apache could just say that itself, saving resources for your real users.

So, what can you do? How can you stop Drupal from handling 404s but not break modules like imagecache?

Imagecache is one of the few modules that relies on Drupal's 404 handling. It is a very smart module that automatically resizes images. Instead of resizing every single image as they are uploaded, it only resizes them when they are requested, which is great. So if we're going to tell Drupal not to handle 404s, we need to be careful not to break this highly useful module.

To see this in action, visit the ParentsClick Network and test out some 404s. You'll notice that 404s for files and Drupal paths show the same page. The following is the procedure we used to prevent Drupal from handling 404s.

A note: this functionality should really be in core, and this patch is where the necessary .htaccess code used below comes from; it has only been slightly modified to prevent Drupal from handling 404s completely. The code below is tested and working on Drupal 5.

Step 1 - Update your .htaccess file

- ErrorDocument 404 /index.php
+ ErrorDocument 404 /sites/all/themes/foundation/404.php # path to your 404 file

+  RewriteCond %{REQUEST_FILENAME} !-f
+  RewriteCond %{REQUEST_URI} !^/files/ # this makes it work with imagecache
+  RewriteCond %{REQUEST_URI} \.(png|gif|jpe?g|s?html?|css|js|cgi|ico|swf|flv|dll)$
+  RewriteCond %{REQUEST_URI} !^404.%1$
+  RewriteRule ^(.*)$ 404.%1 [L]
   RewriteCond %{REQUEST_FILENAME} !-f
   RewriteCond %{REQUEST_FILENAME} !-d
   RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

What this basically does is it removes Drupal from handling 404s (removing the /index.php part) and tells Apache to use a specific file if it encounters a 404 (like a missing image or CSS file).

Step 2 - Tell Drupal to stop on 404s too

In your template.php, inside of _phptemplate_variables(), add in this code:

// Show the custom 404 page.
$headers = drupal_get_headers();
if (strpos($headers, 'HTTP/1.1 404') !== FALSE) {
  // Make sure this path matches the ErrorDocument in the .htaccess above.
  include_once './sites/all/themes/foundation/404.php';
  // Stop Drupal from fully rendering the page.
  exit();
}

This tells Drupal to serve up the 404 page if it can't find the path. The benefit is that your designers can work on the same file that handles 404s for both Apache and Drupal. It also stops Drupal from fully executing. In Drupal 6 this could happen much earlier using the preprocess templating functions.

Step 3 - Create a 404 file

Create a 404.php file (or 404.html or whatever you want) and place it wherever you want. Make sure to update the ErrorDocument in the .htaccess to point to this file, along with the Drupal template code.

And voila!

Written on July 16, 2008
Jul 01 2008

Sometimes we'd like to list users who have registered with a Drupal site, but haven't been placed into any proper roles yet via subscriptions, purchases, membership approval or whatever.

Try this join query to get started:

SELECT users.uid, name, created, access, login, status
FROM {users}
LEFT OUTER JOIN {users_roles}
  ON users.uid = users_roles.uid
WHERE users_roles.uid IS NULL AND users.uid != 0
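As a quick illustration (the wrapper code is my own, not from the original post), the same query can be run from a module or PHP snippet with db_query() and the results printed as a list:

```php
// List every registered user that holds no role beyond 'authenticated user'.
$result = db_query("SELECT u.uid, u.name, u.created, u.access, u.login, u.status
  FROM {users} u
  LEFT OUTER JOIN {users_roles} ur ON u.uid = ur.uid
  WHERE ur.uid IS NULL AND u.uid != 0");
$items = array();
while ($account = db_fetch_object($result)) {
  $items[] = check_plain($account->name);
}
print theme('item_list', $items);
```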

Jun 21 2008

Friend Mark Bernstein promotes "software as craft" with the phrase NeoVictorian Computing. Jeremy recalls that "Part of his argument is that software creators have something to learn from the ideals of the arts and crafts movement: the software world is full of soulless bits and bytes, and maybe we would all be a little happier if we embraced handcraft ... During the talk, I remember Bernstein proposed that software creators should sign their work as a painter signs a painting, which is a lovely visual metaphor that I hope to keep around." And Greg Wilson has a book called Beautiful Code.

Happily, I already agree - they're all echoes of my own belief in "code shui", be it XML (a Morbus Rant from 2002 on "why beauty is important in computer file formats") or in code from 2004 ("His style is quite unique. [Morbus' AmphetaDesk] source reads almost like a paper, instead of terse code. He documents his code well and I've thus far found nothing that was very hard to understand. Best of all, its so un-Perl. He doesn't seem to use really clever tricks to do simple things, so the code has been very easy to understand").

May 21 2008

Just a quickie - embedding views in PHP snippets etc. Sometimes blocks or panels don't quite cut the mustard and we need to directly insert a view via some PHP.

I started off here - the method I've used before to achieve this. There's another howto on Innovating Tomorrow. However, views_build_view, and indeed theme_view, are both no longer part of Views...

After a bit of searching I found mention of a new function; it's also mentioned in the Views 2 documentation (a work in progress).

So, the resulting PHP is wonderfully simple:

print views_embed_view($view_name, $display_id = 'default');
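For example (the view name here is hypothetical), you can embed the default display, or a specific display with arguments passed along to the view:

```php
// Embed the default display of a view named 'latest_presentations'.
print views_embed_view('latest_presentations');

// Embed a specific display, passing the current node id as a view argument.
print views_embed_view('latest_presentations', 'block_1', $node->nid);
```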

I was tearing my beard out, so I hope this helps someone.

Oct 24 2006

The blog post Drupal vs. Joomla - Fight! pointed me to a discussion on the drupal-devel mailing list about the ohloh.net website which tries to gather some statistics/metrics about the code of Free Software projects. Their slogan:

Explore Open Source

Mapping the open source world by collecting
objective information on open source projects.

Anyway, their stats about Drupal and about Joomla! are flawed (for example) because they seem to include the whole contrib CVS tree of Drupal (not just the core Drupal), which is huuuge. But there's no need to use any fancy website anyway, there are Free Software tools out there which can produce some metrics, too.

The following data is generated using David A. Wheeler's SLOCCount (for the respective tarballs of the current stable releases):

Drupal 4.7.4

Total Physical Source Lines of Code (SLOC)                = 8,012
Development Effort Estimate, Person-Years (Person-Months) = 1.78 (21.34)
 (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months)                         = 0.67 (8.00)
 (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule)  = 2.67
Total Estimated Cost to Develop                           = $ 240,198
 (average salary = $56,286/year, overhead = 2.40).

Joomla 1.0.11

Total Physical Source Lines of Code (SLOC)                = 65,880
Development Effort Estimate, Person-Years (Person-Months) = 16.25 (194.94)
 (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months)                         = 1.54 (18.54)
 (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule)  = 10.51
Total Estimated Cost to Develop                           = $ 2,194,486
 (average salary = $56,286/year, overhead = 2.40).

Not that I think these numbers mean anything ;-) Lines of Code is a very, very unreliable indicator for code quality (or anything else, for that matter)...
