
Jan 29 2014

When a client doesn't have time for a glamorous, blue-sky project, what's a consultant to do?

Soon after I started working at Lullabot, I got my first client, and like all clients this one had a problem. They were a university whose site was running on Drupal 6: 2500 page nodes filled with HTML. The layout was largely managed through the WYSIWYG, and the quality of the HTML was all over the place. They wanted to take this site and migrate it to Drupal 7, with a properly designed content model and responsive layout, possibly using Panels.

In six months.

With one developer on staff.

Cautious optimism

Despite the difficult demands and daunting schedule, I was excited! It is relatively rare to be approached by someone with a solid technical background who wants to do what is architecturally right for their organization. We started off trying to get a handle on what content was out there, and what the content types and fields might look like. As we progressed, it became apparent that each department was really doing their own thing, and any attempt to build a content model was going to require some discussion to get them all on the same page.

At the same time, we were discussing issues around migration of the content, and it became apparent that getting these HTML blobs into fields was going to be a big problem. In some cases, if the HTML is tightly structured, you can automate this process by scraping the HTML and extracting the data. However in this case, with no consistency at all, turning this HTML into fielded data was going to be an almost completely manual task.
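As a rough illustration of what that automation can look like when the markup is predictable, here is a minimal PHP sketch using DOMDocument (the "summary" class is a hypothetical example, not the client's actual markup):

<?php
// Hypothetical: assumes every page wraps its summary in
// <div class="summary">...</div>, so it can be lifted into a field.
$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings from messy real-world markup
$xpath = new DOMXPath($doc);
foreach ($xpath->query('//div[@class="summary"]') as $node) {
  $summary = trim($node->textContent);
  // Assign $summary to the node's summary field during migration.
}
?>

With wildly inconsistent markup there is no reliable selector to write, which is why the task becomes manual.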

Reality

Only a couple of weeks into the project I realized that the schedule was completely unrealistic for what they were attempting to do. So I sat down with the client and we started talking about their priorities. I knew something had to give, but you can't figure out what until you know what is most important. As we talked, it became apparent that in this case the schedule was a hard constraint - the new site needed to launch in time for the start of the next school year. Beyond that, people were reasonably happy using the site, with the exception of pain around media handling.

Given this, I recommended that they simply migrate their existing architecture to Drupal 7. This would reduce the number of unknowns to a very small number (mostly related to individual module upgrades) and would give them a very basic migration path. In order to start getting some more structure around their layouts, they would start using Panelizer in some cases (like landing pages) which would give their editors more freedom to place blocks of content without having to hand code HTML. On top of that, we now also had time to address some of the problems around media handling with the addition of some modules that were new for Drupal 7, and a bit of custom code.

Many devs would look at this solution and shake their heads. You've taken a site that was not much more than hand-coded HTML shoved into a CMS, and turned it into more of the same. What a waste, what a failure!

I would respectfully disagree. As consultants, our job is not to make a site with the best possible architecture, but to make a site with the best possible architecture within the framework of the client's priorities. Knowing the kind of site this client wanted to build, I was a little reluctant to propose the solution I did, even though I knew it was the best of all the available solutions. While this client was disappointed that they couldn't build the site the way they wanted, they were also hugely relieved to have a plan that looked manageable and achievable. It allowed them to build the site in a way that enabled future upgrades as time permitted, but didn't force the investment immediately.

Success

What does a successful consulting project look like? It is a juggling act, and to some extent the rules are different for every one. One of the most important things that we as consultants can do, especially when we are devs or architects at heart, is to leave our own priorities at the door and focus on the client. What are their priorities? What are their pain points? What are their criteria for success? Taking the time to pull all of this data out of the client, and using it to craft a solution, is really the heart of our job, and for me personally, it is what gives me the most joy and satisfaction.

This is where the real puzzles are solved, where you can make the most of your experience, where you can take all the data you have and craft something the client didn't even know they wanted in the first place. Now you have a plan that makes sense and meets the client's goals, both spoken and unspoken. That, my friends, is what success looks like.

Greg Dunlap

Senior Drupal Architect


Jan 22 2014

AngularJS forms are nifty, HTML5 friendly, and incredibly fun to code!

AngularJS is an MVC JavaScript framework which elegantly separates controller, business and model logic in your application. Although it takes getting used to after years of writing server-side code, it simplifies a lot of backend logic in our projects and we've had wonderful success with it so far.

While working on the MSNBC project, we were asked to build a form that submits its results to a third-party system via an HTTP request. Following the strategy outlined in our previous article about decoupling the Drupal frontend with AngularJS, we implemented an AngularJS form that validated user input and submitted the data to an external service.

In this article, we'll walk through an example of this code and see how it works. The code is available in this GitHub repository, and you can view it in action. Let's take a look at the details.

Bootstrapping AngularJS

First, we must bootstrap AngularJS in the header of our HTML. We do that by adding a directive to the <html> tag which will define the name of our AngularJS module. We will also load three files in the <head> section:

  • The AngularJS library.
  • Promise Tracker: an AngularJS module to track a request's progress in order to display a loading alert.
  • Our AngularJS application, which will render and process the contact form.

<!DOCTYPE html>
<html lang="en" data-ng-app="myApp">
  <head>
    <title>AngularJS Form</title>
    <script type="text/javascript" src="http://code.angularjs.org/1.1.4/angular.min.js"></script>
    <script type="text/javascript" src="http://feedproxy.google.com/~r/lullabot/planet-feed/~3/CxswJHs-Z3c/processing-forms-angularjs/js/modules/promise-tracker.js"></script>
    <script type="text/javascript" src="http://feedproxy.google.com/~r/lullabot/planet-feed/~3/CxswJHs-Z3c/processing-forms-angularjs/js/app.js"></script>
    <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.0.3/css/bootstrap.min.css">
  </head>

That's it — we are ready to start using AngularJS in our page. Let's see what our form's HTML would look like.

Note: In this example we will be using data-ng-* instead of ng-* attributes when defining directives to keep the HTML W3C compliant.

Our contact form

Our form looks pretty much like any normal form, but with extra attributes that AngularJS will use. Let's start by having a quick glance at its markup, then we will dive into its details:

<div data-ng-controller="help">
  <div id="messages" class="alert alert-success" data-ng-show="messages" data-ng-bind="messages"></div>
  <div data-ng-show="progress.active()" style="color: red; font-size: 50px;">Sending&hellip;</div>
  <form name="helpForm" novalidate role="form">
    <div class="form-group">
      <label for="name">Your Name </label>
      <span class="label label-danger" data-ng-show="submitted && helpForm.name.$error.required">Required!</span>
      <input type="text" name="name" data-ng-model="name" class="form-control" required />
    </div>

    <div class="form-group">
      <label for="email">Your E-mail address</label>
      <span class="label label-danger" data-ng-show="submitted && helpForm.email.$error.required">Required!</span>
      <span class="label label-danger" data-ng-show="submitted && helpForm.$error.email">Invalid email!</span>
      <input type="email" name="email" data-ng-model="email" class="form-control" required /> 
    </div>

    <div class="form-group">
      <label for="subjectList">What is the nature of your request?</label>
      <span class="label label-danger" data-ng-show="submitted && helpForm.subjectList.$error.required">Required!</span>
      <select name="subjectList" data-ng-model="subjectList" data-ng-options="id as value for (id, value) in subjectListOptions" class="form-control" required>
        <option value=""></option>
      </select>
    </div>

    <div class="form-group">
      <label for="url">URL of Relevant Page</label>
      <span class="label label-danger" data-ng-show="submitted && helpForm.$error.url">Invalid URL format!</span>
      <input type="url" name="url" data-ng-model="url" class="form-control" />
    </div>

    <div class="form-group">
      <label for="comments">Description</label>
      <span class="label label-danger" data-ng-show="submitted && helpForm.comments.$error.required">Required!</span>
      <textarea name="comments" data-ng-model="comments" class="form-control" required></textarea>
    </div>

    <button data-ng-disabled="progress.active()" data-ng-click="submit(helpForm)" class="btn btn-default">Submit</button>
  </form>
</div>

As you can see, the above snippet is HTML on steroids that AngularJS will process. In order to manage the state of the form and its fields, we have defined a model for our form under the variable helpForm. We tell AngularJS which model the form corresponds to through the name attribute in <form name="helpForm" novalidate>. Let's dive deeper into this.

Our controller's scope

The controller is an AngularJS function that will take care of processing part of the HTML of the page. We define the scope of our controller using the ng-controller directive at <div data-ng-controller="help">. This means that inside our AngularJS application, we will implement a controller named help which will be in charge of processing the contents of this piece of HTML.

One of the main concepts that AngularJS introduces is Two-Way Data Binding between the Model and the View. You assign Model variables to the View (the HTML); then every time values change in the controller, the View is automatically updated to reflect them, and vice versa.

<div id="messages" class="alert alert-success" data-ng-show="messages" data-ng-bind="messages"></div>

The above <div> tag will be our placeholder for status messages. The ng-bind directive binds it to a variable called messages that we will populate in our controller. So when we change the variable with code like $scope.messages = 'some text'; we will see it automatically on the screen. Isn't this just freaking great?

Form validation

Each form field in our form has different validation depending on its nature. Here is the HTML for the email field:

<label for="email">Your E-mail address</label>
<span class="label label-danger" data-ng-show="submitted && helpForm.email.$error.required">Required!</span>
<span class="label label-danger" data-ng-show="submitted && helpForm.$error.email">Invalid email!</span>
<input type="email" name="email" data-ng-model="email" class="form-control" required /> 

In the above snippet we define:

  • An error message to be displayed when the field has not been filled out on submission.
  • An error message to be displayed when the contents of the field are not a valid email address.
  • The field definition, attached to a model variable and defined as required.

Given the above statements, AngularJS takes care of keeping an object with the status of each form field. That is why we can automatically toggle an error message using the ng-show directive and evaluate the field state with helpForm.email.$error.required.

Form submission

Our form submit handler will take care of the following:

  1. If the data has not passed validation, simply return. Errors will be shown automatically.
  2. If the data passed validation, prepare an object to be sent through a JSONP request.
  3. Once we get the response, evaluate it and inform the user depending on its data.

Note: JSONP is a communication technique to make HTTP requests to domains different than the current one.
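Under the hood, JSONP works by injecting a <script> tag instead of using XMLHttpRequest, and the server wraps its JSON payload in the callback function named in the request. As a sketch, AngularJS replaces the JSON_CALLBACK placeholder with a generated callback name, so a successful exchange looks roughly like this:

// Request URL (simplified): response.json?callback=angular.callbacks._0&name=...
// Response body, executed as a script by the browser:
angular.callbacks._0({"status": "OK"});

With that in mind, here is the submit handler: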

$scope.submit = function(form) {
  // Trigger validation flag.
  $scope.submitted = true;

  // If form is invalid, return and let AngularJS show validation errors.
  if (form.$invalid) {
    return;
  }

  // Default values for the request.
  $scope.progress = promiseTracker('progress');
  var config = {
    params : {
      'callback' : 'JSON_CALLBACK',
      'name' : $scope.name,
      'email' : $scope.email,
      'subjectList' : $scope.subjectList,
      'url' : $scope.url,
      'comments' : $scope.comments
    },
    tracker : 'progress'
  };

  // Perform JSONP request.
  $http.jsonp('response.json', config)
    .success(function(data, status, headers, config) {
      if (data.status == 'OK') {
        $scope.name = null;
        $scope.email = null;
        $scope.subjectList = null;
        $scope.url = null;
        $scope.comments = null;
        $scope.messages = 'Your form has been sent!';
        $scope.submitted = false;
      } else {
        $scope.messages = 'Oops, we received your request, but there was an error.';
        $log.error(data);
      }
    })
    .error(function(data, status, headers, config) {
      $scope.messages = 'There was a network error. Try again later.';
      $log.error(data);
    });

  // Hide the status message which was set above after 3 seconds.
  $timeout(function() {
    $scope.messages = null;
  }, 3000);
};

If you've ever implemented a JavaScript form submission, then the above should feel pretty familiar, but do you notice the AngularJS difference? We are displaying a response to the user by setting variables, instead of changing CSS classes or DOM elements. By altering variables in our scope, we are automatically altering the View. Angular handles updating the HTML that the user sees.

Extending our application

We intentionally saved one bit of our form until the end: an AngularJS module to track the progress of the form submission. Specifically, the angular-promise-tracker module was added in the <head> tag. In our view, we reference it in two places: first to display a Sending… message while a request is in flight, and then to disable the submit button:

<div data-ng-show="progress.active()" style="color: red; font-size: 50px;">Sending&hellip;</div>

<button data-ng-disabled="progress.active()" data-ng-click="submit(helpForm)" class="btn btn-default">Submit</button>

In our controller, we start by adding the module as a dependency of our custom module, then injecting it into our controller:

angular.module('myApp', ['ajoslin.promise-tracker'])
  .controller('help', function ($scope, $http, $log, $timeout, promiseTracker) {

// Default values for the request.
$scope.progress = promiseTracker('progress');
var config = {
  params : {
    'callback' : 'JSON_CALLBACK',
    'name' : $scope.name,
    'email' : $scope.email,
    'subjectList' : $scope.subjectList,
    'url' : $scope.url,
    'comments' : $scope.comments
  },
  tracker : 'progress'
};

The module will take care of updating the status of the progress object depending on the response data, and the View will be updated accordingly.

Go try it out!

Now, it is your turn to try this approach out. You can see the whole example and its source code at the GitHub repository.

Juan Pablo Novillo Requena



Jan 15 2014

Oftentimes, I run into issues with drush commands that need more debugging power than dpm() provides. In search of a way to debug PHP scripts from the CLI, or drush commands more specifically, I stumbled upon PHPStorm's Zero-configuration Debugging, which turned out to be perfect for the job.

First, you will need Xdebug installed. http://xdebug.org/ has some excellent documentation on installing Xdebug. For OS X users, I would recommend using Homebrew with the formulae at https://github.com/josegonzalez/homebrew-php.

In the CLI, we will need to set the XDEBUG_CONFIG variable.

In bash,

export XDEBUG_CONFIG="idekey=PHPSTORM"
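To confirm Xdebug is actually loaded before going any further, you can ask PHP directly (exact version strings will vary):

php -v            # the version banner should mention "with Xdebug"
php -m | grep -i xdebug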

Once Xdebug is installed and the XDEBUG_CONFIG variable set up, start a new project in PHPStorm. Click on the Magic Button to "Start Listen PHP Debug Connections".

Start Listen PHP Debug Connections

In the CLI, we can then run any drush command inside the Drupal docroot, and a breakpoint should trigger on the first line of drush.php.

Breakpoint
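For example, with PHPStorm listening and the variable exported, any everyday command will do; clearing caches is a handy low-risk one to test with:

cd /path/to/drupal
drush cc all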

Set up breakpoints and debug like you normally would. As long as PHPStorm is listening for a connection and the XDEBUG_CONFIG variable is set, any PHP script run on the CLI will trigger the debugger to break on the first line of the script. Once you are done with debugging, click the Magic Button again to "Stop Listen PHP Debug Connections".

Drush commands always trigger a break at the first line, unless drush is included in the project. When that gets a little old, uncheck "Force break at the first line when a script is outside the project" to stop the break at the first line.

Force break at the first line when a script is outside the project

I am in the debugger so much that I ended up setting xdebug.idekey permanently in my php.ini.

xdebug.idekey = "PHPSTORM"

That way the XDEBUG_CONFIG variable is no longer necessary. In fact, this way any PHP activity, including browsing a local site, will trigger the debugger as long as PHPStorm is listening for a connection.
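For reference, the relevant php.ini section might look something like this (the zend_extension path is an example and will differ on your machine; xdebug.remote_enable must be on for the debugger to connect at all):

zend_extension = "/usr/local/opt/xdebug/xdebug.so"
xdebug.remote_enable = 1
xdebug.idekey = "PHPSTORM"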

Angus Mak

Senior Developer



Jan 10 2014

Jeff Eaton and Karen McGrane look back at the events and content strategy trends of 2013, and make their predictions for the coming year. Along the way, they discuss the challenge of content marketing overload, the future of WYSIWYG editors, the evolution of content migration tools, and more.

Jan 08 2014

Ideas for how to gracefully retire (or semi-retire) a Drupal site using HTTrack and GitHub Pages.

Drupal is a great tool for creating a site. It has lots of modules and functionality that allow you to build interesting and complex features. But sometimes those sites lose their relevancy. It's a site for an event that has passed, for instance. Or a site for a topic that was really important at one time but now is mostly useful as a reference for the content it contains. Or it's a site you just don't have time to keep on top of. In all these cases you could just take the site down entirely, but often it contains useful information that you'd like to keep online, and if there are other people linking to it, it would be nice not to break all those connections.

But maintaining an inactive Drupal site can be a pain. There is a constant stream of security releases that you need to apply. And it's really maddening if you apply a security release to an inactive site only to find out the release contains other changes that break things that used to work, so that you have to spend time trying to get that inactive site working again. Not to mention that it's expensive to pay for hosting that can securely deploy Drupal sites if you aren't even using any Drupal interactivity any more.

One solution is to convert the site to static HTML pages. A site serving up only static pages, with no database or Drupal back end running, is likely to be pretty secure. And it will serve pages very quickly as well.

My Solution: HTTrack and GitHub Pages

There are various ways to accomplish this. You can use wget to spider a site and copy pages, or try out the Drupal Boost module (which creates static pages but still requires that Drupal be installed behind it). I finally settled on a solution that uses HTTrack to spider my Drupal site and create static pages without any dependency on Drupal. To serve those pages I will use GitHub Pages. I'm already using GitHub and GitHub Pages are free. GitHub Pages can be used to deploy Jekyll sites, but Jekyll is perfectly happy to serve up static HTML, so I don't have to do anything but create functional HTML pages to get this solution working.

I created a project on GitHub to try this idea out. I created the original Drupal site, Save My Airport, as a protest when the FAA announced they were going to close the control towers at dozens of smaller airports as a cost-cutting move. My airport was one of the ones affected, but I was equally incensed about the impact on other small airports, so I did what I do: I created a web site about the problem. The problem has receded in urgency, but is likely to re-emerge because they didn't come up with a permanent solution. So what I really want to do is semi-retire the site. I can re-deploy it later if necessary.

It's a fairly complex site, created using Panels and lots of views. I used Feeds to pull in statistics about all the airports in the country and created a page for each airport with a map, traffic and other statistics, and information about what FAA actions affected it. I also pulled in links to news about the topic from all over the web, and there are a couple paged views of airports and news.

pagermaps

Transforming all this to static pages would not be a walk in the park.

Inactivate the Site

There are several steps to take with any site that is not going to get regular attention, whether or not you are going to archive it or create static files. These include:

Clean up any Views views

  • Remove exposed filters
  • Remove clickable table column headers
  • Don't use ajax

Other Tasks

  • Disable all comments (or only use third party comments, like Disqus or Facebook comments)
  • Remove the contact form
  • Disable search (or only use third party search, like Google)
  • Remove login and user blocks
  • Make sure js and css aggregation are turned on

One final task is to make sure no error messages will appear in your static content. Find the following in page.tpl.php and either remove it or comment it out while you're spidering the site:

<?php
print $messages;
?>

Finally, review the site as an anonymous user to see if there are any other elements that won't work if Drupal is not actively running in the background.

Create GitHub Page

I started by creating a new repository and setting it up to use GitHub Pages. I just followed the instructions to create a simple Hello World repository to be sure it was working. Basically it's a matter of creating a branch called "gh-pages" in the repository, and then committing an index.html file that echoes back "Hello World".
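In git terms, that boils down to something like the following (the repository name is illustrative):

git clone git@github.com:USER/savemyairport.git
cd savemyairport
git checkout --orphan gh-pages
git rm -rf .
echo "Hello World" > index.html
git add index.html
git commit -m "Initial GitHub Pages commit"
git push origin gh-pages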

Create Static Pages with HTTrack

The easiest way to install httrack on a Mac is with Homebrew:

brew install httrack

I spent some time trying to find the ideal way to use HTTrack from the documentation. I finally came up with the following command. Change into the new GitHub Pages directory on your machine, and execute the following command:

httrack http://LOCAL_URI -O . -N "%h%p/%n/index%[page].%t" -WqQ%v --robots=0

One of the biggest problems of transforming a dynamic site into static pages is that the URLs must change. The 'real' URL of a Drupal page is 'index.php?q=/news' or 'index.php?q=/about', i.e. there is really only one HTML page that dynamically re-renders itself depending on the requested path. A static site has to have one HTML page for every page of the site, so the new URL has to be '/news.html' or '/news/index.html'. The good thing about the second option is that incoming links to '/news' will automatically be routed to '/news/index.html' if it exists, so that second pattern is the one I want to use.

The -N flag in the command will rewrite the pages of the site, including pager pages, into the pattern "/about/index.html". Without the -N flag, the page at "/about" would have been transformed into a file called "about.html".

The pattern also tells httrack to find a value in the query string called "page" and insert that value, if it exists, into the URL pattern in the spot marked by [page]. Paged views will create links like "/about/index2.html" and "/about/index3.html" for each page of the view. Without specifying this, the pager links would have been created as meaningless hash values of the query string. This way the pager links are user friendly and similar (but not quite identical) to the original link URLs.
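To make the mapping concrete, here is roughly how a few Drupal paths come out under that -N pattern (paths are illustrative):

/about                    ->  /about/index.html
/about (pager, page 2)    ->  /about/index2.html
/airports/ryan-field      ->  /airports/ryan-field/index.html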

Shortly after the process starts it will stop and ask you a question about how far to go in following links. I answer '*' to that question:

question

I ran HTTrack on a local version of my site and it took about a half hour to spider the site and create about 2,000 files, including pages for every airport and news item and every page of my paged views. You can use HTTrack across the network on the live site url, but that would be very slow, so it makes sense to do this on a local copy if possible.

Watch the progress as it goes to see what sections of the site it is navigating into. The '%v' flag in the command tells it to use verbose output.

verbose

If you see it veering into sections you don't want saved, you can add something like the following to keep it out of a particular sub-section:

-/news*

I then committed this to the gh-pages branch of my repository, and in a few minutes I could view the result at http://karens.github.io/savemyairport.

There was one final bit of clean up to make. Although incoming links to /airports/ryan-field will now work, internal links still look like this in the HTML:

/airports/ryan-field/index.html

A quick command line fix to clean that up is to run this, from the top of the directory that contains the static files:

find . -name "*.html" -type f -print0 | xargs -0 perl -i -pe "s/\/index.html/\//g"

That will change all the internal links in those 2,000 pages from "/airports/ryan-field/index.html" to "/airports/ryan-field/", and I now have a static site that pretty closely mirrors the file structure and URL pattern of the original site.

The final step is to have the old domain name redirect to the new GitHub Pages site. GitHub provides instructions about how to do that.
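In short, that means pointing the domain's DNS at GitHub and committing a CNAME file to the gh-pages branch, something like this (the domain is illustrative):

echo "www.savemyairport.com" > CNAME
git add CNAME
git commit -m "Add custom domain"
git push origin gh-pages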

Next Steps

For some sites, this is all there is to do. The sites are now retired and will never change again. The Drupal site that created it can be taken down and these pages can live on as a permanent archive of the site.

But in the case of a semi-retired site there is the question of how to make occasional changes in the future.

My current plan is to maintain the local Drupal installation but keep it offline. If I want to make changes in the future, I'll update my local site and then re-generate the static pages using the method above. Since Drupal is not publicly available, I won't have to update or maintain it, or worry about security updates, as long as it works well enough to re-generate the site when necessary. Each time I make changes locally I'll have to re-generate the static pages using HTTrack and push the changes up, but if I'm not making changes very often that will work out fine and it preserves the option of bringing the site back up as a Drupal site in the future if events warrant.

Another idea for a site in semi-retirement is to use HTTrack to actually transform it into a Jekyll site, where the static pages can live on as-is, but I can periodically add some new content to the News section. That is another intriguing idea that I'll explore in another article.

If you're interested, you can view Save My Airport, which is now a fully static site hosted on GitHub Pages, created from a Drupal site using the process outlined above.

Karen Stevenson


Jan 06 2014

Assign a role to any user after an arbitrary amount of time

It is not uncommon to want to assign a role to a user after they have been a member-in-good-standing of a site for a period of time. Perhaps this role will grant them access to do some content moderation, or gain access to new features like voting on posts. The Role Delay module was created for just this purpose, and it fulfills its 'role' admirably.

Picture of the role edit form with the Role Delay field

On installation, the Role Delay module adds a field to the role edit form which allows you to specify an arbitrary delay after which the role will be added. This delay can be specified in any format that strtotime() recognizes, which makes it enormously flexible. You can set a different delay for each role if needed, to allow permissions to be gradually assigned over time. Once a delay is set, it will appear next to the role in the role listing screen.

Picture of the role listing screen showing the configured delays
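Because the delay runs through strtotime(), any relative-date string that function understands is fair game; a few illustrative values you might enter:

+1 day
+2 weeks
+3 months
+1 year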

One thing the module does not do is assign these roles to users that already exist in the system, so if you add it to a site and want to assign the role retroactively you will have to script that yourself. Beyond that, Role Delay works well and is a great addition to many community sites.

Greg Dunlap

Senior Drupal Architect





Dec 13 2013

Jeff Eaton and Eileen Webb discuss the unique challenges faced by values-driven nonprofits, the ups and downs developers experience when diving into the content strategy world, and the best way to integrate Ducklings into site-building.

Dec 04 2013

How we kept content strategists and developers in sync when building MSNBC.com

In the world of content strategy, spreadsheets are a critical tool for planning and communication. In particular, content types are often defined and refined in spreadsheets before they're committed to code or CMS configuration.

The challenge comes once everyone "agrees" and the content types are implemented. If the model is updated in any way, it's easy for them to fall out of sync. The CMS is tweaked, but the spreadsheet is never updated to match, or decisions are made by the content team and entered in the spreadsheet but they never make it to the CMS configuration. In addition, if developers misunderstand the spreadsheet or make mistakes when implementing the content types, the mismatch can be easily overlooked.

While working on the recently launched redesign of MSNBC.com, we found ourselves in just that situation. The back-and-forth fixes between the content modeling people and the developers turned into an ongoing time-sink and everyone was frustrated.

The Solution

I hate doing work that computers can do better. Faced with this spreadsheet/CMS synchronization challenge, I built a tool that handles it automatically: the CheckSheet module for Drupal. It takes a spreadsheet describing a site's content types and fields, compares it to a Drupal site's content type settings, and flags any discrepancies for review. It provides an admin screen on the Drupal site for site builders and a Drush command for those who prefer the command line.

In the screenshot above, the Article and Page content types are both out of sync with the spreadsheet. For example, the Page's body field should be required, and it should have a publish_date field -- the CheckSheet module spotted that mismatch and alerted us.

How it works

During the MSNBC development process, we used Google Docs to store the "master" spreadsheet: it was treated as the canonical source for information on our content types. If the CMS didn't match the spreadsheet, we assumed the CMS was wrong. We exported this spreadsheet to .ods format, and checked it into the project's source control tree to preserve a historical record of the type definitions.

Whenever someone wanted to verify that the site was "in sync," they ran the Drush command or checked the admin page to spot mismatches. Because it's available as a drush command, it's relatively easy to make it part of an automated testing and continuous integration process. It's still up to the developers to fix the mismatches, but the process of spotting them is unambiguous and easy to document.

Use it, improve it, and share your techniques

The CheckSheet module is a quick-and-dirty tool to simplify our work, not a polished product: it assumes a very specific format for the spreadsheet. It can verify the name, help text, data type, required flag, and "single/multiple value" setting for any given field. Additional columns can be added to the spreadsheet to store more information for documentation purposes, but the module will ignore them.
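As an illustration only (not the module's exact template), one row per field with those columns might look like:

Content type  Field name     Help text            Data type  Required  Multiple
article       field_byline   Author display name  text       yes       no
page          publish_date   Date to go live      date       yes       no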

It also assumes that the spreadsheet is in .ods format, and located in the actual module directory: if you're using Excel, Pages, or Google Docs you'll need to export to .ods before the module can parse it. In the future, we'd like to integrate it with Google Docs directly… for our work on MSNBC.com, though, it served its purpose well.

The CheckSheet module is currently living in Lullabot's GitHub repository, and includes an example .ods format spreadsheet that demonstrates how a few content types can be defined. Give it a spin, add features, and post ways that your team has helped keep strategists, architects, and developers working together smoothly.

Sally Young



Oct 25 2013

In this podcast Larry Garfield, Alex Bronstein, Juampy, and Joe Shindelar join Kyle Hofmeyer to discuss why they are excited about Drupal 8. From the changes to the block system to the subtle but excellent polish on the UI, you can certainly tell that Drupal 8 is coming together, and there are many reasons to get excited about it. HTML5, REST APIs, Views, and Twig: Drupal 8 core truly makes Drupal usable out of the box for the very first time. Getting excited about Drupal 8 may be the theme of this podcast, but you will also take away a lot about what Drupal 8 has to offer by giving this a full listen. And if that isn't enough, you'll at least discover why Drupal 8 is like a donkey, human, ant, and a chameleon.

Oct 21 2013

After a fall hiatus, Insert Content Here is back for a second year of content strategy and digital publishing goodness!

In episode 19, Jeff Eaton and Harvard's Mike Petroff talk about the recent redesign of the Harvard Gazette on WordPress, the challenges of serving a large university's communications needs, and trends in social publishing and visitor interaction.

Oct 17 2013

How to speed up Drupal by migrating server-side logic to the browser

At Lullabot, we always aim to make sites as performant and maintainable as possible. Recently, we've started to decouple bits of logic from Drupal and move them to the client's browser using JavaScript.

Let's look at an example. We want to display the weather for a given city on our website. This involves:

  1. Calling a public API with some parameters. We have chosen OpenWeatherMap for this example.
  2. Extracting weather data from the response.
  3. Showing the data in the browser.

The result would look like the following:

In Drupal, we could create a block that uses drupal_http_request() to fetch the data, then passes its results to a theme function that renders it. That is simple and maintainable, but why does Drupal need to take care of this? There is no database involved, nor session management. If our site relies on caching to improve performance, we'll have to clear that cache whenever the block's content is updated.
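For comparison, here is a minimal sketch of that server-side version (drupal_http_request() and drupal_json_decode() are standard Drupal 7 APIs; the weather_status theme function and the hardcoded query are placeholders):

<?php
// Fetch and render the weather server-side: every page load (or cache
// rebuild) pays for the HTTP round trip inside Drupal.
$response = drupal_http_request('http://api.openweathermap.org/data/2.5/weather?q=Madrid&units=metric');
if ($response->code == 200) {
  $data = drupal_json_decode($response->data);
  $block['content'] = theme('weather_status', array('data' => $data));
}
?>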

Instead, let's move this to pure JavaScript and HTML so the client's browser will be the one in charge of fetching, processing and caching the data.

Meet AngularJS

AngularJS is an MVC JavaScript framework which elegantly separates controller, business and model logic in your application. Although there is a lot to learn, it removes a lot of backend logic in our Drupal projects and we've had wonderful success with it so far.

Bootstrapping our AngularJS application

Let's start by adding a directive to bootstrap our AngularJS application. Add the following attribute to your html.tpl.php file:

<html data-ng-app="myapp" xmlns="http://www.w3.org/1999/xhtml" xml:lang="<?php print $language->language; ?>" version="XHTML+RDFa 1.0" dir="<?php print $language->dir; ?>"<?php print $rdf_namespaces; ?>>

The attribute data-ng-app="myapp" is telling AngularJS to bootstrap our application named "myapp". For the moment this is all we need, so let's move on. We will implement our AngularJS application later.

Rendering the skeleton in Drupal

Our custom Drupal module contains some simple code that implements a block. The mymodule_block_view() function also includes a JavaScript file (the AngularJS controller) and a template which holds the markup that the AngularJS controller will use:

/**
* Implements hook_block_view().
*/
function mymodule_block_view($delta = '') {
  $block = array();
  switch ($delta) {
    case 'weather':
      $path = drupal_get_path('module', 'mymodule');
      $block['subject'] = t('Weather status');
      $block['content'] = array(
        '#theme' => 'weather_status',
        '#attached' => array(
          'js' => array(
            'https://ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js',
            $path . '/mymodule.js',
          ),
        ),
      );
      break;
  }
  return $block;
}

That is all that Drupal will do: build the foundation.

Processing in the browser

When the page is delivered to the client, the AngularJS controller will kick in early to fetch the data from OpenWeatherMap, then process it for the view:


/**
* Renders the weather status for a city.
*/
var app = angular.module('myapp', [])
.controller('MyModuleWeather', function($scope, $http, $log) {
  // Set default values for our form fields.
  $scope.city = 'Madrid';
  $scope.units = 'metric';

  // Define a function to process form submission.
  $scope.change = function() {
    // Fetch the data from the public API through JSONP.
    // See http://openweathermap.org/API#weather.
    var url = 'http://api.openweathermap.org/data/2.5/weather';
    $http.jsonp(url, { params : {
        q : $scope.city,
        units : $scope.units,
        callback: 'JSON_CALLBACK'
      }}).
      success(function(data, status, headers, config) {
        // Keep the full response on the scope so the template can read data.name.
        $scope.data = data;
        $scope.main = data.main;
        $scope.wind = data.wind;
        $scope.description = data.weather[0].description;
      }).
      error(function(data, status, headers, config) {
        // Log an error in the browser's console.
        $log.error('Could not retrieve data from ' + url);
      });
  };

  // Trigger form submission for first load.
  $scope.change();
});

Rendering the results

Our template simply references our controller (that is how AngularJS does the binding) and outputs the variables we set in the $scope object previously.

<div ng-controller="MyModuleWeather">
  <label for="city">City</label>
  <input type="text" ng-model="city" /></br>
  <label for="units">Units</label>
  <input type="radio" ng-model="units" value="metric"/> Metric
  <input type="radio" ng-model="units" value="imperial"/> Imperial</br>
  <button ng-click="change()">Change</button>
  <h3>{{data.name}}</h3>
  <p>{{description}}</p>
  <p>Temperature: {{main.temp}}</p>
  <p>Wind speed: {{wind.speed}}</p>
</div>

There you have it! We have a fully functional block that is processed in the browser. If we apply this pattern to other frequently-changing blocks on the page, we'll be able to simplify the work that Drupal does, make the page's caching more efficient, and achieve better performance. You can even use this pattern to lazy load content that varies from user to user, making the rest of the page easier to cache.

On consuming external APIs

Whenever you are building a page on one domain and requesting data from another in the client's browser, remember that browser security mechanisms can sometimes stand in the way. There are two popular ways of overcoming this. One is Cross-Origin Resource Sharing: of course, there is a module for that on Drupal.org. The other method is using JSONP. That's the method we used in this example, and it is supported by AngularJS and jQuery.
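For reference, CORS simply requires the remote server to whitelist your origin in a response header; once that header is present, a plain $http.get() works without JSONP. The permissive form looks like this:

Access-Control-Allow-Origin: *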

Why not build it with jQuery?

Technically, it is possible to build the same functionality using jQuery. However, it would require more code: you would have to take care of hiding the template while the page is being built, define a listener for the submit button, sanitize data, and bind it to the template yourself. Even with such a simple example, AngularJS offers a simpler, more structured approach. It's also possible to use jQuery within AngularJS code.

Next steps

Juan Pablo Novillo Requena


Oct 07 2013

Keep your content types in order, and simplify your views

By the time most large Drupal sites have been around for a year or two, they've accumulated a menagerie of content types. Articles, press releases, product pages, reviews, biographies, landing pages, home pages, promo rotators, photo galleries, and more litter the list of content, and making sure they're treated consistently can be a problem.

A view that was created last year, for example, might have displayed just the right list of content types then, but who's making sure the new stuff gets handled properly? The Content Type Groups module can simplify that problem: it lets administrators define groups of content types for organizational purposes.

Picture of the Content Type Groups admin screen

Once installed, the module adds a new administration page to Drupal's Structure section. Admins can define new Groups and assign content types to them -- they're not exclusive, so one content type can belong to multiple groups if it makes sense. The most important impact of these groups is felt when building Views. If you're adding content type filters (or contextual filters) to a listing page, you can use a Content Type Group filter instead. Building a view that lists "All news content types" or "All user-created content types" rather than specifying each type individually makes it easier to ensure that future changes and additions will ripple through to previously-created views.

Picture of the Content Type Groups views filter page

While the concept is simple, the module's execution is perfect. The admin screens are self-explanatory and easy to use, the Views integration works nicely, and the content type groups can be exported as part of a Feature module. In the future, it would be great to leverage group information to organize the often-overwhelming "Add New Node" overview page. For now, though, Content Type Groups is a great tool for future-proofing Views and documenting the intended purpose of a site's varied content types.

Jeff Eaton

Senior Digital Strategist



Oct 04 2013

Last week a number of us went to beautiful Prague, Czech Republic to immerse ourselves at DrupalCon. The city was amazing, and the conference was a blast. In this episode Addi, Kyle, Juampy, and Micah share their experiences, favorite sessions and news, and give a general rundown on how DrupalCon Prague turned out. We also have a bit of a discussion on European versus US DrupalCons, and talk about some other events that are coming up.

Oct 03 2013

Killer Keynotes, Drupal 8, and Lullabots Galore

This year's European DrupalCon took the community to the beautiful city of Prague, in the Czech Republic. From September 23rd to 27th, Drupal users and contributors from around the world gathered for training, code sprints, heated technical discussions, and a peek into what's coming in Drupal 8.

Hugging Druplicon

Taking the Community's Temperature

Drupal 8 was definitely the topic on everyone's mind: it's on track to have the longest development cycle of any Drupal release in history, with a set of new features and underlying API changes to match. The past several months have seen heated public discussions about the state of D8, the difficulty of porting modules to its new APIs, and uncertainty around fundamental architecture changes.

Prague offered a host of sessions focused on explaining the new APIs, bringing developers up to speed on module syntax changes, and showing site-builders the new out-of-box capabilities that will ship with Drupal 8. While there are still quite a few major architectural issues to iron out before Drupal 8 is ready for prime time, this conference gave many developers their first extended look at the new version's code.

Over the years, the conference's Keynote sessions have turned into a chance for the community to hear from important voices outside of the Drupal world. Strategist Lisa Welchman offered a deep analysis of collaboration and decision-making in large communities, and experience design specialist Aral Balkan preached the importance of conscious, user-focused product design for open source projects that want to make a difference.

Lullabots' Sessions

Lullabot and Drupalize.Me were both out in force at the conference: Emma Jane Westby and Joe Shindelar kicked off the first day with advanced Git training, while Addison Berry officiated the Community Summit. Meanwhile, Jeff Eaton spoke to members of the Large Scale Drupal initiative about the future of device-independent content.

Later in the week, more 'Bots piled on! Micah Godbolt's session on creating responsive prototypes with Angular.js packed the room; Joe Shindelar's session on the importance of ramping up Drupal 8 training helped kick off an extended discussion of D8 developer resources; Nate Haug showed off the latest improvements to the WebForm module; and Emma Jane Westby dug deep into the reasons git frustrates new developers and old hands alike.

Micah Godbolt talking Angular.js

Outside of the main session schedule, Sally Young and Juan Pablo "juampy" Novillo Requena hosted a Birds-of-a-Feather session on using Drupal as a content API for client-side web apps. The Create-API project and the Angular Seed project demonstrate the front and back end components of the technique, and further articles about the approach should be hitting Lullabot.com soon. Jeff Eaton and Roy Scholten also revisited the Snowman install profile project in a core conversation about leveraging Drupal 8's site-building capabilities.

Working With Other Projects: Symfony and CKEditor

One of the common themes of the Drupal 8 development process has been "getting off the island" -- collaborating with other open source projects and leveraging proven code from other communities, rather than building everything from scratch. Drupal's 2007 adoption of jQuery was an early success story for this approach, but its server-side PHP code has rarely included work from other projects. That's changing in Drupal 8, with heavy use of libraries from the Symfony2 project, and tight integration of the CKEditor WYSIWYG tool.

Representatives from both projects gave several presentations in Prague. The CKEditor team gave in-depth tours of the editor's capabilities and its tight integration with Drupal for both site-builders and developers; even Eaton, normally suspicious of 'WYSIWYG' tools, was won over by its customization options and rich support for semantic markup.

Picture of Eaton hugging CKEditor lead

Meanwhile, Symfony project lead Fabien Potencier offered up two sessions for Drupal developers trying to understand the ins and outs of the framework. Standardization, The Symfony Way covered new object-oriented patterns introduced in D8, and Twig: A Templating System for Web Designers gave PHPTemplate die-hards a gentle introduction to D8's new theming language.

Community Sprints

DrupalCon community tools

The sprint sessions before and after the conference also gave attendees an opportunity to roll up their sleeves and help with the D8 development process. Addi, Joe, and Emma helped acclimate newcomers with a half-day Community Tools Workshop, preparing them for the Drupal.org issue queue and community's standard approaches to testing and problem-solving. According to Dries Buytaert's keynote, Drupal 8 has already received code contributions from more than 1500 contributors, a new record for the project.

DrupalCon code sprint

…Next year, in Amsterdam

For those who couldn't make it -- or who had to make hard choices between multiple sessions in the same slot -- video (and, in many cases, slides) from every session have been posted on the DrupalCon site. As always, DrupalCon closed with the announcement of next year's European location: the city of Amsterdam! The drop, as they say, is always moving…

DrupalCon Group Photo

Sep 30 2013

A number of popular base themes provide useful tools to help in front-end development. These range from simple things, such as disabling the theme registry so you don't need to clear the cache when adding new templates or functions, to giving you complete control over how and where your CSS and JavaScript are added to the page.

The Magic Module consolidates a number of these useful tools into one place.

Picture of the theme settings page with the Magic module installed

Instead of each theme re-implementing useful front-end functionality, Magic moves it all into the module space. Currently it provides some really handy features for themers, including:

  • Enhancements to CSS Aggregation
  • Explicitly exclude CSS and JS files
  • Move JavaScript to the footer
  • Rebuild Theme Registry on Page Reload
  • Display a Viewport Width indicator
  • Backport of Drupal 8 JavaScript handling (Watch Théodore Biadala's DrupalCon Prague presentation on Upgrading your JS to Drupal 8 for a sneak preview)

The module will also allow each of your themes to export a set of Magic settings. Visit the "Settings" page of your theme for an example (e.g. /admin/appearance/settings/bartik).

Now your custom themes can quickly and easily take advantage of a whole host of front-end development tools!

Sally Young


Sep 25 2013

Building on OS X's Standard Services for Local Development

I recently got a new Mac and needed to configure it as a local web server for the many Drupal sites I work on. I used to use MAMP for this, but lately have been using the built-in functionality that comes on a Mac instead. MAMP is easy to install, but it creates a duplicate version of PHP and a duplicate version of Apache. That takes up space on my machine and occasionally causes trouble when some operation uses the wrong version of PHP because of confusion about which installation should take precedence. Setting up a Mac without MAMP used to be sort of complicated, but it's been getting easier and easier with every version of Mac OS, and it's not that hard any more. I thought I'd share the process I'm using now.

Helper Apps and Terminal Window

To start with, I install a couple of apps that make all of this much easier to do. Pathfinder is a replacement for Mac's Finder that adds a number of nice improvements. The most important feature is that you can view invisible files with it. Once installed, go to the View option in the menu bar at the top of the screen, and check the option to view invisible files.

The other handy app I depend on is Text Wrangler. What I especially like about it is that it makes it easy to edit protected files. When I try to edit a protected file using Text Wrangler, it asks if I want to unlock the file. If I say yes, the file is unlocked while I make my changes, then reset to its previous permissions. Without this tool I would either have to edit in my terminal window using sudo, or keep changing the permissions on the file before I edit and then change them back again afterward.

You need to use the terminal window for many of these steps. To find that, click on the Launchpad icon in the dock, then choose Others and then Terminal.

PHP

Next install PHP. This is easy: it's already installed! To confirm that, type the following to see where it is located:

which php

And type the following to see what version is installed:

php --version

Set up php.ini, if it doesn't already exist, by copying php.ini.default:

sudo cp /etc/php.ini.default /etc/php.ini

You may need to edit it to do things like increase memory.
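Typical tweaks look like this (values are a matter of taste):

memory_limit = 256M
max_execution_time = 120
upload_max_filesize = 32M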

Apache

Apache is also already installed. There is a default web root located at

/Library/WebServer/Documents/

You can put a web root in other places, but that makes configuration more complicated, so I've been leaving that alone.

To start Apache, using the terminal type:

sudo apachectl start


To stop Apache:

sudo apachectl stop


To restart Apache:

sudo apachectl restart

To test this, start Apache and go to http://localhost in a browser.

You should see 'It works!'

You'll be dropping your Drupal files in the web root, so add a bookmark to that using Pathfinder. Go to

/Library/WebServer/Documents/

Then choose Go from the menu at the top, then Favorites and Add to Favorites.

Now you have a quick bookmark to the web root. Go to that location to add the Drupal files for your web site.

There are a couple of final tweaks to Apache. One is to enable its handling of PHP by enabling the PHP module. Using Pathfinder, navigate to

/private/etc/apache2/httpd.conf

Using Text Wrangler, uncomment the line in that file by removing the '#' in front of it:

LoadModule php5_module libexec/apache2/libphp5.so

If you want to use virtual hosts, set them up in

/private/etc/apache2/extra/httpd-vhosts.conf

Edit httpd.conf and remove the '#' in front of the following line so the virtual hosts get used:

Include /private/etc/apache2/extra/httpd-vhosts.conf
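
A minimal virtual host entry might look like this (the host name and path here are just examples; use your own):

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot "/Library/WebServer/Documents/mysite"
</VirtualHost>

If you use made-up names like mysite.local, remember to point them at 127.0.0.1 in /etc/hosts.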

Finally, to get clean URLs working, find and change the following in httpd.conf. Find:

<Directory "/Library/WebServer/Documents">
    ...
    AllowOverride None
    ...
</Directory>

And change it to:

<Directory "/Library/WebServer/Documents">
    ...
    AllowOverride All
    ...
</Directory>

Homebrew

The easiest way to do the remaining tasks is to install Homebrew. In a terminal window type the following:

ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"

Once it's installed, keep running brew doctor until all errors are fixed. The output should be pretty self-explanatory.

MySQL

The dead-easy way to install MySQL is with Homebrew. Once you have that, just type:

brew install mysql

Check where it's located and which version you have:

which mysql
mysql --version

Confirm that it's working:

mysql

MySQL is set up without a my.cnf file. You may want to create one at /etc/my.cnf with your preferred configuration settings.
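
A bare-bones my.cnf might look something like this (the settings and values are only examples):

[mysqld]
max_allowed_packet = 32M
innodb_buffer_pool_size = 256M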

MySQL is also set up without a root password by default. You should set a MySQL root password using your terminal window, where 'ROOT_PASSWORD' is whatever password you want to use:

cd /usr/local/share/mysql
mysqladmin -u root password 'ROOT_PASSWORD'

One last step. Some programs expect to find the mysql.sock file under /var, and it isn't there by default. Using the terminal window:

sudo mkdir /var/mysql
sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock

PHPMyAdmin

Use Homebrew to install phpMyAdmin, which requires some additional dependencies. In the terminal window type:

brew tap homebrew/dupes
brew tap josegonzalez/homebrew-php
brew install phpmyadmin

Once that's installed, in the terminal type:

sudo cp /usr/local/share/phpmyadmin/config.sample.inc.php /usr/local/share/phpmyadmin/config.inc.php

Use Pathfinder to navigate to the Apache httpd.conf file at:

/etc/apache2/httpd.conf

Edit it using TextWrangler. Add the following to the bottom of httpd.conf:

Alias /phpmyadmin /usr/local/share/phpmyadmin
<Directory /usr/local/share/phpmyadmin/>
  Options Indexes FollowSymLinks MultiViews
  AllowOverride All
  Order allow,deny
  Allow from all
</Directory>

Restart Apache. Now navigate to http://localhost/phpmyadmin in a browser and you should see a place to log into phpMyAdmin.

To change the way the login works, use Pathfinder to go to:

/usr/local/share/phpmyadmin/config.inc.php

Edit config.inc.php using TextWrangler.

If you don't want to have to log in each time, add the following to the Server config:

$cfg['Servers'][$i]['user']          = 'root';
$cfg['Servers'][$i]['password']      = 'ROOT_PASSWORD';
$cfg['Servers'][$i]['auth_type']     = 'config';

To bypass the root password, change the following setting to TRUE. You can do this if you want to change the root password or if you can't log in. Change the root password using the phpMyAdmin UI, then set this back to FALSE to go back to using the password:

$cfg['Servers'][$i]['AllowNoPassword'] = true;

Once logged in, set up the users and databases you need.

Git

You will probably want to be able to use Git to check out projects and files. It should already be installed. To confirm that, and to see where it is and what version you have, type the following in the terminal:

which git
git --version

Drush

That's enough to get a local version of a web site working, but for Drupal you'll probably also want to install Drush, which makes managing a Drupal site much, much easier. To do that, clone Drush from its Git repository on GitHub:

cd /usr/local/Library
git clone https://github.com/drush-ops/drush.git
sudo chmod u+x /usr/local/Library/drush/drush
sudo ln -s /usr/local/Library/drush/drush /usr/bin/drush

Go to your home directory, /Users/YOURNAME.
Look for a file called .profile (the name starts with a dot; it is a protected file). If it doesn't already exist, create it. Edit .profile with TextWrangler, and add this line:

alias drush="/usr/bin/drush"

Done!

That's it. Everything necessary for a local installation of your web site should be working at this point. It's a little bit of work, but not terrible. Even installing MAMP requires a few extra steps that will send you to a terminal or require a way to find and edit hidden files, so this process is not a huge amount of extra effort.

Want Karen Stevenson to speak at your event? Contact us with the details and we’ll be in touch soon.


Sep 20 2013

For this episode, Addison Berry is joined by Nate Haug and Jen Lampton to talk about their new project, Backdrop, which is a fork of Drupal. This is a shocking move in the community, and it has generated a lot of questions and concerns. We talk about the motivation behind the fork, who's working on it, and ask about the negative impact this can have on the community. We asked on Twitter and Facebook what questions folks had, and we got a ton of responses. Addi puts these questions, taken directly from voices in the community, to Nate and Jen. Join us as we try to clarify what is going on with Backdrop, and the implications it has.

The questions we managed to get to are:

  • I'd really like to know about their UX strategy to reach the sitebuilder - Bojhan (Twitter)
  • Do you plan to release a commercial version? - Nadir Palacios (Facebook)
  • Do you have a roadmap, what type of dev will it best suit, how do you differentiate against other CMS, are you planing a camp? - pdjohnson (Twitter)
  • Any details on timelines would be great. Also the whole contrib module issue - Paul_Rowell (Twitter)
  • At some point in the future module developers will need to pick @backdropcms or @drupal. Doesn't this profoundly hurt both? - kcolwell (Twitter)
  • Will current D7 or future D8 modules and themes work with #backdrop? - ModulesUnraveled (Twitter)
  • What is the compelling reason for customers to choose Backdrop over established platforms like WordPress, Joomla, EE, & Drupal? - gdemet (Twitter)
  • For nonprofits on tight budgets, D7 is already expensive. I worry D8 will be worse. Might @backdropcms help this? - hanabel (Twitter)
  • Do you feel D8 prioritizes enterprise dev to the point where BackDrop's needed to fill smaller niche? - hanabel (Twitter)
  • Backdrop rolls back 1.5 years of core commits, removing many many patches representing improvements to a whole host of systems. All the community work on efforts on issues related to things totally unrelated to CMI and Symfony was tossed out. Work on mobile, work on accessibility, work on lots of little fixes, all gone. With a small core team, how do you fix that? Does Backdrop just toss away all those improvements and the community that made them? It just seems like a lot of wasted work. - MarcDrummond (Twitter)
  • How would you respond to criticisms that say #backdrop is a bad idea like this http://www.freelock.com/blog/john-locke/2013-09/drupal-8-vs-backdrop - ModulesUnraveled (Twitter)

[As a note, we will also have a podcast with Drupal 8 contributors to get another perspective on this story, so please understand that this conversation is not complete with just this podcast.]

Sep 11 2013

on September 11, 2013 //

Exposing options and configuration

Welcome to the third part of our series on writing Views query plugins! In part 1, we talked about the kind of thought and design work that needs to be done before coding on the plugin begins. In part 2, we went through the basics of actually writing a query plugin. In this final chapter, we will investigate some enhancements to make your plugin more polished and flexible.

Exposing configuration options

In part 2, we hardcoded things like the ID of the Flickr group we wanted to retrieve photos from, and the number of photos to retrieve. Obviously it would be better to expose these things as configuration options for the user to control.

In order to define configuration options for your plugin you need to add two methods to its class: option_definition() and options_form() (yes, the first is singular and the second is plural). option_definition() provides metadata about the options your plugin provides, and options_form() provides form elements to be used in the Views UI for setting or modifying these options. Let's look at some code.

<?php
function option_definition() {
  $options = parent::option_definition();

  $options['num_photos'] = array(
    'default' => '20',
  );
  $options['group_id'] = array(
    'default' => '',
  );

  return $options;
}
?>

As you can see, option_definition() is just an info hook, providing data about our options. The only required piece of data we need to provide is a default value, but there are several other options available including special handling for booleans and translations. Check out the full API description for more detail. The one important thing to note is that at the beginning of the function we are calling the parent. This ensures that any options defined by the base class are carried forward into ours. Forgetting to call the parent is a very common source of problems with Views plugins.

<?php
function options_form(&$form, &$form_state) {
  parent::options_form($form, $form_state);

  $form['num_photos'] = array(
    '#type' => 'textfield',
    '#title' => t('Number of photos'),
    '#description' => t('The number of photos that should be returned from the specified group.'),
    '#default_value' => $this->options['num_photos'],
  );
  $form['group_id'] = array(
    '#type' => 'textfield',
    '#title' => t('Flickr group ID'),
    '#description' => t('The ID of the Flickr group you want to pull photos from. This is a string of the format ######@N00, and it can be found in the URL of your group\'s "Invite Friends" page.'),
    '#default_value' => $this->options['group_id'],
  );
}
?>

Assuming you've done everything correctly, you should now be able to see the following form at Advanced -> Query Settings.

Having implemented these forms, we now need to be able to retrieve the saved values and use them in our query. These values are stored in the 'options' array on your view, and the individual options are keyed just as they are in your form definition (just as if they were being referred to in $form in FAPI).

<?php
function execute(&$view) {
  $flickr = flickrapi_phpFlickr();
  $photos = $flickr->groups_pools_getPhotos($this->options['group_id'], NULL, NULL, NULL, NULL, $this->options['num_photos']);

  foreach ($photos['photos']['photo'] as $photo) {
    $row = new stdClass;
    $photo_id = $photo['id'];
    $info = $flickr->photos_getInfo($photo_id);
    $row->title = $info['photo']['title'];
    $view->result[] = $row;
  }
}
?>

Functionally, of course, this is exactly the same as the last version. However, it is much more flexible and empowers your users to make the changes they need to.

Displaying images

Now we're retrieving data from Flickr, but we're still waiting for the stuff that is the whole point of this exercise: the images! There are a couple things we need to do to make this happen. We need to extend the query code to get the image data out of the Flickr API, and as we discussed in Part 1, getting all the data we need to display an image is a bit of a challenge given Flickr's API. We'll need to do the following:

  • Iterate through each photo retrieved to get its ID.
  • Call flickr.photos.getSizes for the photo and choose the size we want. The list of sizes is unpredictable, but every photo has an 'Original' size so we will always choose that one.

We will also need to create a new field handler to display the images.

Let's create the field handler first. The setup is the same thing we did before with the Title. First we add an entry to hook_views_data() in flickr_group_photos.views.inc to describe the field we are making available.

<?php
$data['flickr_group_photos']['image'] = array(
  'title' => t('Image'),
  'help' => t('The actual image from Flickr.'),
  'field' => array(
    'handler' => 'flickr_group_photos_field_image',
  ),
);
?>

Then we create a new handler called 'flickr_group_photos_field_image' as we have named it above. We will put this in a file called flickr_group_photos_field_image.inc in our handlers directory.

<?php
/**
 * @file
 *   Views field handler for Flickr group images.
 */

/**
 * Views field handler for Flickr group images.
 */
class flickr_group_photos_field_image extends views_handler_field {

  /**
   * Called to add the field to a query.
   */
  function query() {
    $this->field_alias = $this->real_field;
  }
}
?>

Pretty much the same as our text field handler, but this is just going to return the text of whatever image URL we have, and that isn't what we want. We want to display the actual image! In order to do that we need to override the render() function and rewrite the data we're returning. This function should return the HTML we want to be displayed when we add this field to our view. So we could do something like this.

<?php
/**
 * Render the field.
 *
 * @param $values
 *   The values retrieved from the database.
 */
function render($values) {
  $image_info = array(
    'path' => $values->{$this->field},
  );
  return theme('image', $image_info);
}
?>

The most notable thing is how we retrieve the data from our field. The render() function receives an object with all the data for a specific row in our view, and we retrieve the property named the same as our field, which we get from our instance of the handler object. This makes the code a little more portable since we aren't just hardcoding the name of our field in there. Then we pass this path to theme_image() to generate the output.

This will work; however, it's not really optimal, because it will display the image at its original size, and that will rarely be what we want. We could add 'width' and 'height' keys to the $image_info array, but that is really suboptimal when we have no idea what our source images will look like. What we really want to do is apply an image style to our image! In theory this would be pretty simple; however, Drupal's image styles only work on images that are stored locally, and not having any locally stored files was sort of the entire point of this exercise.

Contrib to the rescue! The Imagecache External module allows you to use core's image styles on external images. Phew. We can implement this in our field by calling theme('imagecache_external') with the path to our image, and the style we want to apply. Here's the newly modified code.

<?php
/**
 * Render the field.
 *
 * @param $values
 *   The values retrieved from the database.
 */
function render($values) {
  $image_info = array(
    'path' => $values->{$this->field},
    'style_name' => 'thumbnail',
  );
  return theme('imagecache_external', $image_info);
}
?>

And finally, let's not forget to add this class to our .info file!

files[] = handlers/flickr_group_photos_field_image.inc

If you've done everything correctly to this point, you should be able to go into an appropriate view, click Fields -> Add, and see the Flickr Groups: Image field available to be added. If you try to add it and get the 'Broken or missing handler' error, then something is most likely improperly named somewhere along the way.

OK so the field type is in place, now we need to get the data from our query plugin. This just involves retrieving the new data we need, and saving it to an appropriately named property in our row object.

<?php
function execute(&$view) {
  $flickr = flickrapi_phpFlickr();
  $photos = $flickr->groups_pools_getPhotos($this->options['group_id'], NULL, NULL, NULL, NULL, $this->options['num_photos']);

  foreach ($photos['photos']['photo'] as $photo) {
    $row = new stdClass;
    $photo_id = $photo['id'];
    $info = $flickr->photos_getInfo($photo_id);
    $row->title = $info['photo']['title'];

    $sizes = $flickr->photos_getSizes($photo_id);
    foreach ($sizes as $size) {
      if ($size['label'] == 'Original') {
        $row->image = $size['source'];
      }
    }

    $view->result[] = $row;
  }
}
?>

As you can see we've added another loop where we iterate over the available sizes until we hit the one labeled 'Original', and we use the 'source' property of that size as our image property on the row. Pretty simple stuff in the end. Once again, getting the data out of Flickr and into the view is the simple part. It's the pieces that surround and support that which take most of the work.

So having done all this, and clearing cache of course, you should now be able to see titles AND images in your view!

Field options

There's one more thing that's irritating in this code: the image style is hardcoded into the field handler. Wouldn't it be nicer if we could choose which image style we want? Thankfully, fields support options forms just like queries do. In fact, pretty much all views handlers and plugins support this functionality. Just add this code to your flickr_group_photos_field_image class.

<?php
function option_definition() {
  $options = parent::option_definition();
  $options['image_style'] = array('default' => '');
  return $options;
}

function options_form(&$form, &$form_state) {
  parent::options_form($form, $form_state);

  // Offer a list of image styles for the user to choose from.
  $form['image_style'] = array(
    '#title' => t('Image style'),
    '#type' => 'select',
    '#default_value' => $this->options['image_style'],
    '#options' => image_style_options(FALSE),
  );
}
?>

Not much to explain there: it looks like the query options we implemented above. After adding this code, you should have an option to choose an image style when you add a Flickr Groups: Image field to your view.

We will also need to tweak the field rendering to use the image style the user has chosen like so.

<?php
function render($values) {
  $image_info = array(
    'path' => $values->{$this->field},
    'style_name' => $this->options['image_style'],
  );
  return theme('imagecache_external', $image_info);
}
?>

Now we can have nicely styled images and their titles! Things are really looking nice now, aren't they?

Wrapup

We've covered a lot in this series, and there's so much more we can dig into! While we've looked at a lot of code, I don't think that any of it has been horribly complicated or mind-bending. It's mostly a matter of knowing what to put where, with a healthy dose of planning to make sure our data fits into the Views paradigm properly. In summary, the steps are:

  • Make a plan of attack, taking into account the data you're retrieving and the way Views expects to use it.
  • Create field handlers for your data.
  • Write remote queries to retrieve your data and store it in rows in the view object.

There's a lot of work in those steps, but after running through it a couple times the architecture makes a lot of sense.

Get the code!

I've made the code from this article available on Github! In addition to the functionality described here, it makes a couple more fields available and integrates them into the query engine. Feel free to fork and send pull requests if you find anything wrong or want to add more features.

Thanks for reading and following along. Now, go forth and consume APIs!

Greg Dunlap

Senior Drupal Architect

Want Greg Dunlap to speak at your event? Contact us with the details and we’ll be in touch soon.



Aug 07 2013

on August 7, 2013 //

I’ve been spending some time learning Android app development. I was initially attracted to it as I liked the openness of the various Android distribution channels compared to the heavy-handed approach Apple takes with iOS. While I’m still relatively new to Android development, I’ve made a few observations that I thought would be interesting to those in the Drupal community.

Android uses Dalvik, a re-implementation of the standard Java Virtual Machine, to run apps and substantial portions of the Android operating system. The Android SDK includes most of the standard Java APIs, packaged under the java.* namespace.

Where things get interesting is where the Android API, packaged under the android.* namespace, offers similar or enhanced functionality of the standard Java APIs. For example, Android includes XML utilities under android.util.xml. Likewise, Java provides (under the Java Extensions namespace) XML utilities under javax.xml. In fact, the Android API offers enhancements or alternatives to many core Java APIs.

This is very similar to how Drupal works, where we have enhancements and alternatives in the drupal_ functions. Many of the array, file, and string functions mirror core PHP functionality. Re-implementing language APIs isn’t necessarily a bad thing, especially where the language APIs have critical flaws. However, in both Android and Drupal it adds additional complexity for new developers to learn as they have to research each API alternative and decide what is best for their use case.
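
For a tiny concrete example (mine, not from the article): core PHP's substr() is not multibyte-safe, so Drupal 7 ships a drupal_-prefixed alternative.

<?php
$title = 'Tokyo 東京 travel notes';
// PHP's substr() can split a multibyte character in half.
$broken = substr($title, 0, 8);
// Drupal's alternative is Unicode-aware.
$safe = drupal_substr($title, 0, 8);
?>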

Much of the PHP world is going through a change where the dynamically typed nature of PHP is being directed to semi-typed code. While variables themselves do not have a declared type, Drupal (and Symfony) now use type hinting to enforce parameter types on method calls. Take a look at ConfigImporter::__construct(); every method parameter has an explicit type.
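
Here's a minimal sketch of what that type hinting looks like (with made-up class names, not the actual ConfigImporter code):

<?php
interface StorageInterface {
  public function read($name);
}

class ConfigImporter {
  protected $storage;

  // PHP throws a catchable fatal error if $storage is not a StorageInterface.
  public function __construct(StorageInterface $storage) {
    $this->storage = $storage;
  }
}
?>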

This semi-static nature of PHP code limits the effectiveness of static code analysis. Using an IDE like Eclipse shows what’s possible with a static language like Java. For example, it can detect misassignments in variable or return types as you write them in the code itself. Exceptions tend to be more specific (no catch(Exception $e)) while also being easier to detect earlier in the development process as each method must document what exceptions it throws.

Generics in particular solve a pain point of a dynamically-typed language like PHP. Load up any one of your production Drupal sites and check the watchdog log for warnings and notices (assuming they are logged at all). Odds are, a good number of them are from trying to iterate over non-arrays or non-objects, or are the result of a random integer or string being stuck into an array of entities or fields. Java solves this by allowing variables and methods to not just declare that they return a map, but that the map keys and values must be of a specified type. If Drupal 7 was written in Java, the declaration for hook_menu() might be something like:

// We return a map (like a PHP array with named keys) where the keys are strings and they point to a map.
public HashMap<String, HashMap> mymodule_menu() {
  …
}

Don’t get me wrong; I’m not saying that we should abandon dynamically typed languages and that all languages should be a re-implementation of Java. But, as Drupal developers, it’s important we keep an eye on what the rest of the programming world is doing so we don’t find ourselves behind current best practices.

Of course, all of this strictness over types and the preference for explicit getter / setter methods in Java leads to a tonne of boilerplate code. Reflection is possible in Java, but it’s nowhere near as easy as in PHP. We get used to being able to iterate over object properties, or using strings as method names or variables. Being able to use arrays as shorthand for accessing a set of object properties in a loop lets us write common methods in PHP in four or five lines. For example, imagine a scenario where we are loading a node from a remote service where we can’t ensure that the data is complete. Writing this validation in PHP could be very simple:

<?php
$properties = array('nid', 'title', 'author');
foreach ($properties as $property) {
  if (!isset($node->{$property}) || empty($node->{$property})) {
    throw new MissingPropertyException("Required $property is not set.");
  }
}
?>

In Java, odds are you'll end up inlining each if statement, increasing the possibility of bugs or simple copy-paste errors. Sometimes it's easy to look at PHP code like this and focus on how much more opaque it is. But when it comes down to it, real-world code like this is just too useful not to miss when using other, stricter languages.

Like many mobile and desktop application APIs, Android offers a persistent storage layer backed by SQLite. As a PHP and web developer, this sounds great! Most skills for database management should apply, even if we’re used to using a feature-rich database like MySQL or Postgres. Unfortunately, Android’s database APIs and examples seem like a step back to the days when PHP developers used mysql_query as the primary method of database interaction.

The first issue you'll run into is setting up your tables to store data. Unlike Drupal with its Schema API, Android creates tables by executing a SQL string manually assembled in your code. Since it's Java, string concatenation isn't nearly as simple as in PHP, leading to code that's difficult to read and littered with string constants. While the android.database.sqlite package offers methods for most common database queries, it is missing some functionality that Drupal developers will immediately notice, most notably an equivalent of merge queries. Fetching results is done with the SQLiteCursor class, which is serviceable but doesn't have some of the convenience methods Drupal developers are used to, such as fetchAll().
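
As a sketch of what that looks like in practice (the table and column names here are hypothetical, not from any real app), a typical SQLiteOpenHelper assembles its CREATE TABLE statement by hand:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class NodeDbHelper extends SQLiteOpenHelper {
  private static final String TABLE_NAME = "node";
  private static final String COLUMN_NID = "nid";
  private static final String COLUMN_TITLE = "title";

  // The table definition lives in a concatenated string; there is no Schema API.
  private static final String TABLE_CREATE =
      "CREATE TABLE " + TABLE_NAME + " ("
      + COLUMN_NID + " INTEGER PRIMARY KEY, "
      + COLUMN_TITLE + " TEXT NOT NULL);";

  public NodeDbHelper(Context context) {
    super(context, "nodes.db", null, 1);
  }

  @Override
  public void onCreate(SQLiteDatabase db) {
    db.execSQL(TABLE_CREATE);
  }

  @Override
  public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    // Schema migrations are also raw SQL strings.
    db.execSQL("DROP TABLE IF EXISTS " + TABLE_NAME);
    onCreate(db);
  }
}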

Android’s API includes a SQLiteQueryBuilder which is great for dynamically constructing SELECT queries. Unfortunately, it’s limited to only SELECT queries. Want to dynamically construct an INSERT or UPDATE query? Back to raw manipulation of a query string, just like the dreaded db_rewrite_sql().

Sometimes we like to complain about the abstraction presented by db_select() and friends. Using Android’s DB APIs is a great reminder of just how developer friendly Drupal 7’s database layer is.

It’s been an interesting experience to dig into a completely different language, API, and application paradigm after spending many years focusing on both Drupal and the web in general. I’m sure there will be more striking similarities and differences I run across as I keep exploring and learning. Have you learned something that made your Drupal-influenced mind shocked (or made your jaw drop) while learning a new language or API? Let us know in the comments!

Andrew Berry

Senior Drupal Architect

Want Andrew Berry to speak at your event? Contact us with the details and we’ll be in touch soon.

Aug 05 2013

on August 5, 2013 //

Give content editors more control over image cropping

Drupal image fields allow content editors to upload photos and pictures without tedious manual cropping and scaling. Once posted, images are piped through a series of automatic cropping and scaling presets, ensuring everything is fast and consistent. Unfortunately, all that automation can be a problem when content creators do need to tweak how an image will appear at different sizes. When that's the case, the Imagefield Focus module can help.

Picture of the module being set up

Once installed, Imagefield Focus gives site builders a new option when setting up the rules for those automatically-generated image derivatives. Imagefield Focus adds "smart" versions of the standard Scale and Crop actions that take into account an image's "focus point" -- a portion of the image that should always be visible, even when it's scaled down and trimmed to fit other dimensions. Editors can specify that focus region when uploading an image using a simple Javascript widget; if no focus is specified, the normal cropping and scaling behaviors take over.

Picture of the module in action

Although it doesn't give editors explicit control over the precise appearance of every version of an uploaded image, Imagefield Focus does the next best thing. It's a quick and easy addition to most sites, and can dramatically improve the quality of small thumbnails when used judiciously.

Jeff Eaton

Senior Digital Strategist

Want Jeff Eaton to speak at your event? Contact us with the details and we’ll be in touch soon.

Aug 02 2013

Karen McGrane and Jeff Eaton discuss their experiences building editorial interfaces for large CMS projects, the challenge of bridging online process with offline reality, and the fact that Microsoft Word will never, ever die.

Jul 31 2013

on July 31, 2013 //

Apache Solr can take your site's search to the next level, but it requires special setup.

Overview

Solr is a powerful and feature-rich search platform released by Apache. Integrating it with Drupal allows for faster and more advanced search options. However, it also means that a Solr instance needs to be installed and running somewhere, similar to how a database like MySQL is required.

Solr is a Java application and can be run independently of any server technology. However, for a production environment, it is typically best to run it in a J2EE server environment such as Tomcat, Glassfish, or JBoss. This article describes how to install Solr 4.3.0 for use by Drupal, running under Tomcat 7 on a Linux server.

Java Installation

Tomcat and Solr are both Java applications, so the only real prerequisite is to install Java. The method of installation will vary by Linux distribution and the desired flavor of Java. Redhat, CentOS, Debian, and Ubuntu all provide OpenJDK implementations of Java 7 in their software repositories, which is what is used here. Any Java implementation should work, though, so feel free to skip this step if a different flavor is desired, or if Java is already installed on the server. Please note that a full Java Development Kit (JDK) must be installed; a Java Runtime Environment (JRE) installation is not sufficient.

Redhat/CentOS:

yum install java-1.7.0-openjdk

Debian/Ubuntu:

aptitude install java7-jdk

Tomcat Installation

Some Linux distributions provide a Tomcat package in their software repositories. However, installing the latest version from the Apache Software Foundation ensures that all of the latest security and bug fixes are present. It also keeps all of the configuration and data files consolidated into one location, and works the same regardless of Linux distribution.

Step 1: Create a low-privilege user, which will be used to run the Tomcat service.

useradd -Mb /usr/local tomcat

Step 2: Download the latest tar.gz binary of Tomcat 7 from http://tomcat.apache.org/download-70.cgi to /usr/local/src/ on the server.

Step 3: Unpack the Tomcat tar.gz file to /usr/local/tomcat

tar -C /usr/local -zxf /usr/local/src/apache-tomcat-7.*.tar.gz
mv /usr/local/apache-tomcat-7.* /usr/local/tomcat

Step 4: By default, Tomcat listens on port 8080. However, that is a commonly used port for other services as well. To avoid conflicts, change Tomcat to use port 8983 instead with this search-and-replace command.

sudo sed -i s/8080/8983/g /usr/local/tomcat/conf/server.xml

Step 5: Finally, change the ownership of the Tomcat directory, and start it to verify that it is working

chown -R tomcat:tomcat /usr/local/tomcat
sudo -u tomcat /usr/local/tomcat/bin/startup.sh

It is important to note that there are some security implications of running Tomcat that need to be considered. Tomcat security is outside of the scope of this article, but there are a number of good resources available that describe how to harden a Tomcat installation, including one provided directly by the Tomcat project at http://tomcat.apache.org/tomcat-7.0-doc/security-howto.html. Generally speaking, though, if the only use for this Tomcat instance is to serve internal Solr requests, blocking outside access with the use of a firewall is usually sufficient.
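
As a sketch, assuming iptables, the Tomcat port could be limited to local connections like this:

iptables -A INPUT -p tcp --dport 8983 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 8983 -j DROP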

Solr Installation

Some Linux distributions also provide a Solr package in their repositories, but it is typically an old version. Just like with Tomcat, installing the latest package from the upstream project is the method used here.

Step 1: Download Solr-4.3.0 from http://lucene.apache.org/solr/ to the server and unpack the downloaded file.

tar -zxf solr-4.3.0.tgz

Step 2: Copy the java libraries provided by Solr to the Tomcat library directory

cp solr-4.3.0/dist/solrj-lib/* /usr/local/tomcat/lib/

Step 3: Copy the log4j configuration file provided by Solr to the Tomcat configuration directory

cp solr-4.3.0/example/resources/log4j.properties /usr/local/tomcat/conf/

Step 4: Copy the Solr webapp file to the Tomcat webapp directory

cp solr-4.3.0/dist/solr-4.3.0.war /usr/local/tomcat/webapps/solr.war

Step 5: Create the Solr context file at /usr/local/tomcat/conf/Catalina/localhost/solr.xml with the following contents.

<Context docBase="/usr/local/tomcat/webapps/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/usr/local/tomcat/solr" override="true" />
</Context>

Solr Indexes

Solr is capable of providing multiple search indexes, or cores, using just one instance of the Solr application. Each core is independently configured, and there is a single configuration file to define each of the cores. The steps below show how to create a core named drupal. These steps can be used to create as many cores as required, each with a unique name.

Step 1: Create the base Solr directory, and create a copy of the example configuration

mkdir -p /usr/local/tomcat/solr
cp -r solr-4.3.0/example/solr/collection1/conf /usr/local/tomcat/solr/

Step 2: Download the latest version of the apachesolr Drupal module from https://drupal.org/project/apachesolr to the server and unpack the downloaded file.

tar -zxf apachesolr-*.tar.gz

Step 3: Copy the Solr configuration files from the Drupal module to the example Solr configuration directory from above.

rsync -av apachesolr/solr-conf/solr-4.x/ /usr/local/tomcat/solr/conf/

Step 4: Create the Solr core definition file at /usr/local/tomcat/solr/solr.xml with the following contents to define the drupal core.

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="drupal" instanceDir="drupal" />
  </cores>
</solr>

Step 5: Create the drupal Solr core directory as defined above, and copy the example Solr configuration files to that location. These configuration files can be further modified as needed for any specific requirements for this core.

mkdir /usr/local/tomcat/solr/drupal
cp -r /usr/local/tomcat/solr/conf /usr/local/tomcat/solr/drupal/

Step 6: Stop Tomcat, make sure the permissions are correct, and start Tomcat back up

/usr/local/tomcat/bin/shutdown.sh
chown -R tomcat:tomcat /usr/local/tomcat
sudo -u tomcat /usr/local/tomcat/bin/startup.sh

The new Solr core admin interface is now available at http://localhost:8983/solr/#/drupal. The URL to use in Drupal's Apache Solr configuration is http://localhost:8983/solr/drupal.

Wrapping Up

The last order of business is to provide a method by which Solr can be started automatically when the server reboots. Attached to this article is an init script that provides exactly that, and works with both Redhat-based and Debian-based Linux distributions. Create the init file at /etc/init.d/tomcat. Then make sure it is executable, and configure it to start on reboot with the following commands.

chmod +x /etc/init.d/tomcat

Redhat/CentOS:

chkconfig --add tomcat

Debian/Ubuntu:

update-rc.d tomcat defaults

If everything goes well, you'll have a working Solr search server. For more information about integrating it with Drupal, you can visit the Apache Solr Integration project on Drupal.org.


Ben Chavet

Systems Administrator

Want Ben Chavet to speak at your event? Contact us with the details and we’ll be in touch soon.


Jul 24 2013

on July 24, 2013 //

Learn a few tips to overcome the headache of upgrading your site

The Drupal core team releases new versions of Drupal frequently. These can contain bug fixes, security patches, or both (sometimes, they even break the law and add new features). Keeping your websites up to date will ensure your users do not see unexpected errors produced by core, and helps prevent hackers from hijacking your site.

Upgrading Drupal core in a live website with a few contributed modules is a pretty straightforward process. Doing it in a website with custom modules, large amounts of data, and custom business logic is no easy task. Patience and meticulousness are your best friends in this endeavor. Every website is different and most probably you will need to perform additional tasks when upgrading your Drupal site.

At Lullabot we have come up with a list of common steps that act as a guideline to reduce risk. Before we start, let's imagine the following (pretty safe) scenario as a base to work from:

  • You have a production environment, a development environment and a local environment.
  • Your local environment has Drush installed.
  • You use a Version Control System such as Git to track code changes.
  • You are able to extract a database backup of your production environment.
  • You have SSH access to the Development and Production environments, plus permissions to manage the directory where Drupal is installed in each of these environments.

The local and development environments are useful because they allow you to test upgrades before performing them on your production environment. If there's any way to avoid it, do not upgrade core straight in production (that would definitely qualify as "Extreme Programming"). The closer you can get to the scenario listed above, the better. It will help you catch bugs during the process before they reach production.

Verifying current and target versions

Let's start by checking how many versions behind we are (the higher the number, the longer each forthcoming step will take).

  1. Locally, run drush status to check what version the site is at.
  2. Go to http://drupal.org/node/3060/release, filter by the version of Drupal core you are using and see how many versions behind your site is.
  3. Read each of the release notes (do not be sneaky here: open the whole release node and do not read just the summary) to pinpoint API changes that may break the site (normally this is not the case, but beware). The Drupal core team makes it very clear if there is a change in an API (for example, a function has been removed or its arguments have changed). If that happens, verify that your custom modules (and maybe even contributed ones) comply with that change.

Updating the code locally and inspecting changes

Now let's get the new version in place locally:

  1. Open a console and go to the root directory of our local Drupal installation
  2. Make sure that your code and database are up to date. The former may mean to execute git pull if you are using Git, while the latter can be achieved by executing drush sql-sync @myprodsite @self. Alternatively, you can just extract a database dump of your production environment, recreate your local database and load that dump into it.
  3. On the command line execute drush pm-update drupal to obtain the new release and update the database.
  4. Now you have the new version of Drupal core in your local environment. You may like to check what has changed in case you want to restore, for example, your customized .htaccess file, or in case you do not want files such as INSTALL.txt or LICENSE.txt in your root directory. You can get an overview of these changes with git status and quickly revert changes in some of the files with git checkout path-to-a-file.
  5. Now we are going to create a commit with the changes in core, while reviewing what has changed. If you feel confident enough, you can just do git add . and commit that. Alternatively, if you want to really know what has changed in this new core release, run git add --patch, which will go change by change and let you decide whether to commit or discard each of them. Note that this can be a very long process, but it will teach you a lot too. (The whole local sequence is condensed below.)
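
Put together, the steps above boil down to something like this in the terminal (using the @myprodsite alias from step 2; the commit message is just an example):

git pull
drush sql-sync @myprodsite @self
drush pm-update drupal
git status
git add --patch
git commit -m "Update Drupal core to the latest release"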

Testing

Test the new release locally before pushing your changes to the remote repository. Navigate through your site and simulate the most important tasks in it, verifying that nothing breaks them. If there are automated tests (the best and rarest scenario), run them.

The next step is to push our changes to the remote repository and update the development environment. Normally you will just need to do the following (unless an automated job does it for you):

  1. Log into the development environment and install a copy of the production environment's database.
  2. Go to the Drupal root directory.
  3. Run the following commands:

     git pull
     drush updatedb

That's it. Now let your QA team have a look at the site for a while.

If your workflow follows a SCRUM methodology or similar, try to get the core upgrade into the development environment at the start of a sprint so the rest of the team can test the new codebase while the sprint goes on.

Hitting the red button to go to production

Once you have done enough testing, you are ready to go. The steps are pretty similar to the ones in the previous section, except that you should already have backups of the current and previous states of the production database. It is very useful to use Git tags for the production environment and point it at them instead of a branch, as that gives you the option to roll back to the previous tag in case something goes wrong.
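
As a sketch (the tag names here are hypothetical), a tag-based deployment and rollback could look like this:

git tag build-42
git push origin build-42
# On the production server:
git fetch --tags
git checkout build-42
drush updatedb
# If something goes wrong, check out the previous tag
# (and restore the database backup taken before the upgrade):
git checkout build-41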

Want Juan Pablo Novillo Requena to speak at your event? Contact us with the details and we’ll be in touch soon.

Jul 17 2013

on July 17, 2013 //

Simplify testing with a dedicated QA site for every new feature — automatically!

It's no secret that at Lullabot, we love GitHub. We use it for as many projects as possible, and have found some great success with the tools it provides. They've helped us simplify development, code review, documentation, and even communication and transparency with our clients.

Our typical process for a Drupal project begins with a checkout of our Drupal Boilerplate (thanks, Eric Duran!). It gives us a base directory structure to start from, some basic Drush commands, and Drush aliases to simplify deployment tasks. From there, we commit Drupal into docroot and start to build out the site as normal.

Next, we input the project's requirements into the GitHub issue tracker. These take various forms for different clients, depending on whether we're starting from user stories, visual design assets, or actual written technical requirements. After we have a decent backlog of tickets, we prioritize them with the client and group them into milestones. Those milestones are typically set up as two-week sprints, and each ticket will typically get its own branch of code.

When a ticket is ready for review, the issue can be turned into a pull request with a nice command line tool called hub. Pull requests are an effective means of performing peer review on code before merging into your stable branch. If one developer sends a pull request, another reviews the code before it's merged with the project's master branch.

While the peer review process is something we do for our own sanity, quality control, and knowledge sharing, it's rarely a process that clients can participate with. When the client is technically savvy and has time to work with us on that level it's great, but it's not something we can count on with every project.

A solution we've found to address this is to leverage the power of GitHub and to add some Jenkins magic into the mix. By tying GitHub's webhooks into a Jenkins instance, we can turn the changes for each pull request into a fully testable Drupal environment. This allows the client or a reviewer to click around a fresh QA site, test the features that would be affected, and easily approve or deny those changes. They don't have to manually push code to a QA environment, or worry about stepping on other in-progress features in the process. The site they're testing is completely dedicated to the changes within that feature's branch, and it's extremely productive as a result.

If you'd like to skip ahead to the geeky details, dive right into the GitHub repository. Otherwise, you can stick around for an overview of how we did it.

The process goes something like this:

  1. A new Pull Request is created.
  2. Jenkins detects the Pull Request, creates a new Drupal instance, and applies the Pull Request to the new instance.
  3. Jenkins posts back to the Pull Request on GitHub with a comment about where the new environment can be found.
  4. Once the Pull Request is merged, you can tell Jenkins to delete the environment, and it can then post a comment to that effect.

There are other commands you can access by commenting on the pull request, such as asking the bot to rebuild (after a new commit, for example), and it will also post to the thread if a build fails.

This process has really helped in our projects to streamline peer reviews. Here's what a client had to say about the process:

"The pull request environments have been a huge help in testing and validating the features or bugs for our sites. We are able to isolate an issue and validate it on a functioning site before final testing and deployment. It has also made our development practices clear, since you know what you're committing code to and testing against." — Mike Shaver, Intel

Overall this tool really saves Lullabot a lot of time, which saves our clients money. We've open-sourced the project on GitHub and would love to hear what you think of this. If you find it useful, but don't have the expertise to set it up, give us a shout and let's talk about how we can help you.

Jerad Bitner

Sr. Technical Project Manager

Want Jerad Bitner to speak at your event? Contact us with the details and we’ll be in touch soon.

Jul 15 2013

on July 15, 2013 //

Disconnect idle users to keep unattended computers safe

It's inevitable: no matter how secure your web site is, Murphy's Law ensures that one of your site's users will eventually leave their account logged in, and their computer unlocked. If they're a new visitor to the site, the stakes are low, but if an administrator's account is left logged in on an insecure computer, it can be disastrous. Many banks and e-commerce sites solve the problem with an aggressive session timer. If you're not active in your browser for twenty minutes, you're logged out. With the Auto Logout module, you can give your site's users the same protection.

Once installed, Auto Logout gives site builders a host of configuration options. Session timeouts can be managed on a per-role and per-user basis; optional on-screen countdowns and warnings can prompt users before they're disconnected to avoid lost work; and site builders can tweak everything from the text of the warnings to the URL they're redirected to once they're disconnected.

Auto Logout module doesn't cover every use case: if you need to prevent people from logging in on multiple computers, for example, additional modules like Session Limit are required. It's a quick and easy feature to add, however, and one that can save your users' bacon if they're prone to leaving a computer unattended.

Jeff Eaton

Senior Digital Strategist

Want Jeff Eaton to speak at your event? Contact us with the details and we’ll be in touch soon.


Jul 12 2013

In this episode Addison Berry has fellow Lullabots Sally Young and Juampy Novillo Requena sit down to discuss their recent work on "decoupling" Drupal. This is based on concepts like COPE, which was pioneered by NPR (National Public Radio), and posits the benefits of completely separating content and editorial workflow from the presentation. Within Drupal, that means using Drupal to manage the content, but not using Drupal's theme layer at all. Instead we talk about how they use Drupal to create APIs which can be used as web services by any front end technology you'd like, such as AngularJS, iOS, or Symfony2. Find out what they're on about, and how they've been doing this with Drupal on recent projects.

Jul 11 2013

on July 11, 2013 //

Live in the Midwest and like Drupal? So do we! Or at least some of our team. We'll be out in force at this year's annual Twin Cities DrupalCamp, held July 18th-21st.

As a resident of Minneapolis I've been attending this camp regularly since its inception, and it's always been a great opportunity to meet other local Drupal developers and to interact with those that I've known for years. Every year I'm delighted at the number of people who come to learn about Drupal. Last year we had nearly 300 people attend the camp. This year is shaping up to be just as good, and Andrew, Blake, Emma, Jeff, Nate, and I will all be in attendance.

I've also been helping to organize the camp this year in a much larger capacity than in previous years. I've normally poked my head in a bit here and there, but this year I've been involved with the entire session selection process: soliciting session submissions, helping to choose sessions, and contacting presenters. It's been a great way to get to know more of the people in the local community. I also helped by building the camp website, which was a learning experience better suited for another blog post in the future. Suffice it to say, I'm super excited to see this all come together and to have so many of my friends in town for the weekend.

Keynote By Jeff Eaton

Our very own Jeff Eaton will be delivering the keynote on Saturday, titled "The Page is Dead, Long Live the Page," in which he'll talk about the rise of mobile apps, content APIs, and site personalization, and the tools Drupal gives us to build flexible, future-friendly sites.

Sessions

The Lullabot team will be presenting the following sessions throughout the weekend.

I am really excited about the great content from all the speakers. And unlike past years, where we waited until the last minute to schedule everything, we're ahead of the curve this year and have already put together a complete schedule for the weekend.

Drupalize.Me Free PSD to Theme Workshop

Last year Emma was the keynote speaker at the camp and during her presentation announced that she would be open-sourcing her PSD to Theme workshop. This year Emma and Joe will be teaching a version of the PSD to Theme workshop that is completely free. (Not surprisingly, the workshop is already FULL.) But you should still come to the camp and talk to us about the newly open sourced curriculum.

We're looking forward to seeing you at the camp next week. Be sure to say "hello"!

Jul 11 2013

on July 11, 2013 //

This weekend is going to be a scorcher of Lullabot activity in North America! Our team is making appearances in LA, New York, and Toronto. Participating in DrupalCamps is a great way for the Lullabot team to stay in touch with what's happening in the Drupal community, and odds are good that we'll be coming to an event near you.

The Toronto camp has a history of featuring Lullabot presenters, and this year we are delighted they have asked Andrew Berry to keynote the conference! His presentation, Druplicon's Fables: Three stories of mistakes, morals, and doing it right, will be delivered on Saturday morning. Emma Jane Westby will be giving two sessions at the camp: From Interior Decorator to Architect: Changing how we work and Git Makes Me Angry Inside. Attendees at the camp will also have the opportunity to meet our Senior Developer, Angus Mak (be sure to ask him about his new puppy and airport time capsules).

NYC Camp (July 12-15)

With nearly 800 registered attendees, this camp is going to blow your mind. Keynote sessions are being delivered by Larry "Crell" Garfield and Fabien Potencier (the creator of Symfony). Lullabot's own Kyle Hofmeyer and Dave Burns are making the trek from Philly to deliver our Community Tools Workshop. They're looking forward to getting you up to speed on the essential tools used by the Drupal community. By the end of this workshop, you'll have Git and Drupal 8 installed on your laptop. It's an extremely valuable workshop for anyone who wants to get started with Drupal development, but needs guidance to get up and running.

Finally, we've got the left coast covered too with DrupalCamp LA. Our Senior Account Manager, Andrew Wilson, will be at the camp talking to folks about Drupalize.Me, our premium Drupal training videos and training services. Andrew is passionate about hearing people's stories about the challenges they're facing in Drupalland, so be sure to drop by the Drupalize.Me booth and give him an earful. Sean Lange, Senior Front-End Developer, will also be in attendance. If you're looking for someone to give you an earful, stop by and ask him to explain the difference between 'mobile', 'adaptive', and 'responsive' web sites!

Hopefully we'll see you at one of these fantastic community events this weekend. (And if you're interested in having us speak at an upcoming event, please get in touch!)

Jul 11 2013

on July 11, 2013 //

The O'Reilly book is being rolled out in videos

We've started a brand new series of series! We are turning the great O'Reilly book, Using Drupal, 2nd Edition, into video lessons. Instead of being just one series, it will be thirteen individual series—one for each chapter of the book (plus appendices). We've created a new guide on Drupalize.Me to house the new series as they come out, and you can see that the first two are already up there. In addition to the first series, About the Using Drupal series, the first video from the Drupal Overview chapter, "What is Drupal?", is also free.

Addison Berry

Director of Education

Want Addison Berry to speak at your event? Contact us with the details and we’ll be in touch soon.

Jul 10 2013

on July 10, 2013 //

With a few tweaks, I can see my localhost on every device I use

Do you develop your websites on your local machine? Do you need to test those sites across multiple devices, and face hassles using them to access the locally-hosted site? Have you tried different methods, processes, workarounds, tutorials, and blog posts about how to connect to your local machine? If you are like me, then you answer all of these questions with a loud, "yes!"

The solution is xip.io, a service so simple that I spent a long time trying to figure out exactly what it does! It's a free service from 37Signals, the makers of Basecamp, that uses "wildcard" domain names to route requests to any computer on your network. After googling around to understand how xip.io actually works, I was able to configure my system to use it with complete success. My new workflow for testing is easy, fast, and flexible! Let me show you how I did it.

My goal is to view my locally-hosted website on as many displays/devices as possible.

Before we get into the weeds, let's take a look at where I was starting from.

  • I used MAMP PRO to manage my sites.
  • I used a wireless router within my house.
  • I developed on a Mac, which gave me great access to Chrome, Safari, and Firefox. With a virtual box I could test Windows with IE7, 8, 9, and 10. It was slow, but serviceable most of the time.
  • I had a Windows laptop connected to my local network. If I edited its hosts file, I could access the website being hosted on my Mac.
  • I had an iPhone, iPad and an Android tablet -- and testing on those devices was not easy. I was constantly changing settings and configurations, altering settings in virtual hosts, using easyDNS, and cobbling together partial fixes to get a stable setup that worked for our testing process.
  • I wanted all of my devices to agree that a particular domain name, like "http://www.my-client-web-site.dev", be served from my Mac.

If you have a similar setup and face similar problems, the steps I used to get xip.io working might be a good solution for you, too. If your setup is different, I hope it will make you curious enough to investigate alternatives and maybe even share your own development process in the comments!

Making it happen

With xip.io and a simple addition to my localhost configuration (an alias in MAMP Pro), all of my devices can access my local site from a custom URL. I can grab my iPhone, connect to my wireless network, and enter an address like 'ahoy.192.168.1.14.xip.io' into Safari. The request is routed to my Mac, then MAMP Pro matches the first part of the address (ahoy) to the site alias that I set up. Once it's set up, any machine on my local network can use the same address to test the site!

Here are the setup steps I followed to get the magic running.

The site on my local Mac (http://ahoy.local):

My MAMP Pro setup:

The IP address of my local machine:

An additional MAMP Pro alias for this site:

This alias combines my local IP address with the xip.io naming convention.
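If you manage your virtual hosts by hand instead of through MAMP Pro, the alias boils down to something like the following Apache configuration. This is a minimal sketch; the site name, document root, and IP are placeholders from my setup:

    <VirtualHost *:80>
      ServerName ahoy.local
      # The wildcard alias lets this vhost answer for any xip.io
      # hostname that resolves to this machine, e.g.
      # ahoy.192.168.1.14.xip.io.
      ServerAlias ahoy.*.xip.io
      DocumentRoot "/Users/me/Sites/ahoy"
    </VirtualHost>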

That's it!

The end result

I can now see my site (http://ahoy.192.168.1.14.xip.io) on all of my devices. By simply changing the prefix on #.#.#.#.xip.io to another site alias, I can use different names for different sites. Once it's set up, it just works!

Sean Lange

Senior Front-end Developer

Want Sean Lange to speak at your event? Contact us with the details and we’ll be in touch soon.

Jun 28 2013
Jun 28

There's a lot of ground to cover and we don't waste any time. In this episode we talk about the key reasons to use BDD: to improve development processes; to improve conversations with stakeholders; and, of course, to automate testing of your sites so that you can avoid click-style testing of new code. We dive into the specifics of what tools you will need for Drupal-powered sites: Behat, Mink, Selenium, and the Drupal extension.

We also discuss the Gherkin feature syntax and how it's intended to provide a way both to automate tests and to make it easier for everyone involved with a project to discuss a feature set using a common language, as well as some of the philosophies behind BDD and best practices for writing good testing scenarios.
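If you haven't seen Gherkin before, here's a minimal (and entirely hypothetical) feature file, written against steps that Mink and the Drupal extension provide out of the box:

    Feature: User login
      In order to manage my content
      As a registered user
      I need to be able to log in to the site

      Scenario: Logging in with valid credentials
        Given I am on "/user/login"
        When I fill in "Username" with "editor"
        And I fill in "Password" with "secret"
        And I press "Log in"
        Then I should see "Log out"

The point of the syntax is that a stakeholder can read (and even write) that scenario without knowing anything about the code that executes it.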

We wrap up the episode with a discussion of how BDD is being used to assist in the development of Drupal.org, and how it ensures our favorite community resource continues to function as expected.

Jun 21 2013
Jun 21

Content strategy, tactics, and humor from industry experts and assorted ne'er-do-wells.

Length: 34:34 minutes (13.65 MB) Format: mono 44kHz 55Kbps (vbr)
Jun 19 2013
Jun 19

on June 19, 2013 //

Help yourself, the project, and everyone on your team by following best practices as much as possible

At Lullabot, when working on a client project, we assign resolved tickets to other bots for peer review. This process has turned out to be very effective for sharing knowledge, improving our coding standards, and doing general QA (note: this does not exclude an external QA test).

Most of the backend and frontend developers at Lullabot have good knowledge of all sorts of best practices, whether they come from Drupal.org, jQuery, Compass, AngularJS, or any other technology that we use. Whenever we need to solve a problem, we ask: what is the standard and most effective way of solving this?

When a project has been developed following coding standards and relies on third-party code as much as possible, it is much more likely that new people joining the project will understand its APIs and be able to start coding without extra help, which minimizes the time spent deciphering a project's quirks and custom logic. Similarly, if another company takes over the project later on, the same rule applies: they will know where the logic lives, their assumptions will most probably be correct, and they won't have to spend much time evaluating the overall complexity of the site.
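To make that concrete, here's a small sketch of what Drupal 7 code written to the standards looks like (the module name is hypothetical): an "Implements hook_help()." doc block, two-space indentation, and user-facing strings wrapped in t():

    /**
     * Implements hook_help().
     */
    function mymodule_help($path, $arg) {
      // Only respond on our own help page.
      if ($path == 'admin/help#mymodule') {
        return '<p>' . t('This module demonstrates the coding standards.') . '</p>';
      }
    }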

How can I learn Drupal's Coding Standards?

All of these docs are at Drupal.org. Here are links to them:

There are also guides available for each role within a team:

How can I learn all the above?

The amount of information at the above links may be overwhelming. The best way to absorb it is to get involved in the community. Depending on your skills there are different ways to start. The more you participate in contributed modules and core, the more you will understand how Drupal works, and the more you will learn from the great and friendly minds behind it, who will review your code and give you tips to improve it. Treat it as a hobby: a place where you learn for free while helping to improve a tool you use.

Looking forward to seeing you in the issue queues!

Want Juan Pablo Novillo Requena to speak at your event? Contact us with the details and we’ll be in touch soon.

Jun 14 2013
Jun 14

For this episode we have special guest Chris Eppstein join Addi, Kris Bulman, Micah Godbolt, and Carwin Young to talk about Sass and Compass. Chris is the creator of the Sass framework Compass, and is also part of the core Sass team. We start out talking about CSS preprocessors generally, and comparing several of the popular ones out there today. Then we move on to talking about managing CSS when using preprocessors, what new cool things are coming in Sass 3.3, working with them in Drupal, and how Chris' new job at LinkedIn impacts the Sass and Compass projects.
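If you've never tried the combination, a tiny sketch gives the flavor: import a piece of Compass and let its mixins generate the vendor-prefixed CSS for you (the selector here is just an example):

    @import "compass/css3";

    .button {
      // Compass writes the vendor prefixes so we don't have to.
      @include border-radius(4px);
      @include box-shadow(0 1px 2px rgba(0, 0, 0, 0.2));
    }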

Jun 12 2013
Jun 12

on June 12, 2013 //

Thanks for another great North American DrupalCon

By now everyone has hopefully made it safely home from DrupalCon (if you're still there, it ended more than two weeks ago, so you might want to head home). While the dust settles, we wanted to thank all who helped make DrupalCon Portland a success, and also share some highlights from Lullabot.

The Lullabot Party

The Lullabot Party at DrupalCon Portland 2013

Thanks so much to all who braved the early rain (it did clear as the evening went on) and joined a packed house at the Wonder Ballroom for the Lullabot DrupalConPDX Party. A special thanks is in order for Orbit for coming all the way out to Portland to rock the Lullabot party.

Lullabot talks and presentations

Thanks so much to all of you who attended the many talks and presentations that bots gave at this year's DrupalCon. The feedback has been great, and we really appreciate the full rooms of people who attended our sessions. Lullabot was honored to have bots be a part of eleven different sessions at DrupalCon Portland, and since audio and slides are now up for all of them, we've provided a list of links to each below, organized by day and time for your convenience. If you missed any of these sessions and want to see them, go check out the videos.


Tuesday

Wednesday

Thursday

To Austin!

Also, for those who missed it, it's official now that DrupalCon 2014 in North America will happen in Austin, Texas. We look forward to seeing you there next year.

Jun 10 2013
Jun 10

on June 10, 2013 //

Keep articles safe from collisions with per-user locks

Drupal's editorial experience has improved considerably over the past several releases, and Drupal 8 promises to be even better. However, it's still easy for writers and editors to collide with each other when they collaborate. If two people edit the same piece of content at the same time, one user's changes are inevitably lost. Fortunately, the Content Locking module is ready to help.

Module in action

When Content Locking is enabled, opening a node's edit form "checks it out." Any other users who attempt to edit the node will receive a warning, and won't be able to make changes until the initial user saves their changes or closes the edit form. This behavior can be turned on and off for individual content types, and the module can also warn editors when unsaved changes are about to be lost.

More importantly, site admins can set up timeouts for these locks. If a user edits a node but closes their browser without saving, Content Locking will wait 30 minutes (or longer, depending on the site's configuration) and then release the node for other editors. Administrators can also visit an overview page that displays all of the currently active locks. If a user needs emergency access, admins can revoke a lock manually to avoid the wait.
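Getting the module running is quick. Assuming the project's machine names on Drupal.org (the timeout behavior ships as a submodule), something like the following Drush commands should do it:

    $ drush dl content_lock
    $ drush en -y content_lock content_lock_timeout

From there, the per-content-type settings and the timeout length are configured through the module's admin UI.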


Preventing editor collisions is a tough problem, but Content Locking solves the worst of the challenges. It also provides plenty of configuration switches for site builders who need to tweak its behaviors.

Jeff Eaton

Senior Digital Strategist

Want Jeff Eaton to speak at your event? Contact us with the details and we’ll be in touch soon.
