Sep 13 2018

Yesterday at Drupal Europe, Drupal founder and lead developer Dries Buytaert gave a keynote that outlined the future for Drupal, specifically the release of Drupal 9, and how it impacts the lifespan of Drupal 7 and Drupal 8.

For the TL;DR crowd, the immediate future of Drupal is outlined in the snappy graphic above, and shared again below (thanks, Dries!).

The big takeaways are:

  • Drupal 9 will be released in 2020.
  • Drupal 7 end-of-life will be extended out to 2021, even though Drupal usually only supports one version back.
  • Drupal 7 and Drupal 8 will be end-of-life in 2021.

Wait… what? This proposed schedule breaks with tradition – Drupal has always supported one version back. And this schedule gives D8 users a single, short year to upgrade from Drupal 8 to Drupal 9.

So what now? Wait until 2021 to move your site off Drupal 7? Do two (possibly costly) upgrades in three years? Bail on Drupal entirely?

First and foremost, Don’t Panic.

Let’s explore each of the options in a little more detail to help inform your decision making process.

Upgrade from Drupal 7 to 9

When Drupal 8 became available, a lot of organizations using Drupal 6 opted to wait and bypass Drupal 7 entirely. The same is certainly an option for going from D7 to D9.

On the plus side, taking this route means that it’s business as usual until 2020, when you need to start planning your next steps in advance of 2021. Your contributed modules should still work and be actively maintained. Your custom code won’t have to be reworked to embrace Drupal 8 dependencies like Symfony and the associated programming methodologies (yet).

Between now and then, you can still do a lot to make your site all it can be. We recommend taking a “Focused Fix” approach to your D7 work: rather than a wholesale rebuild, you can optimize your user experience where it has the most business impact. You can scrub your content, taking a hard look at what is relevant and what you no longer need. You can also add smaller, considered new features when and if it makes sense. And savvy developers can help you pick and choose contributed solutions that have a known upgrade path to Drupal 8 already.

But it isn’t all roses. Delaying the potential problems of updating from 7 to 8 doesn’t make those problems go away. Drupal 9 will still require the same sort of rework and investment that Drupal 8 does, because it is built on the same underlying frameworks. And Drupal is still going to push out some updates to Drupal 7 up until its end-of-life, most notably requiring a more modern version of PHP. Changes like this will definitely affect both community-driven modules and any custom code you may have.

Upgrade from Drupal 7 to 8 to 9

“Ginger Rogers did everything [Fred Astaire] did, backwards and in high heels.”

— Bob Thaves

As the saying goes, the most efficient way to get from Point A to Point B is a straight line. Major versions of a platform effectively form that line. In this case, you can think of the “straight line” as going from D7 to D8 to D9, rather than trying to go around D8 entirely.

It’s critically important to understand one unique feature of Drupal 9: It is designed from the ground up to be backwards compatible with Drupal 8.

Angie Byron, a.k.a. Webchick, gave an excellent talk about what this really means at BADCamp last year.


Again for the TL;DRs — “backwards compatibility” means that code is deprecated and ultimately removed from a code base over time, in a way that provides a lot of scaffolding and developer notice. This practice results in major version upgrades that require very little rework for the site to stay functional.
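
To make that concrete, here is a minimal sketch of the pattern using a well-known example: drupal_set_message(), which was deprecated in Drupal 8.5 in favor of the messenger service and removed in Drupal 9.

<?php

// Old call: still works in Drupal 8, but core flags it with an
// E_USER_DEPRECATED notice pointing developers at the replacement,
// and it is gone in Drupal 9.
drupal_set_message(t('Settings saved.'));

// Replacement: the messenger service. This call works identically in
// Drupal 8.5+ and Drupal 9, so updated code needs no rework at upgrade.
\Drupal::messenger()->addStatus(t('Settings saved.'));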

The backwards compatible philosophy means that the hard work you do upgrading to Drupal 8 now will still be relevant in Drupal 9. It won’t be two massive upgrades in three years. As long as your Drupal 8 site is up to date and working properly, D9 should not bring any ugly surprises.

Have more questions about Drupal 7 vs 8 vs 9? Contact us and we’d be happy to help with your project.

Let’s talk community code

When Drupal 8 was released, one of the BIGGEST hurdles the community faced (and continues to face) was getting contributed modules working with the new version. It required ground-up rewrites of… well… pretty much everything. A lot of modules that people were using as “basics” in Drupal 7 were folded into Drupal 8 core. But a number were not, and people volunteering their time were understandably slow to bring their contributed code over to Drupal 8. As a result, many sites were hesitant or unable to upgrade, because so much work would have to be customized to get them to the same place they were on Drupal 7.

So will it be the same story going from Drupal 8 to Drupal 9? Will we have to wait years, in some cases, for our business-critical tools to be updated?

According to Dries’ post, the answer is no. Drupal is extending the backwards-compatible philosophy to the contrib space as well.

… we will also make it possible for contributed modules to be compatible with Drupal 8 and Drupal 9 at the same time. As long as contributed modules do not use deprecated APIs, they should work with Drupal 9 while still being compatible with Drupal 8.

— Dries Buytaert

Assuming this plays out as intended, we shouldn’t see the same dearth of contrib support that we did when Drupal 8 became a reality.
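
In practice, keeping a module compatible with both versions mostly means swapping deprecated calls for their supported equivalents as they are flagged. A small sketch, using the deprecated entity_load() as the example:

<?php

// Deprecated since Drupal 8.0.0 and removed in Drupal 9; a module
// still calling this breaks the moment it runs on Drupal 9.
$node = entity_load('node', 42);

// The replacement works on both Drupal 8 and Drupal 9, so the same
// module release can support the two versions at once.
$node = \Drupal::entityTypeManager()
  ->getStorage('node')
  ->load(42);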

And yes. There are a lot of assumptions here. This is Drupal’s first pass at a backwards-compatible upgrade methodology. There is inherent risk that it won’t work flawlessly. All we can say for sure is that the community is very hard at work getting to a reliable release schedule. A thoughtful upgrade approach should make the “Drupal Burn” associated with major version upgrades a thing of the past.

So which way should I go?

So which approach is best? For starters, think about whether an upgrade benefits you in the immediate term. Read a little about Drupal 8, audit your site with our website checklist, and if you still aren’t sure, you can start with our quiz.

If all of this feels overwhelming, contact us. Kanopi Studios is known for its great support (if you choose to stay on D7), as well as great design and build execution (if you choose to go to D8). Whichever way you choose, we’ve got you covered.

Aug 30 2018
Aug 30

Pelo Fitness spinning class

One of the best things about Drupal is its security. When tens of thousands of developers work collectively on an open source project, they find all the holes and gaps, and strive to fix them. When one is found, patches go out immediately to keep sites safe and secure. But a site is only secure if those patches are applied when they are released.

Pelo Fitness is a Marin County-based community dedicated to a culture of fitness. They offer cycling, strength, yoga & nutrition programs customized to an individual’s needs and fitness level. Whether someone is a competitive athlete, a busy executive or a soccer mom (or perhaps all three), their programs are designed to build strength and endurance, burn calories and boost energy.

Yet their site was vulnerable because they hadn’t applied a few major Drupal security updates. There was a concern that the site could be hacked, jeopardizing client information. Pelo Fitness customers use the site to purchase class credits and reserve bikes for upcoming classes, which requires users to log in and enter personal information.

Want to keep your site secure? Contact us to get started. 

The solution

Kanopi performed all the security updates to get the Pelo Fitness site onto the latest version of Drupal. All out-of-date modules were updated, and the site was scanned for suspicious folders and code; anything that looked suspect was fixed. Care was taken not to push code during high-traffic times when reservations were being made, so code was pushed live during specific break times to allow for the least disruption. Lastly, the site was moved over to Pantheon for managed hosting.

Thanks to the Drupal support provided by Kanopi, the Pelo Fitness website is now protected and secure. Inspired to make all their systems stronger, Pelo Fitness also switched to a different email system so that all of their tech solutions were up to date.

How to keep your site secure

Websites are living organisms in their own way, and need constant care and feeding. It’s imperative to apply critical security patches as soon as they come out so your users’ information (and your own) is kept secure at all times. There are a few simple things that you can do on your Drupal site to minimize your chances of being hacked.

  • Stay up to date! Just like Pelo Fitness, make sure you pay attention to security updates for both Drupal core and your contributed modules. Security releases always happen on Wednesdays so it’s easy to keep an eye out for them. To stay up to date, you can subscribe via email or RSS on Drupal.org or follow @drupalsecurity on Twitter.
  • Enable two-factor authentication on your site. It’s a few seconds of pain for an exponential increase in security. This is easily one of the best ways to increase the security of your site. And besides, it helps you make sure you always know where your phone is. The TFA module provides a pluggable architecture for using the authentication platform of your choice, and Google Authenticator integration is available already as part of their basic functionality.
  • Require strong passwords. Your site is only as secure as the people who log into it. If everyone uses their pet’s name as their password, you can be in trouble even if your code base is “bulletproof” (nothing ever is). The Password Policy module sets the gold standard for traditional password strength requirements, or you can check out the Password Strength module if XKCD-style entropy is more your thing.
  • Make sure you’re running over a secured connection. If you don’t already have an SSL (TLS, technically, but that’s another story) certificate on your website, now is the time! Not sure? If your site loads using http:// instead of https://, then you don’t have one. An SSL certificate protects your users’ activities on the site (both site visitors and administrators) from being intercepted by potential hackers.
  • Encrypt sensitive information. If the unthinkable happens and someone gets hold of your data, encryption is the next line of defense. If you’re storing personally identifying information (PII) like email addresses, you can encrypt that data from the field level on up to the whole database. The Encrypt module serves as the foundation for this functionality; check out the module page and you can build up from there (see the sketch after this list).
  • Don’t let administrators use PHP in your content. Seriously. The PHP filter module can get the job done quickly, but it’s incredibly dangerous to the security of your site. Be just as cautious about including JavaScript this way. If your staff can do it, so can a hacker.
  • Think about your infrastructure. The more sites you run on a single server, the less secure it is. And if Drupal is up to date but your server operating system and software aren’t, you still have problems. Use web application and IP firewalls to take your security even further.
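
As a taste of what the Encrypt module’s foundation gives you, here is a minimal sketch using the Drupal 7 Encrypt module’s documented encrypt()/decrypt() helpers from custom code. The value is hypothetical, and the exact behavior depends on the encryption method and key provider you configure in the module’s admin UI:

<?php

// Hypothetical example: encrypt a sensitive value before storing it,
// using the Drupal 7 Encrypt module's helpers.
$email = 'person@example.com';
$encrypted = encrypt($email);

// Store $encrypted instead of the plain value, and decrypt on read.
$plain = decrypt($encrypted);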

Contact us at Kanopi if you need help keeping your site secure.

Aug 29 2018

Image of a task board with MVP tickets

Congratulations! Your boss just gave you approval to build the website you’ve been pitching to them for the past year. A budget has been approved, and you have an enthusiastic team eager to get started. Nothing can stop you… until you receive the deadline for when the website has to go live. It’s much earlier than you planned, and there simply aren’t enough hours in the day, or resources available, to make it happen. Whatever will you do?

Let me introduce you to the minimum lovable product, or MLP.

What is an MLP?

You may have heard of a minimum viable product (MVP). Where a minimum viable product is a bare-bones solution that merely meets your needs, the minimum lovable product can be described as the simplest solution that meets your needs and is a positive step toward achieving your goals. It’s easy to view every aspect, every deliverable, as being fundamental to a project’s success. But when you actually look at each nut and bolt with a more discerning eye, you begin to realize that not every component is fundamental to the overall product’s success.

So basically, the MLP is the smallest set of features your site needs in order to satisfy your business goals at launch.

It’s important to note that an MLP is not necessarily a reduction in scope. It’s more a prioritization of the order in which things are addressed. The project team can circle back on anything that wasn’t part of the MLP. The goal behind an MLP is to deliver a functional product that you’re excited about, within the confines of the project.

When should you consider an MLP?

An MLP isn’t for every project, but it is usually best leveraged when there is a constraint of some sort. I used timeline as an example in my opening, but as you know, constraints can take many forms:

  1. Timeline: Maybe the deadline you need to hit simply won’t provide enough time to complete all the work you have queued.
  2. Resource Availability: Perhaps there are scheduling conflicts, or limited resource availability during your project.
  3. Budget Constraints: Another possibility is that the budget just isn’t sufficient to get to everything you have on your list.

Regardless of the constraint you’re facing, an MLP can help you realign priorities and expectations to compensate. But how do you go about evaluating your project for an MLP?

Need help with defining your MLP? Contact us.

How do you create an MLP?

When you’re able to parse the individual elements that are crucial to your website’s success into user stories and features, you’ll have effectively defined your project. But how do you actually go about separating the core building blocks that will comprise your MLP from the bells and whistles? It all starts with goals.

Goals

Chances are that you already have a set of goals describing what you’re hoping to achieve with the project. These should be as specific as possible (e.g. increase traffic) and ideally measurable (analytics). Without realistic, concrete goals, you set the project up for failure. For example, if your goal is to make people happy, chances are you’re going to have a hard time measuring whether you were successful. Establishing measurable goals will set the project up for success.

It’s not enough to know your goals; you have to be able to prioritize them. It’s simply not realistic for every goal to be top priority. Try to narrow your priorities down to no more than three goals. Goals in hand, where do we go from here in our quest to define an MLP?

Definition

Begin by thinking of all the factors that are needed for a user to accomplish a given goal. These could include anything from layouts to features to content. Start a list of these factors:

  1. What are the things a user sees?
  2. What copy does a user read?
  3. What actions is a user taking while they navigate through the site?

Everything you write down while asking these questions should be in the interest of one of your priority goals. If an item isn’t directly contributing to accomplishing a goal, then it should not be on the list. If you’re not a subject matter expert who will be directly contributing to the work, you should connect with your team to determine the specific work that needs to be carried out for each of the items you’ve identified. Additional refinement and further simplification may be needed to compensate for the constraint you’re up against.

By this point, you’ve probably realized that defining the MLP is a difficult task. The choices will be tough, and ultimately everyone is not going to get their way. What’s important is that the work you do strives to meet the goals you’ve set. This sometimes means detaching personal wants from the needs of the company. If you can tie the work back to this core philosophy, you’ll always have a strong direction for your product.

Time to get to work!

All done? Congratulations! You’ve now defined your MLP, and you’re off to the races. Best of luck on the journey of building out your minimum lovable product.

May 08 2018

Over the past few months, Four Kitchens has worked together with the Public Radio International (PRI) team to build a robust API in PRI’s Drupal 7 site, and a modern, fresh frontend that consumes that API. This project’s goal was to launch a new homepage in the new frontend. PRI intends to re-build their entire frontend in this new structure and Four Kitchens has laid the groundwork for this endeavor. The site went live successfully, with a noticeable improvement in load time and performance. Overall load time performance increased by 40% with first-byte time down to less than 0.5 seconds. The results of the PRI team’s efforts can be viewed at PRI.org.

PRI is a global non-profit media company focused on the intersection of journalism and engagement to effect positive change in people’s lives. PRI’s mission is to serve audiences as a distinctive content source for information, insights and cultural experiences essential to living in our diverse, interconnected world.

Overall load time performance increased by 40% with first-byte time down to less than 0.5 seconds.

Four Kitchens and PRI approached this project with two technical goals. The first was to design and build a full-featured REST API in PRI’s existing Drupal 7 application. We used RESTful, a Drupal module for building APIs, to create a JSON API-compliant API.

Our second technical goal was to create a robust frontend backed by the new API. To achieve that goal, we used React to create component-based user interfaces and styled them using the CSS Modules pattern. This work was done in a library of components, in which we used Storybook to demonstrate and test the components. We then pulled these components into a Next.js-based application, which communicates with the API, parses incoming data, and uses that data to populate component properties and generate full pages. Both the component library and the Next.js-based application used Jest and Enzyme heavily to create thorough, robust tests.

A round of well-deserved kudos to the PRI team: Technical Project Manager Suzie Nieman managed this project from start to finish, facilitating estimations that led the team to success. Senior JavaScript Engineer Patrick Coffey provided keen technical leadership as well as deep architectural knowledge to all facets of the project, keeping the team unblocked and motivated. Engineer James Todd brought his Drupal and JavaScript expertise to the table, architecting and building major portions of PRI’s new API. Senior Frontend Engineer Evan Willhite brought his wealth of frontend knowledge to build a robust collection of elegant components in React and JavaScript. Architect David Diers created mechanisms for managing PRI’s API documentation that can be used in future projects.

Special thanks to Patrick Coffey and Suzie Nieman for their contributions to this launch announcement. 


Dec 22 2017

Designers mapping out a website.

So your site isn’t working the way you want it to. Maybe it’s sluggish, or you’re not seeing the conversions you want, or customers are complaining. Before you drop a huge chunk of your budget on a complete rebuild, consider that there might be a simpler (and more affordable) solution to your website woes.

We see a lot of Drupal 7 and WordPress websites here at Kanopi Studios, and we often discover that it’s more cost-effective for our clients to simply update their sites rather than rebuilding them. Making targeted updates can allow you to focus on addressing a few key issues, while still leveraging the investment of time, energy and funds that went into your site’s foundation.

In this series, we’ll look at three key topics to consider:

1. How do you know when it’s time for a change?
2. Is your website optimally organized and designed to be user-friendly?
3. How strong is your technical foundation?

How do I know it’s time for a change?

Do any of these problems sound familiar?

  • Low conversion rates
  • Site pages take more than 3 seconds to load
  • Site doesn’t work well on mobile or other devices
  • Updating content is a difficult and frustrating process
  • Users struggle to find what they need on the site or have shared negative feedback
  • Site crashes when updating
  • Too many bugs
  • Building new features is difficult or may not even be possible
  • Site is not loading on https and triggers security warnings

If your answer to any of these is yes, it’s time to take action.

But first … is it really that important for me to address these issues?

Yes! A website that isn’t working optimally can dramatically affect your bottom line. An out-of-date or poorly designed website can:

  • Damage your credibility. If your website loads slowly, is crowded with clutter or is just plain not working, you are sending the message that your company is unprofessional.
  • Make you appear out of touch. A dated website tells your customers you are behind the technological times, or worse – you don’t care enough to stay up-to-date.
  • Cost you customers. Every customer who leaves your site in frustration due to broken links, complex forms, slow pages or confusing navigation is a customer you won’t get back. If your competitors offer similar services and have a stronger website experience, your loss will be their gain.

Decision time. If you want to avoid the damage that a dated website can cause, you’ll need to either rebuild your site or update it. If you’re ready to take action, we can help you find the best and most cost-effective approach.

There are two primary things to consider when maximizing your site’s ROI: your users’ needs and the technology that drives your site. If you can identify and fix problems in both of these categories, you can most likely avoid a costly rebuild.

Venn diagram showing optimum website health at the intersection of smart user experience and strong tech foundation.


Next, we’ll dive a bit deeper into tips to help you level up your user experience and update your website technology without starting over from scratch. Consider it the non-surgical, diagnostic approach to improving your website experience right where it needs it the most. 
Dec 22 2017

Website developers considering code.

Now that you’ve considered your user experience, there are a number of possible technical fixes that might help resolve your website problems.

What version of Drupal or WordPress are you using?

  • WordPress 2, while old, may or may not require a rebuild. You might be able to get by with updating and refactoring.
  • If you’re using Drupal 7 or WordPress 3, a rebuild is likely not needed. 
  • However, if you are on Drupal 6, it is at the end of its life, which may make rebuilding more cost-effective and viable for the long term.

Does your site use a lot of custom code?

If so, what does that code do, and are you still using that functionality? Look for ways to streamline where possible.

Is your site’s code a nightmare?

Did you use a professional firm with a North American team? An offshore team? A freelance developer? Or an internal employee who no longer works at your company? It’s a good idea to get the code reviewed so that you can determine its quality and understand whether it will be easy to update or if you’d be better off starting from scratch. Contact Kanopi for a low-cost assessment.

Are you up to date with the latest security patches and updates?

Lapses can expose the site to hacks and backdoors. Often just updating your site and modules/plugins can solve many issues.

Want to learn more about how we can help you understand every aspect of your site and determine whether you need to rebuild or update to achieve your goals? Contact us to book a free 15-minute consultation.

Nov 30 2017

Docker and Vagrant logos

If you work on multiple projects at once, or need to collaborate with other developers (as many of us do), then getting your development environment up and running quickly can be crucial to your ability to make efficient progress.

For the past few years, the best tool to help you do that was Vagrant. Vagrant manages virtual machines. One of its greatest features is that most of the configuration can happen in a Vagrantfile, which can then be committed to your project. This allows developers to easily clone a project and get a development environment up and running without any special configuration.

Now Docker is the new kid on the playground. Docker provides the ability to have thin containers that focus on a specific service, whether that’s MySQL, Nginx, Apache, or testing applications like Behat and Selenium. So now we have smaller containers, without the same overhead as a traditional virtual machine.

Sounds great, right? Well, yes, but now your existing tools may need to interact with Docker. Or maybe you’ve run into a need for Docker and Vagrant to co-exist with each other. The good news is that there is a solid way of making this happen!

In this post I’ll walk you through installing Docksal and setting it up so that Docker can work side by side with Vagrant. All of the following steps have been tested on macOS using the command line.

Installing and Configuring

We’ll start with the basics.

Step 1: Installing Docksal

The first step is making sure you install Docksal. To do this, you can use the handy one-liner below.

curl -fsSL get.docksal.io | sh

This command will install the Docksal command fin and, if needed, will install VirtualBox. That means there’s no need to go out and install Docker ahead of time. Note: If you already have Vagrant and VirtualBox installed, it may be best to shut down all VMs first, as the installation can sometimes hang.

Step 2: Create the Projects Folder to House Development

Next, we have to configure the directory that the Docksal VM mounts for use with Docker. By default, Docksal will attempt to mount just the /Users directory. The problem with this is that if you have a Vagrant VM mounted anywhere within the same folder hierarchy, it will cause an error. So, you’ll need to tell Docksal to mount a folder deeper within the structure that isn’t already being mounted.

mkdir -p ~/projects/docksal

For this example, we will place a folder within our user’s home directory labeled projects. Sometimes this folder will already exist. If so, you could just change into that directory.

Create a Docksal directory to house all of the Docksal projects. The name of this folder is arbitrary. For this example we will use a simple name. This folder’s main purpose is to hold all of your Docksal projects. This is also the data that will get mounted to your projects when they are started.

Step 3: Configuring Mounted Path

Once we have created the folder hierarchy for our projects, we have to tell Docksal which folder to mount into the VM. To do that, add the following line to the global docksal.env file, located at ~/.docksal/docksal.env:

DOCKSAL_NFS_PATH=~/projects/docksal

To speed up this process, use the following one-line command:

echo "DOCKSAL_NFS_PATH=~/projects/docksal" >> ~/.docksal/docksal.env

Step 4: Start Virtual Machine

After we’ve added the DOCKSAL_NFS_PATH line, it’s time to start the VM. Running the vm start command will make sure that the VM is running. The following command can be run from any folder in a terminal window.

fin vm start

It should produce a response similar to this:

Starting "docksal"...
(docksal) Check network to re-create if needed...
(docksal) Waiting for an IP...
Machine "docksal" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Enabling automatic *.docksal DNS resolver...
Clearing DNS cache...
Configuring NFS shares...
NFS shares are already configured
Mounting NFS shares...
Starting nfs client utilities.
Mounting local /Users/example/project/docksal/ to /Users/example/project/docksal/
Importing ssh keys...
Identity added: id_rsa (id_rsa)

If you happen to get the following message:

Machine "docksal" is already running.

then a restart may be necessary, which can be done using this command:

fin vm restart

Upon a successful restart, you should see a similar response:

Stopping "docksal"...
Machine "docksal" was stopped.
Starting "docksal"...
(docksal) Check network to re-create if needed...
(docksal) Waiting for an IP...
Machine "docksal" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Enabling automatic *.docksal DNS resolver...
Clearing DNS cache...
Configuring NFS shares...
NFS shares are already configured
Mounting NFS shares...
Starting nfs client utilities.
Mounting local /Users/example/project/docksal/ to /Users/example/project/docksal/
Importing ssh keys...
Identity added: id_rsa (id_rsa)

Want to learn more? Contact us.

Testing Configuration

Step 1: Start Docksal Setup

Now comes the fun part where we get to test our new configuration. Was it successful? Let’s see if our work has paid off and get our first Docksal project up and running.

Start by navigating to the project folder that was created in the previous steps.

cd ~/projects/docksal

Then we will clone a basic Drupal 8 project that has Docksal configured,

git clone https://github.com/kanopi/drupal8-composer-docksal drupal8

and change into that project we just downloaded:

cd drupal8

Now, initialize the project:

fin init

If you previously had Docksal installed and the following error appears on your screen,

Minimal fin version required is: 1.22.0
Please run fin update and try again

then run the update command for the latest version of Docksal:

fin update

In this project, we have a basic initialize command that will use Composer to download all of the libraries. Don’t have Composer? Don’t worry, Composer will get installed in the container. Drush then runs the site-install command.

Want to know if this command worked properly? If you got results like this, great!

Step 1 Initializing stack...
Removing containers...
Removing drupal8_web_1 ... done
Removing drupal8_db_1 ... done
Removing drupal8_cli_1 ... done
Removing network drupal8_default
Removing volume drupal8_project_root
Volume docksal_ssh_agent is external, skipping
Starting services...
Creating network "drupal8_default" with the default driver
Creating volume "drupal8_project_root" with local driver
Creating drupal8_cli_1 ...
Creating drupal8_cli_1
Creating drupal8_db_1 ...
Creating drupal8_cli_1 ... done
Creating drupal8_db_1 ... done
Creating drupal8_web_1 ... done
Waiting for drupal8_cli_1 to become ready...
Connected vhost-proxy to "drupal8_default" network.
Waiting 10s for MySQL to initialize...
Step 2 Initializing site...
Making site directory writable...
/var/www/docroot/sites/default/settings.local.php already in place.
You are about to DROP all tables in your 'default' database. Do you want to continue? (y/n): y
Starting Drupal installation. This takes a while. Consider using the --notify global option. Installation complete. User name: admin User password: 7yDUeUyVvH
Congratulations, you installed Drupal!
real 0m22.527s
user 0m6.640s
sys 0m2.980s

Drum roll… Open a browser to http://drupal8.docksal and you should see a freshly installed Drupal 8 site.

Step 2: Confirming Vagrant is Intact

For this step, we won’t be able to guide you through the process since all projects are different. The easiest way to confirm is to navigate to one of your Vagrant projects, then stop and restart the project.

vagrant halt
vagrant up

Running this should not cause any issues with mounting the project, and should start your Vagrant project.

Summary

To summarize, we completed a basic Docksal install. The one-line installer usually suffices, unless you are also running Vagrant. In that case, we modify the folder that mounts to the Docksal VM. The reason for this is that NFS exports can’t overlap. By default, Docksal uses /Users, which can cause an issue, as most, if not all, of the projects a developer runs in Vagrant usually live in that directory.

What this also means is that all Docksal projects will have to live within the DOCKSAL_NFS_PATH folder, because Docksal’s minimal VM layer on VirtualBox only mounts that one folder, whereas Vagrant projects mount individual projects into their respective VMs.

We also ran a test to make sure we could get a basic Drupal 8 installation. This provides a good starting point when testing development within the Docksal system.

Nov 13 2017

Business people working on project in office

By now you have likely heard quite a bit about Drupal 8. But do you have a good sense of when and why to make the switch?

Switching to Drupal 8 will make new features and functionality available for your site and help you stay current with the latest best practices. But it will take time and effort, and may mean a bit of refactoring as well.

What’s new in Drupal 8?

Drupal 8 adds a number of helpful features into core, making it possible to build fully-featured websites out of the box. Drupal 8 takes care of basic needs, so contributed modules can be reserved for specialized functionality.

There are more than 200 new features in Drupal 8, including built-in support for multilingual and mobile-friendly sites, a simplified content authoring experience with in-place editing, native web services, Views integration into core, stronger HTML5 support and much more.

In addition, Drupal 8 is written in well-structured, object-oriented PHP based on the Symfony framework. And it leverages the Twig templating system, making design patterns simpler, faster, more logical and more secure.
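
To give a flavor of that object-oriented style, here is a minimal, hypothetical page controller (the module and class names are made up for illustration; a real module would wire this up through a routing YAML file):

<?php

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;

/**
 * A minimal Drupal 8 page controller. Pages are methods on controller
 * classes mapped through routing files, rather than hook_menu().
 */
class HelloController extends ControllerBase {

  /**
   * Returns a render array; Twig takes care of the final HTML output.
   */
  public function content() {
    return [
      '#markup' => $this->t('Hello from Drupal 8!'),
    ];
  }

}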

Once you are on Drupal 8, you can easily take advantage of minor releases that will add powerful functionality on a predictable schedule, without requiring you to reinvent your site. And the focus on backwards compatibility beginning with Drupal 9 means upgrading between major versions won’t be a massive headache like it has been with past versions of Drupal.

Time to switch?

There are a number of factors to consider when deciding to switch to Drupal 8. In general, the sooner you can bring your site up to the most up-to-date standards, the better. But it’s also important to consider your objectives when deciding on the best time for an upgrade.

If the functionality in Drupal 8 would revolutionize the way you do business, or you are considering rolling out significant new functionality, now might be a good time to switch. But if your Drupal 7 site is running well and there aren’t any solid business reasons to make the switch, you may consider holding off until Drupal 9 becomes available.

To help clarify your decision, we’ve created a quiz to help you determine when it’s time to make the switch.

Feb 09 2017

Not sure who's to blame, but we have a new HTML validation method from GoDaddy. It is an improvement over the "no HTML validation at all" phase they went through, but it took me a while to make it work with Apache. The problem was the hidden dot-directory they ask you to put your validation code in: /.well-known/pki-validation/godaddy.html

In my case there were a couple of reasons why this was difficult:

  • I didn't know about the hidden directory (.) block in Apache.
  • In my case some domains run the whole site over HTTPS, so I needed to make the new rules co-exist with the old HTTPS redirection rules.
  • I have a mixture of hosting environments. For some sites I control Apache, so I could use Virtual Host configurations. But for others (like the ones running on Acquia) I need to create .htaccess rules.

The solution was much simpler than I anticipated, but quite difficult to debug. Finally I made it work for both environments.

I could have used the DNS ownership verification method, but in my case that would mean involving the people who own the domain. In my experience that takes longer, and it can become really involved when the owner doesn't know anything about DNS.

Using Virtual Host config (possible on self-hosted sites)


RewriteEngine  on
RewriteRule    "^/\.well-known/pki-validation/godaddy\.html/" "/godaddycode.txt" [PT]
RewriteRule    "^/\.well-known/pki-validation/godaddy\.html$" "/godaddycode.txt" [PT]
    

If the site is only running on HTTPS and I have a redirection rule, I need to exempt these URLs. The rules below will work together with the ones above:


RewriteCond %{REQUEST_URI} !=/.well-known/pki-validation/godaddy.html
RewriteCond %{REQUEST_URI} !=/.well-known/pki-validation/godaddy.html/
RewriteRule ^/?(.*)$ https://www.mydomain.com/$1 [R=permanent,L]
    

Using only .htaccess rules (and with no HTTPS redirection):


# GoDaddy verification rewrite rules
<IfModule mod_rewrite.c>
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html/" "/godaddycode.txt" [PT,L]
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html$" "/godaddycode.txt" [PT,L]
</IfModule>
    

Using .htaccess rules when site is only running over HTTPS:


# GoDaddy with HTTPS redirection rules
<IfModule mod_rewrite.c>
  # GoDaddy PassThrough rules
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html/" "/godaddycode.txt" [PT,L]
  RewriteRule    "^\.well-known/pki-validation/godaddy\.html$" "/godaddycode.txt" [PT,L]
  # Set "protossl" to "s" if we were accessed via https://.  This is used later
  # if you enable "www." stripping or enforcement, in order to ensure that
  # you don't bounce between http and https.
  RewriteRule ^ - [E=protossl]
  RewriteCond %{HTTPS} on
  RewriteRule ^ - [E=protossl:s]
  # Redirect HTTP to HTTPS
  RewriteCond %{HTTP:X-Forwarded-Proto} !=https
  RewriteCond %{REQUEST_URI} !=/godaddycode.txt
  RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
    

And to make this work on Acquia, I had to borrow some rules from the D8 .htaccess.

So I replaced these sections/rules:


# Protect files and directories from prying eyes (D7)
<FilesMatch "\.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)(|~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig|\.save)$">
  Order allow,deny
</FilesMatch>

# Block access to "hidden" directories whose names begin with a period... (D7)
RewriteRule "(^|/)\." - [F]
    

With these D8 sections/rules:


# Protect files and directories from prying eyes (D8)
<FilesMatch "\.(engine|inc|install|make|module|profile|po|sh|.*sql|theme|twig|tpl(\.php)?|xtmpl|yml)(~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\.(?!well-known).*|Entries.*|Repository|Root|Tag|Template|composer\.(json|lock))$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig|\.save)$">
  <IfModule mod_authz_core.c>
    Require all denied
  </IfModule>
  <IfModule !mod_authz_core.c>
    Order allow,deny
  </IfModule>
</FilesMatch>

# Block access to "hidden" directories whose names begin with a period... (D8)
RewriteRule "(^|/)\.(?!well-known)" - [F]
    

I hope this helps someone else. I know it took me some time to figure out, and I couldn't find a specific blog post about it.

Note: Just to be super clear, you should put the code given by GoDaddy into a file called godaddycode.txt in your docroot directory.

Oct 27 2016

We have been using Memcached as the standard to speed up our Drupal internal caching. Shall we now use Redis instead?

It seems there is a new trend in the Drupal community: Redis. What made me pay attention to it is that it seems to be gaining ground at Memcached's expense.

In a nutshell:
  • Memcached does only one thing, and it does it well. Stores key/value pairs in memory.
  • Now, Redis has about the same performance as Memcached for storing/accessing key/value pairs, but it does much more.

This alone makes the case for using Redis if you are developing a new platform. Granted, it is more involved to learn, but you can do much more with it. It also has a solid, scalable architecture.

In my book the recipe goes:
  • If you are already using memcached, stick with it. I don't see a reason to change right away.
  • If you have a provider that restricts you to one of them, just follow their lead. It's better to have a key/value server for caching than to use the main database. Whether it's Memcached or Redis, use whatever they provide.
  • If you are not using either of them, just choose one and use it. Both will do the trick.

For Drupal 7 both modules are stable: Memcache and Redis. I have some details on how to configure the Memcache module in this Drupal performance presentation (in Spanish), but there are lots of articles on how to do it online.
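
For reference, wiring either backend into Drupal 7 takes only a few lines in settings.php. A sketch of both configurations (use one or the other, not both), assuming the modules live under sites/all/modules and the servers run on their default local ports:

<?php

// Memcache module (Drupal 7): route Drupal's cache to Memcached.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// The form cache should stay in the database.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');

// Redis module (Drupal 7): the equivalent setup.
$conf['redis_client_interface'] = 'PhpRedis';
$conf['redis_client_host'] = '127.0.0.1';
$conf['cache_backends'][] = 'sites/all/modules/redis/redis.autoload.inc';
$conf['cache_default_class'] = 'Redis_Cache';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';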

If you are using Drupal 8, or you are starting a new development that could benefit from an in-memory key/value cache and store, I would look into using Redis. There are alpha versions of the Drupal 8 integration modules for both of them.

An important fact about major players: Acquia only supports Memcached. Pantheon and Platform.sh only support Redis.

If you want to know more about Redis I can recommend:
May 03 2016

A walk through my first Platform experience with steps on how to avoid problems if you need to upload your local version into it. Lots of hints to get it right the first time!

If you are in a hurry and only need a recipe, please head to the technical part of the article. But I would like to start by sharing a bit of my experience, because you might still be deciding whether Platform is for you.

I decided to try Platform because a friend of mine needed a site. For several reasons I didn't want to host it on my personal server, but I didn't want to run a server for him either. I wanted to forget about maintaining the server, keeping it secure, or upgrading it.

So I started thinking about options for small sites:

The list is not a fair comparison in terms of what's being offered; the offerings are quite different. But I had used all of these before for different projects. The only one left to try was Platform.

So, going over the above list with a fine-tooth comb:

  • I discarded RH Openshift because I wanted a solution, not just a place to run an application. The other options offer caching servers too, which speeds things up significantly.
  • Aberdeen: I used it before and it works, but it is focused on the European market.
  • Acquia is easy to use and there was a comparable option, but the price was still higher.
  • Pantheon: I've used it and like it, but the pricing was a bit risky. The 25 USD option had traffic limits, and the risk of needing the 100 USD option was real.

Given the very good things I'd been hearing about Platform, the price/spec comparison, and my appetite for new things, I decided to give it a try.

The Experience

Platform does not have a free option; the only way to try it is to pay for a site. So it was a bit of a leap off a cliff, and I even had a couple of rough moments, while fighting to get the site online, when I thought it wasn't the right choice. I must say I'm more used to sysadmin-for-dummies interfaces, and this is a programmer-oriented environment. In any case, as with the others, once you learn its quirks the offering balances out. Not having to deal with sysadmin work is great: you avoid charging your clients for support and OS upgrades, and you can focus on Drupal alone. They provide lots of documentation, which is quite useful, but somehow I failed to get things running using just that, so I had to lean a bit on their online support. The response time was reasonable, and the support was really good.

I would compare their approach to RH Openshift's platform offering. The price is a bit higher, but the final service is better. Along with hosting Drupal (or any other PHP app), they provide Solr search, a Redis cache, and a CDN caching network. So it is a more complete PaaS solution. I haven't run a production Drupal site on RH Openshift, but given my experience I wouldn't risk running Drupal without a proper caching mechanism.

Any side effects? Well... Given that I'm more familiar with the Acquia and Pantheon services, I found the wait after every git push a bit annoying. Each time you push, your application needs to be packaged, configured, and delivered, so there is a delay between pushing something and seeing it online. This happens with other platforms too, but I found it a bit more noticeable here. I guess Platform targets programmers as their main customers, so having a local environment would be a plus.

Overall I found the experience positive, but a bit involved. If you want an easy solution I must recommend against it, but hopefully this tutorial and all their documentation will help you cross the bridge. And keep in mind that's only the first step; after that, it is just a new way of launching a Drupal site.

I haven't launched the site yet, so I have yet to see how it performs. But given the caching approach and the different services provided, I think it will be OK.

The How

I guess things are easier if you don't have an existing process and follow their workflow 100%. Since I had prior experience, I decided to build the site locally first and then publish it online. How hard could it be? Well... not hard, but it has its quirks.

So here are the steps if you are pushing your local Drupal 7 site to Platform:

  1. The first thing is to add your SSH key and start a Git repo (or upload your current Git repo). More on the repo in the last steps, but you need to get your SSH key into Platform.sh first to gain access using Git, SSH, etc.
  2. Then you should write your own ".platform.app.yaml" file; here is my example:
    
    name: drupal
    type: php:5.6
    build:
        flavor: drupal
    relationships:
        database: "database:mysql"
        solr: "search:solr"
        redis: "cache:redis"
    web:
        document_root: "/"
        passthru: "/index.php"
        whitelist:
          # robots.txt.
          - /robots\.txt$    
          # CSS and Javascript.
          - \.css$
          - \.js$
          # image/* types.
          - \.gif$
          - \.jpe?g$
          - \.png$
          - \.ico$
          - \.bmp$
          # fonts types.
          - \.ttf$  
    disk: 2048
    mounts:
        "/public/sites/default/files": "shared:files/files"
        "/tmp": "shared:files/tmp"
        "/private": "shared:files/private"
    hooks:
        # We run deploy hook after your application has been deployed and started.
        deploy: |
            cd public
            drush -y updatedb
    crons:
        drupal:
            spec: "*/20 * * * *"
            cmd: "cd public ; drush core-cron"
        
    
  3. We also need to add MySQL service (among others) to our application. So we need to add another file to the repository: ".platform/services.yaml":
    
    database:
      type: mysql:5.5
      disk: 512
    
    search:
        type: solr:4.10
        disk: 512
    
    cache:
        type: redis:2.8
        
    

    This is actually one of the good things about using Platform.sh, you can have your Drupal with SOLR+Redis at the same cost. When using search on your site SOLR does really help, and when configured in Drupal Redis can speed up your site's logged in response time (it is similar to Memcache).

  4. We also need to add the domain. This is done by adding the file ".platform/routes.yaml":
    
    "http://www.{default}/":
        type: upstream
        upstream: "drupal:php"
    
    "http://{default}/":
        type: redirect
        to: "http://www.{default}/"
        
    
  5. Now you need to configure Drupal to use the Platform DB service. You do this by customizing your settings.php; Platform will automatically create a "settings.local.php" file which will connect you to the DB. So replace your "settings.php" with:
    
    <?php
    $update_free_access = FALSE;
    
    // Platform.sh generates settings.local.php with the environment's
    // database credentials; include it when it exists.
    $local_settings = dirname(__FILE__) . '/settings.local.php';
    if (file_exists($local_settings)) {
      require_once($local_settings);
    }
        
    
  6. And we are ready to start copying things across. I will begin with the DB copy. There are other options, but I will explain using an SSH connection:
    
    scp mysitesdb.dump.gz [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh:/app/tmp
    ssh [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh
    zcat tmp/mysitesdb.dump.gz | mysql -h database.internal main
    rm tmp/mysitesdb.dump.gz
        
    
  7. To get your code into the platform git repo you'll need to create/push it:
    
    cd mysite/code
    git init
    git remote add platform [PROJECT-ID]@git.[REGION].platform.sh:[PROJECT-ID].git
    git add --all
    git commit -m "Initial commit of My Project"
    git push
        
    

    It is important to know that each git push triggers a build of your app on Platform, so the push will take longer than you are used to. It won't only push, but will also build the whole application for you on each push.

  8. To import the files you can use rsync:
    
    rsync -r my/local/files/. [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh:public/sites/default/files/
        
    
  9. Finally we will need to rebuild the site registry:
    
    ssh [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh
    drush dl registry_rebuild --destination=/app/tmp
    sed -i 's/, define_drupal_root()/, '"'"'\/app\/public'"'"'/' /app/tmp/registry_rebuild/registry_rebuild.php
    cd /app/public
    php ../tmp/registry_rebuild/registry_rebuild.php
        
    

Following these steps I ended up with my local version of the site running on Platform. The process for Drupal 8 is quite similar; you definitely don't need the final step. Also, I recall a simpler setup for settings.php, but it doesn't differ much from the above.

The main gotchas for me were adding normal files (CSS, JS & images) to the ".platform.app.yaml" file within the "whitelist" section, and figuring out how to run from the proper DB (by creating the custom settings.php shared above). In both situations support took a bit of time, but got me onto the right track.

There is a lot of documentation, but I would like to flag the "Getting started for the impatient" guide because it is in line with this blog post. It didn't solve all my issues, but it is a nice brief of the whole thing.

Hope this helps someone out there. I know I looked for prior experiences and found little information beyond the Platform.sh docs.

May 02 2016
May 02

A walk through my first Platform experience with steps on how to avoid problems if you need to upload your local version into it. Lots of hints to get it right the first time!

If you are in a hurry and only need a recipe, please head to the technical part of the article, but I would like to start sharing a bit of my experience first because you might be still deciding if Platform is for you.

I decided to try Platform because a friend of mine needed a site. Do to several reasons I didn't want to host it on my personal server. But I didn't want to run a server for him either. I wanted to forget about maintaining the server, keeping it secure or do upgrades to it.

So I started thinking about options for small sites:

The list is not fair in terms of what's being offered, the offer is quite different. But I used all of these before for different projects. Only one left to try was Platform.

So going over above list with a fine cone:

  • I discarded RH Openshift because I wanted a solution, not just running an application. Others option offer caching servers too which does speed up things significantly.
  • Aberdeen: Used it before, works, but focused on the Europe market.
  • Acquia is easy to use, there was a comparable option, but price was still higher.
  • Pantheon: I've used it and like it, but the pricing was a bit risky. 25USD option was had traffic limits, the risk of needing the 100USD option was real.

Since I've been hearing very good things about Platform, the comparison on price/specs, and my appetite for new things, I decided to give it a try.

The Experience

Platform do not have a free option, only way to try/use is to pay for a site. So it was a bit of a jump from a cliff. Even had a couple of rough moments while fighting to get it online when I thought it wasn't the right choice. I must say I'm more used to sysadmin for dummies interfaces, this is a programmer oriented environment. In any case as with the others once you learn its perks, the offer balance out. Not having to deal with sysadmin stuff is great to avoid charging for support and OS upgrades to your clients and focusing on Drupal alone. They provide lots of documentation, which is quite useful. But some how I failed to get things running using just that, so I had to lean a bit on their online support. The time to respond was reasonable, and support was really good.

I would compare their approach to RH Openshift platform offering. Price is a bit higher, but final service is better. Along with hosting Drupal (or any other PHP app) they provide Solr search, Redis cache and a CDN caching network). So it is a more complete PaaS solution. I haven't run a production Drupal on RH Openshift, but given my experience I wouldn't risk running Drupal without a proper caching mechanism.

Any side effects? Well... Given I'm more familiar with Acquia or Pantheon service, I found a bit annoying the time you have to wait after every git push... Each time you do a push your application needs to be packaged, configured and delivered. So you end up with a bit of delay between you push something and are able to see it online. This happens with other platforms, but I found it a bit more noticeable. I guess Platform targets programmers as their main customer, so having a local environment would be a plus.

Overall I found the experience positive, but a bit involved. If you want an easy solution I must recommend against it, but hopefully this tutorial and all their documentation will help you to cross over the bridge. And keep in mind that's only the first step, then it is just a new way of launching a site in Drupal.

I haven't launched the site yet, so I yet to see performance of it. But given the caching approach and different services provided I think it will be ok.

The How

I guess things are easier if you don't have a process before and follow their workflow 100%. Since I had prior experience I decided to build the site locally first, and then publish online. How hard it could be? Well... Not hard, but has its perks.

So here are the steps if you are pushing your local Drupal 7 site to Platform:

  1. First thing is to add your key and start a GIT repo (or upload your current GIT repo). More on the repo on last steps, but you need to get your SSH key into Platform.sh as the first step to gain access to it using GIT, SSH, etc...
  2. Then you should write your own ".platform.app.yaml" file, here is my example:
    
    name: drupal
    type: php:5.6
    build:
        flavor: drupal
    relationships:
        database: "database:mysql"
        solr: "search:solr"
        redis: "cache:redis"
    web:
        document_root: "/"
        passthru: "/index.php"
        whitelist:
          # robots.txt.
          - /robots\.txt$    
          # CSS and Javascript.
          - \.css$
          - \.js$
          # image/* types.
          - \.gif$
          - \.jpe?g$
          - \.png$
          - \.ico$
          - \.bmp$
          # fonts types.
          - \.ttf$  
    disk: 2048
    mounts:
        "/public/sites/default/files": "shared:files/files"
        "/tmp": "shared:files/tmp"
        "/private": "shared:files/private"
    hooks:
        # We run deploy hook after your application has been deployed and started.
        deploy: |
            cd public
            drush -y updatedb
    crons:
        drupal:
            spec: "*/20 * * * *"
            cmd: "cd public ; drush core-cron"
        
    
  3. We also need to add MySQL service (among others) to our application. So we need to add another file to the repository: ".platform/services.yaml":
    
    database:
        type: mysql:5.5
        disk: 512
    
    search:
        type: solr:4.10
        disk: 512
    
    cache:
        type: redis:2.8
        
    

    This is actually one of the good things about using Platform.sh: you get your Drupal site with Solr and Redis at the same cost. Solr really helps when your site uses search, and once configured in Drupal, Redis (similar to Memcache) can speed up response times for logged-in users. (A settings.php sketch for the Redis part follows after these steps.)

  4. We also need to add the domain, this is done by adding the file ".platform/routes.yaml":
    
    "http://www.{default}/":
        type: upstream
        upstream: "drupal:php"
    
    "http://{default}/":
        type: redirect
        to: "http://www.{default}/"
        
    
  5. Now you need to configure Drupal to use Platform's DB service. You do this by customizing your settings.php: Platform automatically creates a "settings.local.php" file which connects you to the DB. So replace your "settings.php" with:
    
    <?php
    $update_free_access = FALSE;
    
    $local_settings = dirname(__FILE__) . '/settings.local.php';
    if (file_exists($local_settings)) {
      require_once($local_settings);
    }
        
    
  6. And we are ready to start copying things across. I will begin with the DB copy. There are other options, but I will explain using an SSH connection:
    
    scp mysitesdb.dump.gz [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh:/app/tmp
    ssh [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh
    zcat tmp/mysitesdb.dump.gz | mysql -h database.internal main
    rm tmp/mysitesdb.dump.gz
        
    
  7. To get your code into the Platform git repo you'll need to create it (or reuse your existing repo) and push:
    
    cd mysite/code
    git init
    git remote add platform [PROJECT-ID]@git.[REGION].platform.sh:[PROJECT-ID].git
    git add --all
    git commit -m "Initial commit of My Project"
    git push platform master
        
    

    It is important to know that each git push triggers a build of your app on Platform, so the push will take longer than you are used to. It doesn't only push; it also builds the whole application for you on each push.

  8. To import the files you can use rsync:
    
    rsync -r my/local/files/. [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh:public/sites/default/files/
        
    
  9. Finally we will need to rebuild the site registry:
    
    ssh [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh
    drush dl registry_rebuild --destination=/app/tmp
    sed -i 's/, define_drupal_root()/, '"'"'\/app\/public'"'"'/' /app/tmp/registry_rebuild/registry_rebuild.php
    cd /app/public
    php ../tmp/registry_rebuild/registry_rebuild.php
        
    

Following these steps I ended up with my local version of the site running on Platform. The process for Drupal 8 is quite similar; you definitely don't need the final step. I also recall a simpler setup for settings.php, but it doesn't differ much from the above.
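
A side note on the Redis service from step 3: once the relationship exists, pointing Drupal 7 at it is typically a handful of lines in settings.php. Below is an untested sketch based on the contrib Redis module's documented settings; the module path and the redis.internal host name are assumptions you should adjust to your own setup:

<?php
// Assumed settings for the contrib D7 redis module; adjust the module
// path and the host to your installation.
$conf['redis_client_interface'] = 'PhpRedis';
$conf['redis_client_host'] = 'redis.internal';
$conf['cache_backends'][] = 'sites/all/modules/redis/redis.autoload.inc';
$conf['cache_default_class'] = 'Redis_Cache';
// Keep the form cache in the database so cached forms stay valid.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';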

The main gotchas for me were adding normal files (CSS, JS & images) to the "whitelist" section of the ".platform.app.yaml" file, and it also took me a while to get the site running from the proper DB (by creating the custom settings.php shared above). In both situations support took a bit of time, but got me onto the right track.

There is a lot of documentation, but I would like to flag "Getting started for the impatient" because it is in line with this blog post. It didn't solve all my issues, but it is a nice summary of the whole thing.

Hope this helps someone out there. I know I looked for prior experiences and found little information beyond the Platform.sh docs.

Sep 19 2012
Sep 19

While we were working on one of our upcoming projects, a new website for Stichting tegen Kanker, we had to integrate the Apache Solr module. We needed Solr for its faceted search capabilities. In combination with the FacetAPI module, which allows you to easily configure a block or a pane with facet links, we created a page displaying search results containing contact type content and a facets block on the left hand side to narrow down those results.

One of the struggles with FacetAPI is the URLs of the individual facets. While Drupal turns the ugly GET 'q' parameter into clean URLs, FacetAPI just concatenates any extra query parameters, which leads to Real Ugly Paths. The FacetAPI Pretty Paths module tries to change that by rewriting them into human-friendly URLs.

Our challenge involved altering the paths generated by the facets, but with a slight twist.

Due to the project's architecture, we were forced to replace the full view mode of nodes of the bundle type "contact" with a single search result based on the nid of the visited node. This was a cheap way to avoid duplicating functionality and wasting precious time. We used the CTools custom page manager to take over the node/% page and added a variant triggered by a selection rule based on the bundle type. The variant itself doesn't use the Panels renderer but redirects the visitor to the Solr page, passing the nid as an extra query parameter. This resulted in a path like this: /contacts?contact=1234.

With this snippet, the contact query parameter is passed to Solr which yields the exact result we need.

/**
 * Implements hook_apachesolr_query_alter().
 */
function myproject_apachesolr_query_alter($query) {
  if (!empty($_GET['contact'])) {
    $query->addFilter('entity_id', $_GET['contact']);
  }
}

The result page with our single search result still contains facets in a sidebar, and the URLs of those facets looked like this: /contacts?contact=1234&f[0]=im_field_myfield..... Now we faced a new problem: the ?contact=1234 part conflicted with the rest of the search query. This resulted in an empty result page whenever our single search result, node 1234, didn't match the rest of the search query! So we had to alter the paths of the individual facets to make them look like this: /contacts?f[0]=im_field_myfield.

This is how I approached the problem.

If you look carefully through the API documentation, you won't find any hooks that allow you to directly alter the URLs of the facets, and gutting the FacetAPI module is quite daunting. I started looking for undocumented hooks, but quickly abandoned that approach. Then I realized that FacetAPI Pretty Paths actually does what we wanted: alter the paths of the facets to make them look, well, pretty! I just had to figure out how it worked and emulate its behaviour in our own module.

It turns out that most of the facet-generating functionality is contained in a set of adaptable, loosely coupled, extensible classes registered as CTools plugin handlers. Great! This means that I just had to extend the relevant class and override the right methods with our custom logic.

Facet URLs are generated by classes extending the abstract FacetapiUrlProcessor class. FacetapiUrlProcessorStandard implements the base class and already does all of the heavy lifting, so I decided to take it from there. I just had to create a new class, implement the right methods and register it as a plugin. In the folder of my custom module, I created a new folder plugins/facetapi containing a new file called url_processor_myproject.inc. This is my class:

<?php

/**
 * @file
 * A custom URL processor for cancer.
 */

/**
 * Extension of FacetapiUrlProcessor.
 */
class FacetapiUrlProcessorMyProject extends FacetapiUrlProcessorStandard {

  /**
   * Overrides FacetapiUrlProcessorStandard::normalizeParams().
   *
   * Strips the "q" and "page" variables from the params array.
   * Custom: strips the 'contact' variable from the params array too.
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
  }

}

I registered my new URL processor by implementing hook_facetapi_url_processors() in the myproject.module file.

/**
 * Implements hook_facetapi_url_processors().
 */
function myproject_facetapi_url_processors() {
  return array(
    'myproject' => array(
      'handler' => array(
        'label' => t('MyProject'),
        'class' => 'FacetapiUrlProcessorMyProject',
      ),
    ),
  );
}

I also included the .inc file in the myproject.info file:

files[] = plugins/facetapi/url_processor_myproject.inc

Now I had a new registered URL processor handler, but I still needed to hook it up with the correct Solr searcher, which FacetAPI relies on to generate facets. hook_facetapi_searcher_info_alter() allows you to override the searcher definition and tell the searcher to use your new custom URL processor rather than the standard one. This is the implementation in myproject.module:

/**
 * Implements hook_facetapi_searcher_info_alter().
 */
function myproject_facetapi_searcher_info_alter(array &$searcher_info) {
  foreach ($searcher_info as &$info) {
    $info['url processor'] = 'myproject';
  }
}

After clearing the cache, the correct path was generated for each facet. Great! Of course, the paths still don't look pretty and contain those way too visible and way too ugly query parameters. We could enable the FacetAPI Pretty Paths module, but since we implemented our own URL processor, FacetAPI Pretty Paths would cause a conflict: the searcher uses either one class or the other, not both. One way to solve this problem would be to extend the FacetapiUrlProcessorPrettyPaths class, since it is derived from the same FacetapiUrlProcessorStandard base class, and override its normalizeParams() method, as sketched below.
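
For completeness, here is a minimal, untested sketch of what that could look like, assuming FacetAPI Pretty Paths keeps the same normalizeParams() signature (the class name is hypothetical):

<?php

/**
 * Extension of FacetapiUrlProcessorPrettyPaths.
 */
class FacetapiUrlProcessorMyProjectPretty extends FacetapiUrlProcessorPrettyPaths {

  /**
   * Overrides normalizeParams() to also strip the 'contact' variable.
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    // Let Pretty Paths do its own normalization first.
    $params = parent::normalizeParams($params, $filter_key);
    // Then drop our custom query parameter.
    unset($params['contact']);
    return $params;
  }

}

You would still have to register this class with hook_facetapi_url_processors() and point the searcher at it, just like we did above.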

But that's another story.

Jun 29 2012
Jun 29

I got annoyed by the mess on my laptop. Three years in, its hard disk has turned into an archive of stuff I've worked on at some point or another, ranging from fully fledged projects we maintain to half-borked vanilla Drupal installations I once used to demo something. Over time, I changed employer a time or two and did some freelance stints. Each time, I had to adapt to new conventions, configurations, versioning tools and development setups.

In the end, the thing I *didn't* do well was managing my own development environment. I'm not talking about the technology I'm using (as it happens, I'm on OS X using MAMP) but the way, or lack thereof, I kept order in the data I manage. Inevitably, your development environment gets cluttered over time. If you're not very careful, you'll end up with stale databases, different versions of files, directories and dumps with no specific purpose, cryptic leftover configuration files... In short: chaos. Why? There's only so much time, and at the end of the day it's very easy to just wing it rather than clean up abandoned local projects. That's what happened to me.

So I set out to think about how to improve this for myself. I've started using a set of common conventions each time I set up a new project. Most of them I picked up at Krimson. I decided to share them; here they are:

Filesystem

  • Create a workspace folder, e.g. /home/you/workspace. All PHP files, applications, DB dumps, etc. should be stored here.
  • Group each project in its own folder, i.e. /home/you/workspace/myproject. Of course, you might work for different clients or employers. Just prefix your folders, e.g. colada_project and krimson_project.
  • Each project has four basic subfolders: www/ which contains your document root, db/ which contains database dumps, docs/ which contains your development information (notes,...) and patches/ with specific drupal patches you might need when setting up that project.

 

Webserver

  • Use virtual hosts. No exceptions. Putting each project in a subfolder of the document root so you can surf to http://localhost/myproject is bad practice, since such setups require extra configuration in .htaccess. When you go live, you might end up mucking about in your .htaccess. Or worse, discovering you have to change hardcoded URLs in your code if you were really careless!
  • Use a descriptive local domain name. You could go for "myproject.dev" or "myproject.lan". I use "netsensei.myproject" for extra clarity; the first part could also refer to the hostname of my machine.
  • Each vhost comes with its own configuration file. In my MAMP, I have a separate folder called sites/ which contains a vhost configuration per project.
  • Each configuration file is called netsensei.myproject.conf.
  • Use a working template file for your configuration. Just copy, paste and alter when you need to (on your own machine you shouldn't, unless really needed!). This way you'll avoid having to debug your configuration for the gazillionth time, e.g. because the path to your logfiles contains an error.
  • Same goes for your /etc/hosts file: keep it clean! I try to keep all the entries alphabetically ordered so I can easily find an entry in the list.

Of course, you don't have to work with /etc/hosts and configuration files. You could also do a one time setup of a local BIND DNS server and let Apache do the heavy lifting for you. Check out this article on how to set up a smarter MAMP.

Database

As a Drupal dev, I'm working with a MySQL setup 99.9% of the time, but these tips should serve for any DB system really...

  • Never, ever use the root user to connect to your database from your project. Not even on your development machine. Why? It's a bad habit and mistakes are easily made. Moreover, if something goes really wrong, you don't want other projects to get messed up too.
  • Use the same name for your database as for your user, e.g. DEV_MYPROJECT. Each user only has privileges granted on that one database.
  • Be consistent in how you name your user and database: always go for uppercase or lowercase. But don't intermingle. Keep things clear and simple.
  • I use the DEV_ prefix on my local machine. Why? When my PHPMyAdmin tool is open, I want to be pretty sure that's not a production database I just dropped.

 

Drush

You didn't install Drush? Shame! You should! Drush makes a developer's life a breeze!

  • Use Drush aliases for every project you start! An alias file allows you to load and bootstrap Drupal from the command line from any location.
  • Each file is called netsensei.myproject.alias.drushrc.php; a minimal example follows below.
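
To make that concrete, here is a minimal sketch of what such an alias file could contain; the root path and URI are hypothetical, and naming conventions vary slightly between Drush versions:

<?php

// netsensei.myproject.alias.drushrc.php -- a minimal sketch.
$aliases['myproject'] = array(
  // Path to the document root of the project.
  'root' => '/home/you/workspace/myproject/www',
  // The local domain name configured in the vhost.
  'uri' => 'netsensei.myproject',
);

With that in place, a command like drush @myproject status bootstraps the site from any directory.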

Those are a few conventions we try to live by. Of course, most of this is sheer repetition after a while. To avoid errors, we wanted to automate the process of setting up a project. A few years back, we worked with a set of shell scripts to get the job done. Over time, we converted our scripts to Drush commands. We couldn't share them because they were very Krimson-specific, locked into our own workflow. A few months back, we started using Git, and integrating support into our tools turned out to be quite the hack. So I ventured out and created my own Drush toolset called Rum. It does two things:

  • Setting up a new local project, starting from a vanilla Drupal core setup or from an existing project in a remote repository (Git or Subversion), honoring the above conventions.
  • Deleting the entire project, including the vhost configuration, the database and the reference in the hosts file.

This is strictly an aid for use in a development environment. Of course, this is just one way of doing things. These conventions might be the right tool when you're working on your own, but they can turn out to be counterproductive at the enterprise level, where individual developers have many more variables to take into account. There are other ways of setting up development environments which allow you to easily and safely sync. I highly recommend looking at Vagrant, which allows you to rapidly set up and tear down custom-tailored virtualized development environments per project. I would also recommend taking a look into the realm of DevOps, a fairly young field aiming to tear down the wall between developers and system operators, lowering the bar for deployment, maintenance, security and more in terms of efficiency and flexibility.

I presented Rum and my own setup as a BoF session at DrupalCamp Gent 2012 on May 26th. I've put my slides on Slideshare (Dutch).

Apr 20 2012
Apr 20

In a previous article, we already gave you a sneak peek of the upcoming changes for Display Suite. More than two months and a lot of coding later, the new branch is ready for testing. In this article we'll highlight the biggest changes.

Menu structure and discoverability

The main task of Display Suite is simple: configuring the display of your content. For new users, it took a while before they understood how to get that 2-column layout on their basic page, because of two major problems:

  • The main menu entry point at admin/structure/ds leads you to another overview screen with, depending on other settings and/or enabled modules, more options. The main task, namely configuring that layout, was not even the first one, and it was cryptically named 'layout'.
  • On the "Manage display" screen, the vertical tab underneath the table on the default view mode, which usually is the first one you click on, wasn't visible enough to give users the hint that they should choose a layout before the magic starts.

So we completely changed the menu structure. Clicking on Display Suite now leads directly to the overview of all available displays, with the most important ones, namely all content types available in your current Drupal website, listed first. Other functionality like fields and view modes has been converted to menu tabs.

Besides that, the layout tab now always comes first, with some new enhancements as well.

Previews and default fields

Nothing feels better than actually knowing what your template looks like. In the past, the only way for site builders to get a hint was to read the descriptions in the select box carefully. Or worse, to switch over and over until they found the exact layout they wanted. From now on, selecting a layout will show you a preview image of how the structure looks. Ajax, oh, we love you. Changing a layout will also show you the previous and next state, so you can't get lost anymore. And it gets better: from now on, when first selecting a layout, default fields will be inserted into the first known region. Imagine you have already configured the formatter settings of an image and the body: you will no longer lose them after selecting a layout.

Configurable wrappers per layout

In Display Suite 7.x-1.x, layouts and regions were traditionally wrapped inside a div. It was possible to choose a different wrapper for regions and the layout inside a template. This had several drawbacks:

  • You have to create your own custom templates in code.
  • Each set of wrappers needs a custom template.

To overcome this problem we decided to make the wrappers configurable in the user interface. This increases the flexibility of the layouts a lot. At this moment we support the following wrappers:

  • div
  • span
  • section
  • article
  • header
  • footer
  • aside

Sometimes a group of fields listed in a region doesn't belong to the same wrapper. You can group a set of fields inside one region with fieldgroup. As of today there is an HTML5 formatter for fieldgroup that provides the same HTML5 wrappers we added to Display Suite.

All these changes turn Display Suite into a flexible HTML5 layout builder. To make this possible we had to add more PHP code to the templates, but as a bonus, they now work on both the display and the form. If you want to create your own custom template, it is advised to use the built-in Drush command:

$ drush ds-build "Two regions" --regions="Region 1, Region 2"
$ drush ds-build "Several regions and css" --css=1

Hide empty regions

We have removed the 'Hide empty regions' checkbox. Instead, we ship with fluid and non-fluid templates. The logic to print or hide a region, with or without content in it, now happens inside the template. This may seem dramatic, but it's actually a better way of working. Choose your layouts wisely now when configuring a display.

Field template settings

One of the hardest parts to code for this branch was moving the field template settings, an option in the Extras module, into the cogwheel on the right instead of a link triggering a fake formatting screen. This benefits both frontend and backend performance, since we no longer print all field template form elements for every field on the manage display screen; they are only loaded when configuring a single field. The 'expert' field template now has full control over the field template, including (default) classes and attributes.

There are still some tiny problems with this conversion, but the basic functionality is working fine.

Alter before you print

While the layouts in combination with fields give you great flexibility to decide which content should be printed, there are still use cases where you want to override a configured layout just before it's sent to the template file. This was impossible in DS 1, but we have now introduced hook_ds_pre_render_alter() so you can make any changes you need to. Add new fields, move them between regions, add new regions... you name it, it's all possible now. A rough sketch follows below.
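
For instance, here is a rough, untested sketch; the region and field names ('left', 'right', 'author') are assumptions that depend on your chosen layout and fields:

/**
 * Implements hook_ds_pre_render_alter().
 */
function mymodule_ds_pre_render_alter(&$layout_render_array, $context) {
  if ($context['entity_type'] == 'node') {
    // Inject an extra renderable element into the left region.
    $layout_render_array['left'][] = array(
      '#markup' => '<p>' . t('Added just before rendering.') . '</p>',
      '#weight' => 20,
    );
    // Or move an already configured field to another region.
    if (isset($layout_render_array['left']['author'])) {
      $layout_render_array['right']['author'] = $layout_render_array['left']['author'];
      unset($layout_render_array['left']['author']);
    }
  }
}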

Upgrade path

With all the changes we listed so far, we have decided not to support an upgrade path between the two branches. It's not that we don't want to; it's simply that we know for sure sites would break, especially with the changes at the template level, and we don't want you to have to reconfigure your whole site. Both branches are still supported; however, new features will only go into the second branch. So for existing sites, keep using the first branch. For every new site, you can start using the second one, or at least wait for an official release.

Call for testers and coders

We expect to roll out at least one release candidate at the beginning of June and a full release at the end of June, maybe just after the Drupal Dev Days in Barcelona. We will test this extensively ourselves over the coming weeks, but we invite you to test along with us and report any bugs you might find. Patches are always better, of course. There are 2 critical issues remaining in the queue which need special attention from both site builders and theme developers:

  • http://drupal.org/node/1536066: due to the changes in the template system, we still need to decide how themes, like the Mothership, should provide layouts that can work for both versions of Display Suite.
  • http://drupal.org/node/1519916: the cogwheel needs some fixes so it doesn't print out summaries for fields that aren't allowed to use the field templates, e.g. preprocess fields. Consequently, the field template options cannot be made available either, but for some fields the label should remain.

Screencast

We have plenty of other subtle changes and a couple of new features. A full overview is available at http://drupal.org/node/1524800. However, a screencast is always better than words, so we recorded one for your viewing pleasure. Sit tight, grab some coffee and enjoy watching. The screencast is also embedded at the end of this post.

Thanks to all who helped

Usually I don't get so melodramatic, but sometimes thanking people can be fun, so here goes. I'd like to thank Bram "aspilicious" Goffings for accepting an internship at Krimson, working almost full time during that period to get these changes rolling. He's also co-author of this post. Bojhan and Yoroy, our UX gatekeepers, assisted us in the issue queue with great screenshots, sketches and layouts. Pfrenssen, mrfelton and frega helped us out during the three-day Easter Display Suite sprint. And last but not least, thanks to Krimson for trusting me and Jyve that this was worth doing. Maybe one more: to all those 20K+ installations using our software. The steady growth and the feedback from everyone out there is what keeps me motivated to make this as good as possible!

We’re excited, we hope you will be as well!

Mar 05 2012
Mar 05

In the software development world, Continuous Integration (CI) is the process of continuously applying quality control to a piece of software in development. What this usually amounts to in practice is having automated systems that build, deploy and test your software each time a change is made. As software complexity increases, and more developers are added to the team, having these types of automated systems in place becomes essential to controlling the quality and cost of projects.

At Atchai, we have been using many of the components of CI (version control, automated build systems, testing frameworks) for years but only recently have we needed to put them all together into a single, coherent system.

Our Setup

Architecture

The system we’ve settled upon revolves around a few crucial pieces of software. Meet the team:

Jenkins

The boss - Responsible for polling Git for changes, building and deploying the software, initiating the tests and providing reports.

Fabric

The glue - Executes build and deployment scripts remotely.

Selenium

The perfectionist - Runs high-level, "black box" functional tests against the site, such as logging in or submitting content.

Siege

The sledgehammer - Tests site performance by initiating a large number of requests.

The Stack

To give you some context, here’s the software stack/tools we generally use for our projects (although alternatives to these should work just as well):

  • Ubuntu
  • Apache
  • MySQL
  • PHP
  • Git
  • Drupal / Drush

A Commit (A Day In The Life)

To give you an idea of how our system works day-to-day I’ll walk you through what happens with a single commit:

  • Tarquin (our developer) commits a change to the master branch of the website he’s working on and pushes it to the git server
  • Jenkins is polling the git server every minute and, noticing Tarquin’s commit, launches a new build process
  • The build process starts with a Fabric script in the project’s root (available on github). This script is run (by Fabric) on the build server and:
    • Updates the working copy of the repository on the build server (or clones a new working copy if none exists)
    • Runs a Drupal install using Drush’s “site install” command
    • Runs any site-specific install commands such as enabling modules, migrating content, search indexing, etc
  • After a successful build a series of functional tests are run by Selenium. These are run on the CI server and connect to the newly built site on the build server. For Tarquin, these tests consist of:
    • A user logging in
    • An admin user logging in and creating a blog post
    • A user navigating to a blog post, logging in and posting a comment
  • After all functional tests have completed successfully a performance test is run by Siege. On Tarquin’s project the performance test consists of loading various key pages over 10 concurrent connections to the site. Reports on load averages, query usage, etc are determined using New Relic.
  • An email report is sent to Tarquin informing him of any build or test failures

Installation Instructions

Had enough of the theory? Let's go...

Install Jenkins

  • sudo apt-get install jenkins fabric
  • sudo apt-get install apache2
  • sudo a2dissite default
  • sudo a2enmod proxy proxy_http headers rewrite
  • sudo htpasswd -c /etc/apache2/htpasswd username
  • sudo nano /etc/apache2/sites-available/jenkins
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot "/var/www"
    
        <Location />
           AuthType Basic
           AuthName "Restricted Files"
           AuthUserFile /etc/apache2/htpasswd
           Require valid-user
    
           RequestHeader   unset X-Forwarded-User
    
           RewriteEngine   On
           RewriteCond     %{LA-U:REMOTE_USER} (.+)
           RewriteRule     .* - [E=RU:%1]
           RequestHeader   set X-Forwarded-User %{RU}e
        </Location>
    
        ProxyPass         /     http://localhost:8080/
        ProxyPassReverse  /     http://localhost:8080/
        ProxyRequests     Off
    
        <Proxy http://localhost:8080/*>
           Order deny,allow
           Allow from all
        </Proxy>
    </VirtualHost>
    
  • Navigate to the Jenkins web interface in a browser
  • Install plugin "reverse-proxy-auth-plugin"
  • Add a user through the web interface
  • Turn on authentication
  • Install plugin "Jenkins GIT plugin"
  • sudo -u jenkins ssh-keygen
  • Add the jenkins user's public key (/var/lib/jenkins/.ssh/id_rsa.pub) to your git server
  • sudo -u jenkins git clone [email protected]:/example.git /tmp/repo (to add the git server's host key)
  • ssh remoteserver.com
  • useradd jenkins
  • sudo -u jenkins ssh-keygen
  • Add the jenkins user's public key (/home/jenkins/.ssh/id_rsa.pub) to your git server
  • sudo -u jenkins git clone [email protected]:/example.git /tmp/repo (again, to add the git server's host key)

Install Selenium

  • sudo apt-get install default-jre
  • Install plugin "Hudson Seleniumhq plugin" in Jenkins
  • Download selenium server standalone to /var/lib/jenkins/
  • sudo apt-get install xvfb
  • sudo nano /etc/init.d/xvfb
    #!/bin/bash
    ### BEGIN INIT INFO
    # Provides:          Xvfb
    # Required-Start:    $remote_fs $syslog
    # Required-Stop:     $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Start daemon at boot time
    # Description:       Enable service provided by daemon.
    ### END INIT INFO
    
    if [ -z "$1" ]; then
        echo "`basename $0` {start|stop}"
        exit
    fi
    
    case "$1" in
        start)
            /usr/bin/Xvfb :99 -ac -screen 0 1024x768x8 &
            ;;
    
        stop)
            killall Xvfb
            ;;
    esac
    
  • sudo chmod 755 /etc/init.d/xvfb
  • sudo /etc/init.d/xvfb start (you can ignore any font errors)
  • sudo update-rc.d xvfb defaults 10
  • sudo apt-get install firefox
  • Add an "Execute shell" build step to your Jenkins job:
    DISPLAY=":99" java -jar $JENKINS_HOME/selenium-server-standalone-2.19.0.jar -browserSessionReuse -htmlSuite *firefox http://example.com $WORKSPACE/test/TestSuite.html $WORKSPACE/seleniumhq/result.html
    

Install Siege

What's Next?

Everything is now ready for your CI system except the Fabric scripts to actually build the project. The Fabric script for each project will necessarily be different, but if you'd like to see an example, especially if you use Drupal, have a look at our CI project on GitHub, which has a helper class for installing Drupal using Drush and an example usage of it.

Once you have your system running I recommend looking through the Jenkins plugins for anything that might be useful to your setup. The power (and complexity) of Jenkins comes through its plugins so it's worth getting acquainted.

Feb 18 2012
Feb 18

Krimson hosted another Drupal User Group at its Ghent offices this week. The topic was CultuurNet, a client we have done multiple projects with in the recent past.

CultuurNet is an organization gathering all events happening in Flanders. It distributes those events over multiple websites targeted by location or audience, and over multiple platforms (website, mobile website, iPhone, Android, ...).

To allow this, CultuurNet makes heavy use of Drupal at different points in its architecture. In this showcase we show exactly where, and how we created a reusable layer shared by all the Drupal solutions, allowing CultuurNet to launch new sites fast and cheaply.

CultuurNet showcase
Nov 07 2011
Nov 07

In the last post I demonstrated creating a very basic install profile in Drupal 7. It was more or less a stripped down version of the standard profile with a few very minor additions.

I've been getting some great comments on my posts, and one I wanted to note was from @david regarding the Profiler project. I've not had a chance to use it yet, but it looks very promising. The Profiler module provides an improved API and tools to vastly simplify what's necessary to write your install profiles. While I would guess some more complex tasks still require the raw Drupal API, this tool looks like it could give you a huge head start.

So, in the previous post, one of the additions I wanted to make but couldn't was creating the default client user. The client name and email address will obviously differ on each site we build. For this we need to add a new step to the install process, allowing us to configure the client account prior to its creation. All of the code from here on out will sit in our .profile file (recall that this is equivalent to a .module file, but for install profiles).

The first thing we need to do is define our profile tasks.

/**
 * Implements hook_install_tasks().
 */
function brochure_install_tasks() {
  $tasks = array(
    'brochure_client_form' => array(
      'display_name' => st('Setup Client'),
      'type' => 'form',
    ),
  );
  return $tasks;
}

hook_install_tasks() allows us to create new steps in the install process. Each task more or less maps to a function in your .profile - in this case brochure_client_form. The display_name is used as the text displayed in the sidebar listing the install profile steps, as can be seen in the screenshot below.

If you're wondering about the st() function, it's basically equivalent to the t() function but used in a context where the localization system may not yet be available. You should generally use st() in your install profile where t() would normally be used.

Install profile tasks can be far more complex than what I've presented here and I'd strongly recommend reading through official documentation on install profiles or the API docs for hook_install_tasks().

The only other thing left to do here is to define our Form API handler. This is done the same way as in any other Drupal module, with _form(), _form_validate(), and _form_submit() functions inside your .profile file. There's nothing specific to install profiles here.

function brochure_client_form() {
  $form = array();
  $form['intro'] = array(
    '#markup' => '<p>' . st('Setup your default client account below.') . '</p>',
  );
  $form['client_name'] = array(
    '#type' => 'textfield',
    '#title' => st('Client Username'),
    '#required' => TRUE,
  );
  $form['client_mail'] = array(
    '#type' => 'textfield',
    '#title' => st('Client E-mail Address'),
    '#required' => TRUE,
  );
  $form['client_pass'] = array(
    '#type' => 'password',
    '#title' => st('Client Password'),
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => st('Continue'),
  );
  return $form;
}

function brochure_client_form_validate($form, &$form_state) {
  if (!valid_email_address($form_state['values']['client_mail'])) {
    form_set_error('client_mail', st('Please enter a valid email address'));
  }
}

function brochure_client_form_submit($form, &$form_state) {
  $values = $form_state['values'];

  // Set up the user account array to programmatically create a new user.
  $account = array(
    'name' => $values['client_name'],
    'pass' => !empty($values['client_pass']) ? $values['client_pass'] : user_password(),
    'mail' => $values['client_mail'],
    'status' => 1,
    'init' => $values['client_mail'],
  );
  $account = user_save(NULL, $account);

  // Assign the client to the "administrator" role.
  $role = user_role_load_by_name('administrator');
  db_insert('users_roles')
    ->fields(array('uid' => $account->uid, 'rid' => $role->rid))
    ->execute();
}

By using tasks you have the ability to do some very customized stuff out of the box for your profile. Creating a client user and assigning a role to them (as done in this example) takes maybe 15 seconds to do for each site. If you create say 3 brochure client sites a month that's about 9 minutes of savings per year! That's enough time to take at least 2 showers!

Oct 24 2011
Oct 24

This is the second post in my series on install profiles. It covers the anatomy of an install profile and creating the .install and .profile files. We create a simple brochure-style install profile based on the standard D7 profile, with a few customizations of our own.

If you haven't already, take a look at my last post on install profiles and create the folder structure described there. Don't worry about the files as we'll create them in this post.

The folder structure

For reference, the structure will be as follows (I've added "libraries" since the last post):

profiles/brochure
profiles/brochure/libraries
profiles/brochure/modules
profiles/brochure/modules/contrib
profiles/brochure/modules/custom
profiles/brochure/modules/features
profiles/brochure/themes

I've found that splitting the modules up into the folders shown here works best while developing Drupal sites. The contrib folder stores any contrib modules we use, the custom folder holds all custom modules we write (I also use it for contrib modules that I maintain), and features stores any Drupal features that we create.

brochure.info

Just as with any theme or module, we'll need to start with a .info file. Basing this on minimal.info, we might have something like:

name = Brochure
description = Install a basic brochure style of website
core = 7.x

; Core modules
dependencies[] = block
dependencies[] = dblog
dependencies[] = field_ui 
dependencies[] = file 
dependencies[] = help
dependencies[] = image  
dependencies[] = menu
dependencies[] = number
dependencies[] = options
dependencies[] = path
dependencies[] = taxonomy
dependencies[] = toolbar
dependencies[] = rdf

; Sub core modules
dependencies[] = boxes
dependencies[] = context
dependencies[] = ctools
dependencies[] = features
dependencies[] = libraries
dependencies[] = pathauto
dependencies[] = strongarm
dependencies[] = token
dependencies[] = views

; Development tools
dependencies[] = devel

files[] = brochure.profile

I personally prefer to split up my modules by arbitrary categories. In this case, "sub core" modules are all of the modules that I tend to use on almost all of my sites, like Views and CTools. I also generally use certain modules during development that would be disabled when the site is ready to go live, like Devel or even dblog.

I'm not sure the files[] = brochure.profile line strictly needs to be defined here, as that's normally used for declaring files that contain classes, but since both minimal and standard do it, it's probably a good idea.

Download all of the contrib modules specified in the .info file above and drop them into profiles/brochure/modules/contrib/.

brochure.install

Again, the easiest thing to do here is to base your custom profile on minimal.install (or standard.install). In fact, we want everything that minimal.install has, so you can just copy that straight out and change "function minimal_install()" to "function brochure_install()".

The first part of the file defines the blocks to use on the site. For the moment there's no reason we can't just go with the default blocks.

function brochure_install() {
  // Enable some standard blocks.
  $default_theme = variable_get('theme_default', 'bartik');
  $values = array(
    array(
      'module' => 'system',
      'delta' => 'main',
      'theme' => $default_theme,
      'status' => 1,
     ....

The default theme being used is Bartik, which again is fine for the moment. As we're just going to be installing the site by hand (i.e. not using Aegir), we can simply change the theme after install, since each brochure site will likely have its own custom theme.

Looking through the minimal install profile we can see that it defines the following blocks for display:

Module   Block (delta)   Region
system   main            content
user     login           sidebar_first
system   navigation      sidebar_first
system   management      sidebar_first
system   help            help

If you want to remove or add blocks, just copy the way they've done it here. Of course, if you're using Context and Features this becomes unnecessary.

The next thing done in the .install is setting variables and permissions, and that's about it.

The standard.install takes this stuff to the next level. It does the following:

  • Defines filter formats
  • Sets the default theme
  • Sets up the blocks
  • Creates two node types (page and article)
  • Sets up RDF mappings
  • Configures node options, comments, and user profile stuff
  • Creates a taxonomy vocabulary called "tags"
  • Creates and adds a taxonomy reference field to the article type
  • Creates and adds an image field to the article type
  • Sets up user role permissions
  • Adds a main menu to the site
  • Enables and sets a default administrative theme

Looking through standard.install the first time was a bit overwhelming, particularly the image field aspect of it (more so because at the time I had no experience with entities or fields). If you take some time going over the rest of the code, you'll notice it's all actually quite straightforward module development stuff.

Building a complex install profile requires a very strong knowledge of the Drupal API, or at least the willingness to learn and to spend hours in frustration banging your head on the table when things don't work (I have the bruises to show for it). However, if you're not a strong programmer but relatively comfortable working with features, in one of my next posts I'll try to demonstrate how to completely replace the standard .install using features, which you may find much more enjoyable :).

Now, as it turns out, the standard install profile already gives us the basics for a simple brochure-style website! Who'd a thunk!? All we need to do is remove a couple of things provided in there and add a few tricks of our own.

At this point, all I want is a few pages of text (i.e. a brochure) for the website. No blog necessary. So copy everything into brochure.install with the exception of the article node type and any fields added to it, the taxonomy field, the article-specific RDF mapping, and the comment settings. Also remove the comment-related permissions, since comment isn't being installed in this profile either. If you don't remove the comment-related permissions, your install profile will fail.

brochure.profile

For now we can simply copy minimal.profile (which is identical to standard.profile) and rename the function to start with brochure_ instead of minimal_. All this does is set the default site name on the install form.

Additions to the install profile

There are still a number of customizations I'll want on a basic brochure site including:

  1. More customized initial install form.
  2. A WYSIWYG editor
  3. A customized set of shortcuts for the client
  4. Adding a client user account
  5. Roles for the client to use (content editor, for example)

When building an install profile in practice, I always start with a base site and then configure it the way I want it to be, adding every single little configuration step back into the install profile as I go. For many of the configurations I want to make, I need to browse through the core code to understand how components are added and updated. That's how I solved #3 below, for example.

1. When installing a new site you're presented with a form where you must fill in the site mail, the account name and mail, the country, and the timezone. 95% of the sites I would build with a profile like this would have the exact same values on the initial install page, so why should I fill them in manually each time? The standard and minimal profiles automatically set the site name, but I'll take this one step further. In my brochure.profile I've added the following:

function brochure_form_install_configure_form_alter(&$form, $form_state) {

  // Pre-populate the site name and email address.
  $form['site_information']['site_name']['#default_value'] = $_SERVER['SERVER_NAME'];
  $form['site_information']['site_mail']['#default_value'] = '[email protected]';

  // Account information defaults
  $form['admin_account']['account']['name']['#default_value'] = 'admin';
  $form['admin_account']['account']['mail']['#default_value'] = '[email protected]';

  // Date/time settings
  $form['server_settings']['site_default_country']['#default_value'] = 'CA';
  $form['server_settings']['date_default_timezone']['#default_value'] = 'America/Vancouver';
  // Unset the timezone detect stuff
  unset($form['server_settings']['date_default_timezone']['#attributes']['class']);

  // Only check for updates, no need for email notifications
  $form['update_notifications']['update_status_module']['#default_value'] = array(1);
}

The last item there changes the update settings so that I won't receive an email for security updates but a message will still be displayed on the site.

2. Adding a WYSIWYG editor to the site is fairly simple. Install something like ckeditor into profiles/brochure/modules/contrib and download the CKEditor library to profiles/brochure/libraries. (Yes, we could use the Wysiwyg module for this too, and in practice that's probably what you should choose. However, Wysiwyg profiles can be tedious to set up in install profiles, and work to make them exportable in features is still underway: http://drupal.org/node/624018).

dependencies[] = ckeditor

If you install the ckeditor 3rd party code into your libraries folder you'll also need to add the following line to your .install:

variable_set('ckeditor_path', 'profiles/brochure/libraries/ckeditor');

The reason you need to hardcode the ckeditor path is that ckeditor does not properly support libraries (see this issue for more details). This means it doesn't know to look for libraries in profiles (and thus will never find your ckeditor 3rd party code). You could simply install the 3rd party code into the ckeditor module folder itself, but the "clean" way is to use libraries.

These kinds of problems are by no means unique to ckeditor and you'll more than likely run into them as you customize your profiles. Many modules will work perfectly fine until you want to use them in an install profile. The best way of getting around it is to find and fix the issue and submit a patch.

3. For my shortcuts I'll simply use the core shortcut module with one addition: I want a quicker way to add a new page from the shortcuts bar. I've added the following code to do just that to my .install.

module_load_include('inc', 'shortcut', 'shortcut.admin');
$shortcut_set = shortcut_set_load('shortcut-set-1');
$shortcut_link = array(
  'link_title' => 'Add page',
  'link_path' => 'node/add/page',
  'menu_name' => $shortcut_set->set_name,
);
shortcut_admin_add_link($shortcut_link, $shortcut_set, shortcut_max_slots());
shortcut_set_save($shortcut_set);

Ensure this code goes after the page content type is created and after menu_rebuild() to ensure that the 'node/add/page' path exists.

4. For every brochure site I create I will need a client account, so I'd like to automate the creation of that as well. Unfortunately, the client name and email address won't be identical between sites, so we'll need some user input for that. This will be the focus of the next post.

5. Though not strictly necessary for such a simple site, it makes sense to put our client into some sort of role besides "authenticated user". This can be done in an identical manner to the way the 'administrator' role is created in standard.install.

$client_role = new stdClass();
$client_role->name = 'content editor';
$client_role->weight = 3;
user_role_save($client_role);
user_role_grant_permissions($client_role->rid, array('administer nodes', 'create url aliases', 'customize shortcut links', 'administer site configuration', 'access site in maintenance mode', 'view the administration theme', 'access site reports', 'block IP addresses', 'administer taxonomy', 'access toolbar', 'administer users'));

My goal here isn't to lock out the client account, but instead remove obvious access to parts of the site that they won't need to use.

Conclusion

When it comes down to it creating an install profile can be as simple or complicated as you want to get. The more complicated the more tedious it will become, but also the more you'll be able to do (and learn). Install profile skills translate right back into module development anyway.

Initially it may seem that unless you plan on creating dozens of sites under the same profile the ROI on building an install profile will not be high. However, as I'll get into later you can use the install profiles as a best practice to help with automating updates to your production site such that it will make sense even if you only ever have a single installation on your profile.

In the next post I'll write more about adding additional site customizations, additional steps, and forms directly into your install profile so that you will get a much more streamlined site out of the box.

Oct 20 2011
Oct 20

Yet another key component of the Drupal Platform is the install profile, and it's another one that deserves a few posts to cover it adequately. The goal of this series of posts will be to build an install profile capable of creating a basic brochure-style website - more or less what WordPress's core functionality offers (ok, we'll be doing a bit less for this example ;-)).

Drupal 7 install profiles differ quite significantly from their Drupal 6 counterparts, so a lot of the material covered here won't necessarily apply if you're working on D6 only. There's already a lot of great documentation to help get you started. Drupal.org has an overview of install profiles that's worth reading through, for example. The Drupal documentation also covers all of the common hooks that you may want to use in your profile. But the best resource, without question, is just reading through the Drupal core profile code (no, I'm not joking). Check out the profiles/minimal and profiles/standard folders that come with Drupal, particularly the .info, .profile, and .install files.

To be upfront about things here, I've personally never built an install profile in Drupal 7 so this is a bit of a learning experience for me as well. I know that a lot of the limitations that existed in Drupal 6 have been solved so hopefully the process will be smoother. As always, if you notice anything I've done wrong or could do better please leave a comment as I want to promote best practices for building platforms.

There are a few differences between an installation profile that you'll run manually as you install Drupal (an option that sits next to minimal or standard in the Drupal install process) and an install profile used for automatically creating sites through Aegir. The primary difference is that an install profile to be used in Aegir should have no options that the user needs to fill out or answer; defaults are important here. Any kind of setup that the user must do will need to happen after the site is created, not before, possibly with some sort of configuration wizard, which I'll hopefully write more about later.

For this post I'm only going to run through a typical install profile, as opposed to an Aegir-specific one, which I'll cover later (and which, conveniently, can be much simpler than a standard profile).

Anatomy of an install profile:

The initial setup for the install profile is quite trivial. All you need to do is create the folder and the files, and then list the modules you wish to have enabled on site creation. It's basically a combination of creating a new site in a multisite setup and creating a module.

Start with this folder structure in your D7 site:

profiles/brochure
profiles/brochure/brochure.info
profiles/brochure/brochure.install
profiles/brochure/brochure.profile 
profiles/brochure/modules
profiles/brochure/modules/contrib
profiles/brochure/modules/custom
profiles/brochure/modules/features
profiles/brochure/themes

All the non-core modules used on your site will now go into the installation profile's folder as opposed to sites/all. This isn't required, but it makes your install profile more complete and is a good way to keep things organized.

.info

Unlike in D6 install profiles, you define the modules you want to install with your profile in the .info file as dependencies, just as you would with a standard Drupal module. This makes install profiles much more consistent with how the rest of Drupal's packages work and, more importantly, makes our lives easier.

.profile

The .profile file is equivalent to the .module file in a module or template.php in a theme. This is where you'll define any hooks required for your profile. Unlike in Drupal 6, install profiles in Drupal 7 have the privilege of running under a fully bootstrapped Drupal. This is a huge benefit, and it makes the life of an install profile in D7 much more enjoyable than that of those poor D6 profiles.

.install

The tough part comes in the .install file. Take a look at the profiles/standard/standard.install file. It starts by defining and creating filter formats and then blocks. It also includes the creation of content types, fields, rdf mappings, roles, and menus.

The good news is that a huge percentage of this heavy lifting can be removed by using features. For example, we don't need to create content types, fields, roles, menus, etc. by hand, as they're all defined and created by the features we use on our site (see A simple feature for reference).

The bad news is that for more complex profiles you will need to get into the nitty-gritty of the Drupal API. In some cases the install profile API (http://drupal.org/project/install_profile_api) can simplify things for you (at the time of writing there is not yet a D7 version), but for most things you should look at other install profiles for examples.

In my next post I'll demonstrate creating the skeleton of a basic profile based on the Drupal minimal and standard profiles, plus a little pinch of our own special sauce.

Oct 04 2011
Oct 04

Over the past couple years Drush has become an essential part of the Drupal site builder's toolkit. I personally use it daily and like a lot of developers now couldn't imagine building a site without it! (Well, I suppose I could imagine it, but I can also imagine building sites in Drupal 4.6, and it's not pretty ;) ).

Drush is one of those areas of Drupal that has been very well documented, and I think it would be useful to compile what I feel are some of the more appropriate resources for site builders. There are many awesome webcasts, though I'm personally biased against them due to my forever-inconsistent connection speeds while traveling, so don't take it personally if I pass your webcast by :). Most of the existing documentation is written for D6, but fortunately Drush is Drupal-version agnostic, so all D6 documentation should apply almost identically to D7.

Beginning with Drush

Drush.ws

First of all, one of the best and most comprehensive resources I've found is the official Drush website itself, drush.ws. It contains the full API for all versions as well as examples and other documentation. Having said that, I wouldn't specifically recommend it as a starting point for new users.

Introduction to Drush

One of the best introductory write-ups I've found was on the 2tbsp blog. It covers everything from installing to basic commands and even touches on site aliases. If you haven't used Drush yet, this is a great starting point.

Drupalize.me

I realize I just finished stating how I don't like webcasts, but I'll make an exception for Lullabot's :). Drupalize.me has made a series of videos introducing Drush. If you haven't used Drush much, this is another great starting point. The downside is that only the first two videos are free; for the rest you'll need to register. If all you're looking for are Drush tutorials, it may not be worth it.

Drupal.org

Of course this list would not be complete without a link back to the great documentation on this topic on Drupal.org. The d.o Drush documentation covers mainly introductory Drush knowledge along with some more advanced topics such as synchronizing sites using Drush.

Intermediate to advanced

Integrating your modules

This past May Code Karate posted a good article on getting started with integrating your modules with Drush. It covers the important drush hooks as well as some key drush functions needed to get Drush support into your modules.

Drush synchronization with aliases

I've found a couple of posts on using aliases to handle a dev/production workflow, but the one from LevelTen was my favourite. If you're looking for a quick solution to an age-old Drupal problem, using aliases with Drush is definitely the way to go. To add to this, using SSH keys between your server and localhost for passwordless authentication may make your life easier. A sketch of such an alias setup follows below.
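
To sketch the idea (the paths, hosts and users below are placeholders; 'remote-host' and 'remote-user' are standard alias settings):

<?php

// Hypothetical aliases for a dev -> production workflow.
$aliases['dev'] = array(
  'root' => '/var/www/mysite-dev',
  'uri' => 'dev.mysite.local',
);
$aliases['prod'] = array(
  'root' => '/var/www/mysite',
  'uri' => 'www.mysite.com',
  // Drush runs remote commands over SSH using these settings.
  'remote-host' => 'mysite.com',
  'remote-user' => 'deploy',
);

Once defined, pulling the production database into your local copy becomes a one-liner: drush sql-sync @prod @dev.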

Development Seed

Though no longer a Drupal shop, they've still got a lot of great resources on their site for this kind of stuff. When Drush 3 was released, Adrian Rossouw posted an article detailing some of its new functionality. Even though Drush is officially at v4 now, I believe everything in Adrian's post is still quite relevant.

Drush make

The Drush Make plugin, written by Dmitri Gaskin, is another key tool in the platform builder's toolkit; I would even go so far as to say an essential one. Jared Stoneberg wrote a post on his Stoneberg Design blog a while back on creating and using Drush make files that I thought looked particularly useful. As an aside, Jared is also working on his own Drupal platform for restaurant websites called Aioli.

Also, one of the best places to look for beginner to advanced help on Drush topics (and Drush Make) is in the code itself. It contains plenty of great examples for doing almost everything you'll want to do with the tool.

If you know of other Drush resources you've found particularly useful please let me know in the comments.

Sep 30 2011
Sep 30

In the previous two posts (Features Part 1: A Simple Feature and Features Part 2: Managing Your Feature) I demonstrated how to build a simple feature. The result is something that's particularly useful for two things:

  1. A starting point for new site builds
  2. Dev -> Stage -> Prod style of workflow for site building

If all you want to use features for is the second option, then this is perfectly acceptable. But when you start re-using features across multiple sites, you'll end up needing to fork the feature for each new site build. While you'll no longer have the overhead of creating a blog feature when doing a new site build, you may wish to add new components to your blog feature at some point. How do you push the updated feature across all sites if you've already forked it to accommodate necessary customizations?

As a trivial example, assume you have two sites using the same feature, one has 10 blog posts listed per page and the other has 15. i.e. the feature has been forked to allow for the two separate configurations that are needed. Later you decide to add an image field to the blog feature so images can more easily be inserted into blog content. Because the features have been forked, you'll need to manually create the image field (and update the feature) on both sites.

After the site build you will have lost all the benefits of using features. This is a shame, because with only a little extra work your feature could be completely re-usable, even with the need for different settings on different sites. The idea behind a re-usable feature is to give it Good Defaults™ to begin with so that you don't need to modify it much. But we'll always need to modify some of those defaults.

Prerequisites

This is a more advanced tutorial, and you need to be familiar with the Drupal API to do some of the things described here.

  • You should have read through and have no problem understanding part 1 and part 2 on features.
  • You should be able to write a simple module in Drupal from scratch.
  • Being somewhat familiar with the Drupal API as well as popular contrib modules such as Views will definitely help.

Anatomy of a feature

If you take a look in the module folder of the example_blog feature we built in the previous two posts you'll find a standard module structure (i.e. .module and .info files) and a bunch of .inc files.

example_blog.context.inc                        example_blog.info
example_blog.features.field.inc                 example_blog.module
example_blog.features.inc                       example_blog.strongarm.inc
example_blog.features.taxonomy.inc              example_blog.views_default.inc
example_blog.features.user_permission.inc

Each .inc represents a specific export. You can see above that we have context, field, taxonomy, user_permission, views, and strongarm. Open up example_blog.module and you'll find it's nearly empty: all it does is include example_blog.features.inc, which in turn sets up any hooks (CTools, Views, CCK) that the module needs to use.
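
The .info file is where Features records which components belong to the feature. A rough sketch of what example_blog.info might contain (the exact component keys vary with your Features version):

name = Example Blog
description = A reusable blog feature.
core = 7.x
package = Features
dependencies[] = context
dependencies[] = views
features[field][] = node-blog-body
features[node][] = blog
features[views_view][] = blog_listing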

Features will never overwrite your .module file, which means you can safely put whatever code you need to in there and not worry about it being overwritten next time you update the feature.

Hooking in

A few of the things we may want to configure in our blog feature include:

  • Home page display
  • Number of posts on the blog listing
  • Block order on the blog page

What we need to do is basically "extract" those settings from the feature and make the feature itself configurable. In other words, we'll add a second layer of settings to the site.

This can be done either directly in the feature's .module file or in a separate .inc file you create and include from the .module. I would personally recommend the separate .inc file, but for simplicity we'll just edit the .module in this example.

Again, to keep things simple (for me ;-) ) I'm only going to demonstrate configurability of the number of blog posts here. It may make sense in a future post to simply have a bunch of examples for overriding different parts of your site. If there's something you'd be interested in (i.e. how to control block display order, minor setting changes to cck types, etc.) just post a comment on this article and I can add it to that post that may or may not exist at some future date :)

Settings, settings, settings

Open up the example_blog.module file and add a hook_menu(). We need to add an admin settings page for our module so that users can easily make configuration changes.

/**
 * Implements hook_menu().
 */
function example_blog_menu() {
  $items = array();
  $items['admin/config/content/example_blog'] = array(
    'title' => 'Example blog settings',
    'description' => 'Configure the example blog',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('example_blog_admin_form'),
    'access arguments' => array('administer site configuration'),
  );
  return $items;
}

Implement the admin settings form.

/**
 * Form builder for the example blog settings form.
 */
function example_blog_admin_form($form, &$form_state) {
  $form = array();

  $view = views_get_view('blog_listing');
  $view->set_display('default');

  $default_items_per_page = $view->display_handler->get_option('items_per_page');
  $form['example_blog_items_per_page'] = array(
    '#type' => 'textfield',
    '#title' => t('Items per page'),
    '#description' => t('The number of items to display per page. Enter 0 for no limit'),
    '#default_value' => variable_get('example_blog_items_per_page', $default_items_per_page),
  );

  return system_settings_form($form);
}

Note that in the above example we get our default value directly from the view. Defaults should always be pulled from the feature itself.

The only thing left to do now is to hook into views to set the number of items to display appropriately.

/**
 * Implements hook_views_pre_view().
 *
 * Set the configurable options for views.
 */
function example_blog_views_pre_view(&$view, &$display_id, &$args) {
  if ($view->name == 'blog_listing' && $display_id == 'page_1') {
    $default_items_per_page = $view->display_handler->get_option('items_per_page');
    $view->display_handler->default_display->options['pager']['options']['items_per_page'] = variable_get('example_blog_items_per_page', $default_items_per_page);
  }
}

The key line above is:

$view->display_handler->default_display->options['pager']['options']['items_per_page'] = ...

This took a bit of time and trial and error for me to figure out. Anytime you need to modify these kinds of objects on the fly you'll just need to get a bit down and dirty with it. I'm not sure if this is the "best" way to handle this, but it works. Each setting you want to expose will have its own unique issues that you'll need to figure out.

Flush your caches, head to admin/config/content/example_blog, and set the items per page to something else. You should see the change take effect on the blog without causing the feature to become overridden.

Of course, now you have one more setting to export. Every site I build has its own feature just for that site's settings; I would add this example_blog_items_per_page variable to Strongarm in that feature.

Pitfalls

Anytime you want a feature to be truly re-usable you'll need to make some tough decisions, primarily: "what should be configurable?" This varies greatly from feature to feature, but try to keep it to simple settings. If you get too elaborate you may end up losing a lot of the benefits of features and spending hours and hours on each one. Just make some decisions and stick to them. Only make settings configurable when you find that they absolutely must be.

In the blog example we used the context module to display blocks in a region of the site. What if you're using different themes that don't have matching regions? Well, then this won't work.

On Wedful we have several themes and ultimately plan to have several dozen. Each of these themes is sub-themed from a single master theme (which is itself a sub-theme of Zen).

What if you want to distribute your features to clients / end users who definitely don't use the same themes and regions? This is a huge problem, and will continue to be for some time. Fortunately a solution for new sites already exists, and it's called Kit. I won't go into kit here, but Shawn Price has written a post on Kit which I recommend anyone interested in this stuff (which should be ALL Drupal site builders) take a read through.

He's also doing a talk at the upcoming Drupal PWN on this stuff. I wish I could be there for it as I'm sure it'll be full of good stuff.

UPDATE:
It's come to my attention that there's another great post out there on this same topic that Roger Lopez posted just over a year ago with a D6 focus. I'd definitely recommend taking a read through his Building reusable features post as well if you're interested in doing this.

Sep 22 2011
Sep 22

In the previous post we built a basic blog feature. This post will cover details on how to manage that feature (i.e. update, revert, etc.).

Prerequisites:

You don't need much to get going with this, but what you will need is:

  • To have read through the previous post and be able to build a feature
  • Basic familiarity with Drush and the command line

I'll cover Drush in more detail in a later set of posts, but for now it's important that you install Drush if you haven't already. Features includes a set of Drush commands that are used to update and maintain each feature. These links should help get you started with Drush:

Note that if you're doing this on a D6 site, the Drush 7.x release will still work: Drush is not a module in the traditional sense, and it's Drupal version agnostic.

Terminology

If you browse back to the features admin page (admin/structure/features), the example blog feature should be enabled (enable it if it's not). Features will take notice of any changes you make to any of its components and tell you they've been "overridden". Due to some of the things that one can do with features, the terminology can occasionally get a bit ambiguous, so it's probably important to define a couple of the terms used:

  • Default (state): All settings in your feature identically match their respective settings on the site (this is the state you're after :) ).
  • Overridden (state): The components of the feature used in the site no longer match the state of them in the feature module. For example, if you were to use the Views UI to change the number of blog posts listed on a page from 10 to 15, your feature would now be overridden as your site is displaying 15 posts, but your feature module specifies 10 posts.
  • Needs review (state): This is effectively the same state as overridden, but generally means there are more complicated changes and it's recommended you review the differences prior to reverting the feature.
  • Revert (action): You can revert an overridden feature to make the site settings match those in your feature module. In the above example you set your site to show 15 posts instead of 10 posts in a blog listing. If you decide that you no longer want that simply revert the feature and you'll be back to 10 posts. Revert is probably the most confusing term used with features.
  • Update (action): Updating a feature is the exact opposite of reverting one: instead of reverting your site to match your feature, you update your feature to match your site. In other words, if you prefer to have 15 posts listed, update your feature to bring it back in sync. This will change the setting from 10 to 15 posts in your feature and take your feature back to a default state.

Diffing a feature

So let's give the above a try. Change the display settings on your blog listing view to show 15 posts instead of 10. This will mean our blog feature is no longer in sync with the settings on the live site. Back on the features listing page you should see that your blog feature is now "overridden".

Click the overridden link to bring you to your full feature page.

The above image shows that the overridden component is "views". If you've enabled the Diff module, the word overridden will link to a diff between your site and your feature (the "review overrides" tab will also give you the same thing).

Reverting a feature

After reviewing the changes, if we've decided we don't like the changes (perhaps another site admin made them, after all), we can revert the feature. Simply select the components you wish to revert (in this case only views) and click the "Revert components" button.

Your site will now be reset to match the code in your feature module and display 10 posts again instead of 15. I've occasionally experienced caching issues where the component is still listed as overridden instead of default as it should be. Flushing caches should sort this out.

Updating a feature

Ok, you've changed your mind now and do actually want 15 posts displayed on your blog listing. Head back to views and switch the number of posts to display to 15.

We now want to sync up the features code so that it matches the site. This way when we take the feature to new sites it will also show 15 posts.

On the command line change to your site folder for this site (sites/blog.scotthadfield.ca perhaps). Run the command:

$ drush features-update example_blog

example_blog should be replaced with whatever you named your feature.

Features and Drush

Though updating your feature is the only action that requires Drush, you can perform any feature management action directly from the command line. drush help shows the following commands:

All commands in features: (features)
 features-diff (fd)                 Show the difference between the default and overridden state of a feature.
 features-export (fe)               Export a feature from your site into a module.
 features-list (fl, features)       List all the available features for your site.
 features-revert (fr)               Revert a feature module on your site.
 features-revert-all (fr-all, fra)  Revert all enabled feature modules on your site.
 features-update (fu)               Update a feature module on your site.
 features-update-all (fu-all, fua)  Update all feature modules on your site.

We've diff'd, exported, listed, reverted, and updated our feature now, which covers all the bases for feature management.

Pitfalls

I can't stress enough the importance of keeping your site in sync with your features. If you're lazy and your features are perpetually listed as overridden you're going to be in for a world of hurt. This is particularly true when using a dev -> stage -> prod workflow with features or working with multiple site builders at one time.

NEVER have two different developers working on the same feature components at the same time. For example, if developer A updates a view in the blog feature and then pulls in code from developer B, who updated a different view in the same feature, developer A will need to redo their work after reverting the feature. This is equivalent to two developers working on the same line of code in the same file at the same time: there will be conflicts. With that said, it is possible for developer A to update the content type while developer B updates a view.

In a simple workflow you'll update the feature in your local dev environment, then push the changes to stage, revert the feature on stage (this is where the ambiguity of the terminology really comes into play :)), test, push to production, revert the feature on production, test.
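
In Drush terms, the stage half of that workflow looks something like this (a sketch, using the feature from our example):

$ drush features-diff example_blog    # review what will change
$ drush features-revert example_blog  # make stage match the code
$ drush cc all                        # flush caches before re-testing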

Make as few changes as possible to the feature on the live site, i.e., even configuration updates should be done locally first and pushed out. This way you'll be less likely to break the live site and you won't run into nasty conflicts with your features as you develop the site.

What if you want to use the same feature on two sites, one which will display 10 posts in the listing and the other 15? This is where "fully" re-usable features come into play. Instead of forking your feature, it is possible to give each site settings of its own. I'll be going into more detail on this in the third post on features.

Sep 19 2011
Sep 19

I first heard about features at DrupalCon DC and while it seemed like a nice idea, I couldn't really see how it could be practical to someone who can already write their own modules. Of course, I thought exactly the same of views when I first learned of it too.

If you build Drupal sites and you've never built a feature before, it's about time you gave it a go! This post will walk you through what a feature is and how to go about building one. In a later post I'll show you how to make your feature re-usable and why it's important.

A "feature" is simply a collection of drupal objects wrapped up into a module. Most of my features contains views, content types, permissions, image styles (imagecache presets in D6), etc. One of the goals of features was to provide a way to move database configuration into code, which also has the side effect of allowing you track any changes through your favorite revision control tool. While technically a feature is pretty much as simple as that, it becomes a very powerful tool when it comes to building Drupal platforms, re-usability between sites, and the dev -> staging -> production workflow. This concept is so important that there's even a major Drupal 8 initiative in the works with heyrocker behind it: http://groups.drupal.org/node/134569.

A common need for most websites these days is a simple blog. I've built a blog for websites at least a dozen times. The concept is simple enough and it only takes about an hour or two to set one up if you've done it before (not necessarily including theming work), but if you have a re-usable blog feature you'd save yourself that hour or two for every new site. If you bill out at $100/h, that can add up to some real money pretty fast.

Step 1: The prerequisites:

Starting with a fresh D7 "minimal" site install is probably best so you don't include settings from other parts of the site you don't intend to. Download and install a few standard modules that you'll want for the feature:

Next you'll need the features "toolkit" that will consist of a number of modules necessary for exporting various parts of our site. Download and install the following modules:

  • Features - This provides the core UI and functionality for creating features.
  • Strongarm - Though it has other uses, in this case it primarily allows us to export settings from the variables table into our features.
  • Context - Context is a bit more difficult to explain, but for our purposes we'll be using it to export the block layout, among other things.
  • Diff - This allows you to easily compare changes in your features as you're working on your site.

As you get into building features, you'll find you need to change the way you deal with some parts of your site build. One of the bigger changes is how you handle blocks. Because the default Drupal block system is not exportable, you need a different solution. This is where Context (and Boxes) comes in. Context is basically an input/output system used as a block replacement tool: the input is the current 'page' (node, views listing, etc.), and the output is the positioning of the blocks on the page. It should all make sense in a moment :). For more background on this I strongly recommend reading Affinity Bridge's "abridged" series of posts on the topic:

Step 2: Building a blog

This blog feature will consist of the following:
  • Blog content type
  • Tags vocabulary
  • Blog listing view
  • Recent comments block
  • Recent posts block
  • Notify users of new comments (i.e. comment_notify)

When creating CCK types for use in features, always try to create unique fields. Features need to be somewhat independent of each other, and things will be much smoother if you don't try to create generic fields that can be re-used.

Add a vocabulary called "tags" or "blog tags" (again, try not to share vocabularies across features).

Create the blog content type with a body field (called blog_body) and a term reference (called blog_tags). Note that all settings (comment, display, etc.) that you add to the content type will be exported with the feature.

Create the views necessary for the blog (or download the exports attached to this post):

  • A full listing of blog posts containing the most recent posts
    • Page display (path of /blog)
    • Views rss display (path of /blog.xml)
  • Recent blog entries
    • Block containing the 5 most recent entries
  • New blog comments
    • Block containing the 10 newest comments on the blog

Let's put the recent comments and recent posts block next to each other on the /blog listing page. We can't use the block system for this as it's not exportable, so instead we'll use contexts. Create a new context and call it "blog_blocks". Add both a Views and a Node condition and set them up accordingly. Now add a Block reaction and add the two blocks in the proper regions (or download my context export at the end of this post).

Finally, enable and configure the comment_notify module so that readers / posters will be notified when new comments are posted.

Step 3: Creating the feature

If you made it here, then awesome! We're ready to export our blog feature now. This will give us a fully functional Drupal module that we could potentially take to any site.

Browse to admin/structure/features/create to create the new feature. Be careful when naming features, as you don't want to conflict with the Drupal core or contrib module namespace (i.e. calling this feature "blog" would conflict with the blog module). Give it a name like "blog feature" or "my blog feature" or "super awesome blog". You can safely ignore the version and URL fields for now; I'll touch on those later.

Under "Edit components" select everything we just created.

  • Context: blog_blocks
  • Fields: node-blog-*, comment-comment_node_blog-comment_body
  • Content types: Blog
  • Taxonomy: Blog tags
  • Permissions:
    • All comment_notify permissions
    • All "Node: Blog:" permissions
    • All Taxonomy "Blog Tags" permissions
  • Strongarm: comment_notify_*
  • Views: blog_listing, recent_blog_comments

After you've completed this, you should see a listing on the right-hand side of the page that looks something like this:

You'll notice some items are in blue, and others black. Features automatically identifies dependencies and adds them as the blue items. For example, all of the comment and node settings for the Blog type have been automatically added from the variables table.

Download your feature and untar it to your sites/mysite.com/modules/features/ folder. On the main features page (admin/structure/features) you should see the blog feature you just created listed there. All you have to do now is enable the feature and you're done!

I'll get into managing your feature in the next post where I'll explain the state and actions columns as well as how to update your feature.

UPDATE:
I've added the full feature export to the attachments as well.

I've found a few other links to check out on this topic:

UPDATE 2:
I just noticed that I made a mistake in building the feature here, I forgot to add the views we created to the feature. The article and feature export has been updated to fix that now.

Sep 16 2011
Sep 16

Building the Wedful platform took me about a year of pain, tears, blood, and triumph (not necessarily in that order), and since then I've been contacted by several people going through the same difficulties putting together their own Drupal platforms. Wedful is designed specifically for couples planning their weddings to be able to easily launch a website and manage the details surrounding their weddings online, so we needed to be able to easily manage hundreds of sites (hopefully tens of thousands someday) in a scalable manner. Some of the people I've spoken to have been looking to build niche products for the restaurant industry, the hotel industry, and even one with a similar concept to Drupal Gardens.

Over the next several weeks (more likely months, there's a lot to cover ;-)) I'll be writing a series of blog posts on just this topic to help anyone else looking to build a Drupal platform of their own. Something that will hopefully reduce the pain, tears, and blood aspect of the process :). I'll initially cover all of the fundamentals and building blocks of the process, such as best practices, features, drush, make files, etc. and then get into the bigger topics of install profiles, distributions and using Aegir to actually build a vanilla Drupal platform. Following that I'd like to write about some more advanced topics such as passing data from Aegir to your site and not giving up UID 1 when you give away sites to the masses.

This will all be Drupal 7 centric, though I'm certain that much of the information will apply back to Drupal 6 (I'll try and make notes where this isn't the case) and likely forward to Drupal 8. Since I've done only a small amount of work on platforms in D7 this will be a learning process for me too, as I work through building a comprehensive Drupal 7 platform.

Ultimately I plan on focusing on two main use cases. The first is a wordpress.com style of service for simple brochure websites. The second is using the same techniques to manage a single site deployment with a scalable dev -> stage -> live workflow. Both use very similar techniques, so almost everything I cover will apply to each use case.

I'll start next week with a few posts on building features. If there's anything anyone's interested in learning more about that's in line with this "platform" topic, just post a comment or send me an email and I'll see if I can work it in.

Aug 20 2011
Aug 20

If you want to share users, content and configuration between Drupal sites you have several options. The most common approaches are to use a multi-site architecture, RSS feeds, or the Domain Access module. In this post, I'll discuss an alternative method using the Spaces and PURL modules: a highly flexible architecture that enables you to tailor multiple spaces that can appear to be completely independent sites, but which all run from the same Drupal installation.

Our Use Case

Watershed is a hub for arts, culture and creativity in the South West of England, running an independent cinema and funding various projects, such as the Pervasive Media Studio. They approached us with their problem of maintaining so many online properties representing sub-brands and projects under the Watershed umbrella. Not only was there the cost of maintaining and integrating systems built on many different technologies, but they also wanted to promote and facilitate content sharing between groups that have historically been digitally segregated. We were engaged to design a technical architecture that would consolidate Watershed's various digital properties.

Requirements

The requirements in Drupal terminology are:

  • Each site appears on a subdirectory URL of one main domain.
  • Ability to share content and users between each site
  • Shared sign on, with ability to grant users access to each site individually
  • Ability to easily add another site that inherits its functionality and visual appearance
  • Ability to completely override the appearance and functionality of a site

Implementation Options

Multi-site

Part of Drupal's core functionality is the ability to run multiple sites from one codebase; sites can share certain database tables, but this is not required. In order to share content and users we would have to share some user and node related tables. This approach of sharing at the database level quickly becomes problematic, and can be disastrous when you want to unravel it and move a site off on its own. On top of that, there is no real native support for restricting users' access to individual sites or denoting which sites content should appear in, and no way to turn functionality on and off across each site.
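
For the curious, table sharing is driven by database prefixes in settings.php. A Drupal 6 style sketch (the prefix names are made up), and exactly the arrangement that is hard to unravel later:

$db_prefix = array(
  'default'  => 'subsite_',  // Every other table is per-site.
  'users'    => 'shared_',   // Shared user accounts.
  'sessions' => 'shared_',
  'authmap'  => 'shared_',
);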

Domain Access Module

This is probably the most widely used solution for a use case such as ours. The Domain Access module hooks into the node_access table to enable content to be assigned to a particular site. There are also a number of submodules for Domain Access that enable you to tailor the functionality and appearance of each site, and single sign on is also achievable. The downside is that Domain Access hooks in all over the place to achieve these feats, and in order to override something on a per-domain basis, there must be a domain submodule that hooks into the appropriate places for you.

This approach is messy and means that the domain_access module has to keep adding code when something new needs to be overridden. There are some particularly egregious examples, such as the domain_theme sub-module, which allows you to set the theme for each domain, yet this could be achieved as part of a more generalised domain_variables module - since the active theme is set in a variable.

Sites Module

This is quite an interesting approach, and the newest of all. Its project page says it's designed to be a more lightweight alternative to the Spaces module, integrating PURL directly with Context and Views. Unfortunately, I believe that Sites is perhaps a little too lightweight, since it doesn't rely on Features, which, along with Strongarm, allows us to capture just about any aspect of Drupal configuration in code. This has a tremendous advantage for your workflow when building and updating Drupal sites, so there is really very little reason not to be using the Features module. By not depending on Features to capture the configuration, Sites module is really putting itself in a position where it has to re-invent the wheel in a similar fashion to domain_access.

Our solution

Both the Spaces and PURL modules certainly comply with the unix philosophy of "Do one thing and do it well". This is a great philosophy to build modules by, in my opinion, as it encourages a wide range of lightweight tools that can be plugged together in a variety of ways. The flip-side is that you are left with a number of options for how you can configure and plug these modules together, which can make the process seem quirky the first time around.

PURL allows you to define rules based on URL patterns, depending on paths, domain, subdomain and more. Spaces can be set to be activated when one of these rules is triggered. Spaces then allows you to override the appearance and functionality of the site within a particular space, by switching on and off features, and overriding variables.

The figure above shows two spaces that we defined, the first is what we call a default space, so this space will be set even if no modifier is found in the path. This is not something that spaces deals with natively, so this is explained in more detail below.

Technical Challenges

  1. Spaces and PURL assume that there is the global site, and then there are spaces within that site which are triggered when certain URL patterns are present. We wanted to do something slightly different here: we want a user to always be in one of the spaces. Even though the Watershed site can be thought of as the parent/global site, we do not necessarily want it to contain all content; we want to be able to select the content that appears on the Watershed site in the same manner as all the others. We had to invent the concept of a default space, a space that would be activated when there were no modifiers present in the URL pattern. In order to achieve this, we wrote a PURL processor plug-in, which we've released as a separate module - purl_default.

  2. We want to be able to set content to appear in multiple spaces, but each node should have a primary space that is redirected to when the content is requested. By using the patch at http://drupal.org/node/828416 along with the primary_term module, we were able to achieve this.

  3. There should be a canonical URL for all nodes. The patch to PURL at http://drupal.org/node/828384 helps with that by adding a canonical link element to all content that appears in more than one space.

  4. It's possible to make views and panels spaces-aware. So for views you have the option of only selecting nodes that are tagged with the active space, and panel variants can be defined based on the active space. The spaces_panels module allows you to achieve this.

Conclusion

All these solutions work in quite different ways, so ultimately the best choice will depend on your use case. We found that it took a fair amount of configuration, patching and a little extra code, but the spaces/purl solution works really well for us.

The main thing I like about this approach is that it stands on the shoulders of giants, using Features to capture the configuration for each space, and taxonomy to define the spaces and assign content.

This site was only to be available in one language, but I fear there would be some complications if you were to try to use the i18n module with PURL, adding language subdomains or path modifiers to the mix. Scalability would also be a concern on very large sites, as you are running all the sites from one database; but if you are able to use a MySQL-compatible cloud service, Amazon RDS for example, and are doing mostly reads, then this would probably not be an issue.

I think it's a perfect alternative for many cases where we would have used multi-site or domain_access in the past. And the ability to quickly create a new space from a preset makes it ideal for cases where you want to quickly generate microsites or sub-sections that need to inherit a lot of functionality while allowing the flexibility to override anything on a space-by-space basis.

Jul 15 2011
Jul 15

Lately, I've been working on a Drupal project which involves Mailchimp integration. Mailchimp is an excellent service which manages all the monkey jobs so you can send out newsletters to lists of subscribers carefree.

The service allows you to create a reusable template for your newsletter (or use a predefined one). You pass Mailchimp the content which needs to be sent out to your subscribers on a frequent (weekly? monthly?) basis, and it will churn out a new newsletter based on your template.

One of the requirements of the project is that a subscriber can opt in to receive a customized newsletter based on selected interest groups. The content of the newsletter consists of Drupal nodes that were published since the last edition of the newsletter. Using the taxonomy module, these nodes are assigned to one or more of the twelve available interest groups.

I need to aggregate the data in Drupal into digestible content and send it in a single Mailchimp campaign. There's a catch though: a campaign is sent out to all subscribers, not just those that have opted in for interest groups x, y or z. So, how can you solve this?

Enter: smart merge tags with groups. Using Mailchimp's smart merge tags within the newsletter content generated by Drupal, I can target specific interest groups.

Like this:

*|INTERESTED:Interests:Sports and entertainment|*
Node Title B
*|END:INTERESTED|*

*|INTERESTED:Interests:Lifestyle|*
Node Title A
Node Title C
*|END:INTERESTED|*

Depending on the interest groups a user is subscribed to, only the content relevant to him or her will be shown in the newsletter, creating personalized newsletters. Pretty cool!
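
On the Drupal side, generating those sections is plain string building. A minimal sketch (the function name and the grouping structure are hypothetical):

/**
 * Wrap node titles in Mailchimp INTERESTED merge tags, one section per
 * interest group, so each subscriber only sees their own groups.
 */
function example_newsletter_body(array $nodes_by_group) {
  $sections = array();
  foreach ($nodes_by_group as $group => $nodes) {
    $lines = array('*|INTERESTED:Interests:' . $group . '|*');
    foreach ($nodes as $node) {
      $lines[] = check_plain($node->title);
    }
    $lines[] = '*|END:INTERESTED|*';
    $sections[] = implode("\n", $lines);
  }
  return implode("\n\n", $sections);
}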

There are a few limitations though. Over the past two days, I've been sparring with the Mailchimp support desk (They've been great and very helpful!) about the usage of commas in group names. You could do something like this:

*|INTERESTED:Interests:Sports and entertainment, lifestyle|*
Node Title D
Node Title E
*|END:INTERESTED|*

This acts as an OR filter, showing content to people who are subscribed to either 'Sports and entertainment' or 'lifestyle'. But what if your group names themselves contain commas? Say you had a group called 'Sports, Culture and entertainment'. Mailchimp will be confused and think these are two groups, 'Sports' and 'Culture and entertainment'. Since those are nonexistent groups, the content won't show up at all. I've tried escaping the commas and putting everything between quotes: nothing works.

This is the solution provided by the Mailchimp people:

"Regarding the use of commas in a group name, we would recommend removing these commas if at all possible.

We have passed this issue onto our developers for investigation. At this time we do not have an estimated time frame of when we will be able to hear back from them. However, we will continue to investigate and push a resolution as soon as one becomes available if our developers do believe this to be necessary.

At this time, removing all commas from the group names and creating an ELSE statement would be our top ideas!"

Awesome!

Of course, you're probably eager to hear about the Drupal bit of my Mailchimp adventures. Well, stay tuned. I've been toying with the Mailchimp API. And I like it. A lot. I've chalked up a few ideas. I'm going to let them stew a little bit longer until I've got a clear head about what is feasible.

Feb 10 2011
Feb 10


Here are the slides from the short talk I did on Development Seed's excellent Features module at the February Drupal Drop-In, hosted at Microsoft's offices in London.

It was a great evening, where several people did a short talk on their favourite modules, or modules that they find themselves using all the time. Thanks to everyone who came along! For those who missed it, there's a good re-cap over at UBelly.

Jan 25 2011
tom
Jan 25

Here at Krimson, we are working on our own super-duper starter theme. (Who's not, right?)
In that quest for our holy grail, I found some interesting tips and tricks I want to share.

You've certainly heard about HTML5 and the new semantics it provides us.
If you haven't, have a look at the magnificent keynote from Jeremy Keith at DrupalCon Copenhagen, called 'The Design of HTML5'.

One brave day, I started to build a Drupal theme based on those new HTML5 elements.
Very soon, a problem became clear: older browsers don't support those new semantic tags.

The HTML5 Boilerplate solves this by using Modernizr, a javascript library that detects a browser's support for HTML elements.
On top of that, it also provides a way to make older browsers handle elements like header, section, etc.

A quote from the modernizr website:

"Modernizr also adds support for styling and printing HTML5 elements. This allows you to use more semantic, forward-looking elements such as section, header and dialog without having to worry about them not working in Internet Explorer."

With the help of some clever javascript, modernizr creates those new elements in browsers that don't support them. Let's look at the technique used here:

(function () {
  if (!/*@cc_on!@*/0) return;
  var e = "abbr,article,aside,audio,bb,canvas,datagrid,datalist,details,dialog,eventsource,figure,footer,header,hgroup,mark,menu,meter,nav,output,progress,section,time,video".split(','), i = e.length;
  while (i--) { document.createElement(e[i]); }
})();

So here all elements are created by javascript, and when you actually use them in your CSS selectors, everything will work... until you turn off javascript!
This is where problems arise. Some statistics show that about 5% of all browsers don't have javascript capabilities or have it simply turned off.
Do we (or our clients) need to decide to simply ignore those users? Should we wait until everyone has migrated to more modern browsers?

The million dollar question...
I personally don't like that idea. I think we should work with the tools that are available right now.
When it's just about HTML5 semantics like header, footer, nav, section, article, etc., we can use some simple tricks so we can still use them in our markup.
But first of all, why do we want them in our markup?

Does it give us any advantage at all? Why bother?

It's all about adding some extra semantic richness to your document, which is good for its accessibility.
Screen readers know exactly what the navigation is and what the main article is, so in theory this could replace the 'skip to content' anchors, for example.

Another advantage of using HTML5 semantics is search engine optimization, or SEO for friends.
You can define more clearly which is which and what is what in your documents, and this all tastes like sweet apple pie to search engines.

Enough ranting; let's look at a small trick so you can start using those semantics without letting down older browsers that lack HTML5 support and have javascript turned off.

<header>
  <div class="header">
    <h1>Page Title</h1>
    <h2>Page Subtitle</h2>
  </div>
</header>

<article>
  <div class="article">
    <header>
      <h1>Article Title</h1>
    </header>
    <section class="main">
      <div class="main">
        Article goes here...
      </div>
    </section>
  </div>
</article>

And a CSS example:

div.header {
  color: #000;
}

div.article h1 {
  color: #fff;
}

You see, the point is to create a child div for each HTML5 element, and to use those divs in our CSS selectors.
Now when we write CSS for our markup, we don't use any HTML5 elements in our selectors; we use the child divs instead.
This way we get the best of both worlds: we can start using those new semantics, and we keep full browser compatibility.
At the very least, the new semantics won't be the show stopper.

Any disadvantages?

Nothing comes without a cost. The disadvantage of working with child divs is that you create extra, unnecessary markup.
But I think most Drupal themers can cope with that. :)

Other ideas

If you know a better/other way to achieve the same, please shoot.

Jan 03 2011
Jan 03

It has been officially announced that January 5th, 2011 will be etched in our memory as the release date of Drupal 7.0. This release will introduce many improvements for everyone: developers, themers, system administrators and end users. Time to take a look at 7 improvements that Drupal 7 brings for end users.

1. New core themes

No less than three new core themes have sneaked their way into Drupal 7, each targeted at its own audience:

Bartik

The new default front end theme (replacing Garland). This beautiful looking theme with tons of regions and color module support should allow everyone to create a pleasant and modern looking site without the hassle of creating their own theme.

Seven

The first administration theme shipped with Drupal core. A product from the Drupal 7 User Experience Project. Clean and no sidebars!

Stark

A CSS only theme. Ideal to test Drupal 7's default markup, or from which to create your own subtheme.

2. Improved administration interface

By means of a survey started by Dries Buytaert in 2007, it became clear that one of the focus points for the next version should be usability and user experience, in particular that of the administration interface. The Drupal 7 User Experience Project was thus started, and people such as Mark Boulton and Leisa Reichelt were hired to guide the project to achieve its goals. From that project and from the various activity in the issue queue were born a few modules to enhance the experience.

Toolbar, Shortcuts, Overlay, Contextual links, hook_admin_paths

One of the most irritating experiences for some beginner Drupal users is having to switch between the administration and front end theme. Three modules and one hook in Drupal 7 finally close that gap.

After enabling these modules you get a toolbar at the top of every page with the top level links of the Management menu (think of the Administration menu in Drupal 6). Below that toolbar you get a bunch of shortcuts where every user can collect his/her favorite and most used pages. Using the overlay, you never have to leave the front end page you're on: once you access an administration page, it opens in an overlay on top of your current page, and once you close it after performing your tasks, you're back on the page where you were.

Contextual links give you quick access to configure parts on a page. When you hover over a block a popup appears allowing you to edit the block, when you hover over an article teaser, the popup contains a link to edit the article, etc.

In previous versions of Drupal, the "administration environment" consisted of every page whose URL started with /admin (except for the content editing pages). Developers can now use hook_admin_paths to mark certain paths as belonging to the administration interface, as sketched below. Expect a few modules in contrib that will build on this to provide a user interface for non-programmers.
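
For module developers the hook is tiny. A sketch, with paths belonging to a hypothetical module:

/**
 * Implements hook_admin_paths().
 *
 * Mark these paths as administrative so they open in the overlay and
 * use the admin theme.
 */
function example_admin_paths() {
  return array(
    'example/settings' => TRUE,
    'example/*/edit' => TRUE,
  );
}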

Vertical tabs

Vertical tabs were already available as a contributed module in Drupal 6, but are now part of core. They make long forms shorter. You will notice this when writing articles, pages, … since the node form is now a lot shorter.

Improved navigation

The whole administration menu (called "Management" in 7, "Navigation" in 6 and before) has been reorganized into a much more logical structure.

More drag & drop

Drag & drop was introduced in Drupal 6; it's now available on more pages.

3. Improved installation and updates

Drupal 7 ships with a full-fledged Update manager, meaning you can now update your modules and themes without doing the whole "download, unpack, upload module and run update.php" routine. Once the update manager announces an update of a certain module, you can tell it to update the particular module using the web interface, and the routine is done for you by Drupal itself behind the scenes. The Update manager can also be used to install new modules and themes. Just provide it with the URL of a module or theme or upload the tarball and you're done.

Drupal also ships with two installation profiles: you can choose the minimal profile to install Drupal with as few modules enabled as possible, or choose the default profile that enables some modules and creates some content types for you.

The Drupal core installer has also been rewritten as an API, which means that you can install Drupal from the command line. Try it with drush site-install.
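
For example (the database credentials and names are placeholders):

$ drush site-install standard --db-url=mysql://user:pass@localhost/example \
    --account-name=admin --account-pass=secret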

4. CCK in core (now Fields)

Tears of joy! Our beloved Content Construction Kit (CCK) is now part of core. It is re-baptized as Fields and it rocks.

It's a whole new beast because without extra modules you can now not only add fields to all your content types, but also to users and taxonomy terms.

What the heck… it's written so generic that you can now actually add fields on practically everything (that supports it). Start the (r)evolution.

5. Lots of contrib module functionality in core

A lot of functionality existing in Drupal 6 as contributed module, has been ported to Drupal 7. It is actually so much that more than 50 contrib modules (probably even a lot more) have become obsolete for Drupal 7.

To mention only a few: ImageCache's functionality appears in Drupal 7 as "Image styles", complete with preview functionality and all.

Those always forgetting to set up cron, or not able to, will be happy to know that Poormanscron's functionality is now available in core. As soon as you have installed Drupal, cron is set up to run every 3 hours.

6. Better user management

One of the most frustrating things in Drupal 6 was the 'administer nodes' permission, which gives users more rights than you want. In Drupal 7 this issue has been solved by splitting this permission into more granular permissions like 'Bypass content access control', 'Administer content types', 'Administer content', 'Access the content overview page', …

Instead of only having a machine name as in Drupal 6, permissions now have a human readable name and description, and may also carry a warning: 'Warning: Give to trusted roles only; this permission has security implications'.

To make your site more secure, you can now keep the User 1 account secret and give people "User 1 like permissions" using the Administrator role, a role that receives all permissions by default, something that was only possible in Drupal 6 using the Admin role module.

New in Drupal 7 is also that users can cancel their own accounts. With that comes support for handling the cancellation of accounts (by the administrator or by the owner of the account). You are given a few options for how Drupal should treat the user's content and profile.

7. Better support for multilingual sites

Drupal 6, maintained by Gábor Hojtsy, put a lot of focus on making multilingual sites with Drupal easier and better. In Drupal 7 quite a bit of cool stuff was introduced again.

Language negotiation got improved significantly. You can now completely configure what gets precedence when Drupal decides what the active language is, based on the URL, a session variable, user preference, browser settings, ...

Something you will notice as soon as you install Drupal 7 is that we now have timezone support. This also means Drupal 7 supports daylight saving time.

The translation system in Drupal 7 is now also aware of the context a certain string is used in. For example, the string "May" may be used as the month "May" in one context and as the verb "may" in another. In Dutch these would translate to "mei" and "mogen" respectively.
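
In code, the context travels as an option to t(); the context strings below are illustrative:

// Without a context, both uses of "May" would share a single translation.
$month = t('May', array(), array('context' => 'Long month name'));
$verb  = t('May', array(), array('context' => 'Ability'));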

To close off this impressive round of improvements to the language system, we can also note that Search can now work in a language-aware way.

More improvements

This list of improvements is part of a presentation that Krimson will give at the Drupal 7 release party they are organizing together with Calibrate. The presentation will highlight 42 more improvements, which, together with the ones above, gives us 7 x 7 reasons to kick off your next project in Drupal 7. Join us!

Dec 15 2010
Dec 15


I gave a talk on "Feeding Drupal in Real-Time" at the Guardian on Tuesday, for the Drupal Drop-In event. It was a great evening, I met lots of interesting people and enjoyed some fantastic presentations. Thanks to everyone who came and made the event a success, especially Robert Castelo and Mark Baker for organising, and Microsoft and the Guardian for sponsoring.

My slides from the talk are below, but I also did two live demonstrations which are kinda hard to reproduce here! However, here's what happened for anyone who missed it:

  1. We used Feeds module to import Flickr photos with location data and display them on a Google map.

  2. We imported Gowalla check-ins and used pubsubhubbub to show my location update in real-time on a Drupal Gmap, as I checked in to the Guardian HQ. This was very exciting and almost definitely doomed to fail on the day, but by some fluke of fortune, it didn't!

I'll write blog posts with more detailed instructions for replicating these demonstrations soon; you can be informed of these by subscribing to our pubsubhubbub-enabled RSS feed. :)

Dec 13 2010
Dec 13

Riding the semweb

A few weeks back, we blogged about the Semantic Web and how it will gain more importance in day-to-day life. We've seen how the lack of easy-to-use tools to leverage its power is keeping it from becoming mainstream, and saw how Drupal fits into the story. And so Krimson, in an effort to bring the semweb to Drupal, takes part in a Flemish government-sponsored research project called Archipel.

Archipel is a consortium of private and public stakeholders: sociocultural entities, academic institutions and private enterprises. It's a research project that runs over two years and is sponsored by IWT (the government agency for Innovation by Science and Technology). Its main goal is to create a common platform which facilitates the exchange of data in an open and transparent fashion between large repositories that contain digitized audiovisual heritage. The project relies on concepts and technologies taken from the semantic web: linked open data, RDF, SPARQL, OAI-PMH harvesting...

Krimson has been engaged as a technical partner. Our role is to realize a series of project sites that interact with a common open data layer. These sites are real use cases with hard functional requirements issued by other partners that act as 'clients' within the Archipel project. This approach allows us to test Drupal modules that support semantic technologies, discover gaps and give feedback to their maintainers.

As the project has almost rounded its first year and parts of the platform are slowly becoming functional, Krimson also met the core goals of its first project case.

The Toneelstof case

Our first client, VTi (Vlaams Theater Instituut / Institute for the professional performing arts in Flanders), runs a successful project called Toneelstof that documents the history of the performing arts in the Low Countries. Over the past years, the main deliverables of Toneelstof were sets of DVDs containing archived interviews with important players (directors, actors, producers, writers,...) and other historical documents (video clips from plays,...). VTi plays a double role: acting as a provider, its holdings are opened up through the Archipel platform; as a consumer, the Toneelstof site reuses data stored in the shared open layer.

Our other partners, Inuits and IBBT (Interdisciplinary institute for BroadBand Technology), created an environment allowing easy ingest of objects by VTi. Objects are harvested via the OAI-PMH protocol and stored in a central triple store. The objects consist of an archival copy and dissemination copies in different web accessible formats. Metadata is mapped to Dublin Core. The triple store features a SPARQL endpoint through which data is made available to the outside world. Krimson is to build a website for the Toneelstof case that can connect to the SPARQL endpoint, launch a SPARQL query, retrieve video clips and their accompanying metadata, and present them to the end user in a usable and accessible way.

The website itself isn't ready for release yet, but we've made a screencast of the current state of things:

Support for semantic web technology is a fast-evolving domain within the Drupal ecosystem. There are no production-ready modules available off the shelf, so our options were limited. We could have built our own custom solution, but that would entail several drawbacks.

  • Developing from scratch, without community support, takes a lot of time.
  • Building our own components means less chance of reusing them in other projects...
  • ... components that might not be suitable for contribution back to the community.
  • Since there are already several modules under way, we might end up reinventing the wheel.

So, we decided to base the Toneelstof project on existing modules that offer SPARQL support. This approach would cover the first miles without having to invest extra effort. If we were to need a new feature or encounter bugs, we could dive into the code and contribute our own solutions as patches to the different module projects. We would also enjoy the benefits of community feedback as we published our patches for testing.

SPARQL Views

It quickly became apparent that SPARQL Views would be our weapon of choice. This module allows you to compose and issue queries to remote SPARQL endpoints through the Views module. Building on top of the Views API gives development a serious boost, since Views does all the heavy lifting: handling of filters and arguments, rendering of an entire view, rows and fields, and dynamic composition and execution of a query. The SPARQL Views module's goal is to integrate the ARC2 library's functionality into Views and adapt the interface so it can also handle SPARQL query composition.

Lin Clark, maintainer of SPARQL Views, has created several screencasts on how this module works:

Did we benefit from this approach? Yes: rather than building everything from scratch, we were able to spend time improving the SPARQL Views module. We ended up fixing several bugs and adding two features to the module: support for Views argument handling and Views pagers.

We had to pass multiple keywords to the view as an array of arguments, so we added a primitive argument handler to the module that does just that. SPARQL lets you use a variety of functions in FILTER expressions, but at the moment our SPARQL Views arguments only understand the regex() function. This is enough to suit our purposes; other useful functions still need to be implemented.
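For illustration, the handler boils down to turning each keyword into a regex() call inside a FILTER clause, roughly like this (a hypothetical sketch; the function and variable names are ours, not the module's actual API):

<?php
// Hypothetical sketch: turn an array of keyword arguments into a
// SPARQL FILTER clause. Names are illustrative, not the actual
// SPARQL Views argument handler API.
function toneelstof_keyword_filter($variable, array $keywords) {
  $conditions = array();
  foreach ($keywords as $keyword) {
    // regex() is currently the only FILTER function the handler
    // understands; the "i" flag makes the match case-insensitive.
    $conditions[] = 'regex(?' . $variable . ', "' . addslashes($keyword) . '", "i")';
  }
  // Multiple keywords are AND-ed together into a single FILTER.
  return 'FILTER (' . implode(' && ', $conditions) . ')';
}

// toneelstof_keyword_filter('title', array('theater', 'dans')) yields:
// FILTER (regex(?title, "theater", "i") && regex(?title, "dans", "i"))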

The module didn't support paging either, so we wrote a pager class specifically for SPARQL Views. For any kind of paging to work, two queries are needed: a COUNT query to establish the total number of objects in a result set, and a ranged query to retrieve the actual objects for a given page number. The SPARQL specification is still evolving, though, and lacks certain features that are available in other query languages; a well-defined COUNT modifier is one of them. The ARC2 library provides its own SPARQL+ extensions, which include COUNT support, but if the endpoint doesn't support SPARQL+, a COUNT query will return an error. Instead, our pager class retrieves the entire result set and determines the total number of objects in PHP. This solution works for small sets but doesn't scale well when queries return larger sets of data.
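In outline, the fallback looks something like this (a minimal sketch; the class name and the Views integration are hypothetical, and $store is an ARC2 remote store as in the earlier example):

<?php
// Hypothetical sketch of the paging fallback; the real pager is
// wired into Views, only the counting logic is shown here.
class toneelstof_sparql_pager {
  public $total_items = 0;

  function execute($store, $query, $page, $items_per_page) {
    // Without a reliable COUNT modifier on the endpoint, fetch the
    // whole result set and count the rows in PHP. Fine for small
    // sets, but it will not scale to larger ones.
    $rows = $store->query($query, 'rows');
    $this->total_items = count($rows);

    // A page is then just a slice of the full result set.
    return array_slice($rows, $page * $items_per_page, $items_per_page);
  }
}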

While SPARQL Views harnesses the power of the Views framework, there are several drawbacks. Views requires you to register your data sources before you can query them; it doesn't automatically introspect the entire database structure. This means you have to explicitly define tables and their relationships in code using hook_views_data(). Handlers for fields, filters and arguments are statically associated with those fields, which allows Views to apply the correct handlers at runtime.
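For comparison, this is the kind of static definition Views normally expects, shown here for an ordinary SQL table (a minimal sketch; the module, table and field names are hypothetical):

<?php
// Hypothetical hook_views_data() implementation: every queryable
// source is declared up front, with handlers statically attached
// to each field.
function mymodule_views_data() {
  $data['mymodule_clip']['table']['group'] = t('Clips');
  $data['mymodule_clip']['table']['base'] = array(
    'field' => 'cid',
    'title' => t('Video clips'),
  );
  $data['mymodule_clip']['title'] = array(
    'title' => t('Title'),
    'help' => t('The title of the clip.'),
    'field' => array('handler' => 'views_handler_field'),
    'filter' => array('handler' => 'views_handler_filter_string'),
    'argument' => array('handler' => 'views_handler_argument_string'),
  );
  return $data;
}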

In SPARQL, variables are bound dynamically. Without a static definition of the available fields, Views does not know which handlers to instantiate. SPARQL Views tries to solve this by running the query from within hook_views_data() and associating a mapping based on the structure of the result set. Views' architecture is not built to alter the data definition in the context of hook_views_data() when a query is run, though, which forced SPARQL Views into a series of nasty hacks to register those fields and handlers nonetheless. Another tradeoff is that Views caching has to be disabled to make this work.

SPARQL Views comes with its own generic field handler, which is applied to all the attributes in a result set. Part of the flexibility of the Views framework is its ability to instantiate specific handlers that are assigned to fields depending on their datatype; for instance, Imagecache formatters are only available for fields associated with the Imagefield handler. SPARQL Views is not yet able to automatically determine the type of a field and assign a specific handler. A possible solution might be tracking down the predicate of the matching triple pattern and looking at the associated schema against which the query was run. For now, the lack of typed fields restricts developers to the theming layer, overriding theme_views_view_field() with project-specific code to get the job done. Modules like Display Suite do make it easier to theme the overall view.
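Such a theme-layer workaround might look like this (a minimal sketch; the view name and field checks are hypothetical):

<?php
// Hypothetical override of theme_views_view_field(): without typed
// handlers, formatting decisions end up in the theme layer.
function mytheme_views_view_field($view, $field, $row) {
  $value = $row->{$field->field_alias};
  // Special-case a field we happen to know holds an image URL.
  if ($view->name == 'toneelstof_clips' && $field->field == 'thumbnail') {
    return '<img src="' . check_url($value) . '" alt="" />';
  }
  // Fall back to plain output for everything else.
  return check_plain($value);
}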

Conclusion

The best way to conclude our first year on the Archipel project is to provide an answer to a few questions.

Is it easy to query and reuse open data in our own Drupal projects?

Publishing RDF-formatted data has made a few leaps in the past year, but querying data is still non-trivial. SPARQL is an unfinished specification; major features like aggregate queries have yet to be defined. Most tools are still experimental. SPARQL Views is arguably the most flexible tool available, although it's still under heavy development and comes with its quirks. A good understanding of RDF and SPARQL is still required if you want to ride the semweb.

What about performance?

The common triple store only contains a few dozen objects and our SPARQL queries are fairly simple, so there is no notable performance hit at this point. With the problems raised in this article in mind, and with the amount of data in the triple store about to grow, scaling up is our next challenge.

So, when will SPARQL Views be really ready?

This is the classic chicken-and-egg dilemma. Without testing and contributions, it will take longer for tools like SPARQL Views to evolve; then again, as long as they are experimental, developers tend to stay away from them. If the Toneelstof case proves one thing, it's this: starting to use these tools and returning feedback drives development.

How can I help?

If you're up to it, start by downloading and reading the instructions at the project homepage. The latest version of the module is available on GitHub.

Aug 20 2010
Aug 20


In this article we will walk through setting up a simple load-testing scenario for Drupal applications using Amazon's Elastic Compute Cloud (EC2). EC2 enables you to set up testing scenarios easily and at relatively low cost; for example, you can find out what effect adding an additional database server will have without actually buying one. JMeter will allow us to create complex test plans to measure the effect of our optimisations. We'll set up a remote JMeter load generator on EC2 that we control from our desktop.

Improving Drupal's performance is beyond the scope of this article, but we'll talk more about that in future. If you need some suggestions now then check out the resources section for links to good Drupal optimisation articles.

Setting up your test site on EC2

If you don’t already have an account, you’ll need to sign up for Amazon Web Services. It’s all rather space-age, and if you haven’t been through the process before it can be a bit confusing. We want to set up a working copy of our site to test on EC2, so once you have your AWS account the process goes something like this:

  • Select an AMI (Amazon Machine Image) that matches your production environment - we use alestic.com as a good source of Debian and Ubuntu AMIs.

  • Create a high-CPU instance running your AMI. Small instances only have one virtual CPU, which can be stolen by other VMs running on the same physical hardware and can seriously skew your results when running a test. There is always going to be a certain amount of variance in the actual CPU time available to your instance, since it is always sharing the physical hardware, but we find that high-CPU instances reduce the CPU contention issues to a reasonable level.

  • Give your instance an Elastic IP, which is Amazon's term for a static IP that you can use to connect to it.

  • SSH into the machine. You’ll need to make sure that ports 80 and 22 are open in the security group, and set up a keypair. Download the private key and use it when connecting; the simplest way is to do:

ssh -i /path/to/your/private/key.pem [email protected] 
  • Install the LAMP server packages you require, trying to mirror the production environment as closely as possible. A typical LAMP server can be installed on Debian/Ubuntu by running:
apt-get install apache2 php5 php5-mysql php5-gd mysql-server php5-curl
  • Now you need to set up a copy of the site you want to test on your new server. EC2 instances give you a certain amount of ephemeral storage, which is destroyed when the instance is terminated but persists between reboots; it can be found at /mnt. If you want to terminate your instance but may need the test sites you are going to create again, it's a good idea to back up /mnt to Amazon S3, as shown below.
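One way to do that backup, assuming the s3cmd tool is installed and configured and you have created a bucket (the bucket name below is a placeholder):

# Sync the ephemeral storage to S3 so test sites survive termination.
s3cmd sync /mnt/ s3://my-test-sites-bucket/mnt/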

  • We will create two copies of the site, one called “control” and another called “optimised”. Give them each their own virtual host definition and make sure that each points to its own copy of the database. “Control” should be left alone; we’ll use this version to get baseline statistics for each test plan. We’ll tweak and tune “optimised” to improve the performance, and compare our results with “control”. Give each of the sites an obvious subdomain so that we can connect to them easily without getting confused. You should end up with two copies of your site set up under /mnt, with separate domains and databases, something like this:

http://foo-control.bar.com   -> /mnt/sites/foo/control/   -> DB = foo_control
http://foo-optimised.bar.com -> /mnt/sites/foo/optimised/ -> DB = foo_optimised
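The virtual host definitions themselves can stay minimal; for example (hostnames and document roots as in the mapping above):

<VirtualHost *:80>
  ServerName foo-control.bar.com
  DocumentRoot /mnt/sites/foo/control
</VirtualHost>

<VirtualHost *:80>
  ServerName foo-optimised.bar.com
  DocumentRoot /mnt/sites/foo/optimised
</VirtualHost>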

Setting up JMeter to generate load

We don't want fluctuating network bandwidth to affect our results, so it's best to run a JMeter engine on a separate EC2 instance and control it from JMeter running on our local machine. First we'll get JMeter generating load from our local machine, then we'll set up a remote JMeter engine on EC2.

  • First download JMeter; you'll also need a recent Java runtime installed. On OS X, I moved the downloaded JMeter package to Applications, and then ran it by executing bin/jmeter.

  • If you're new to JMeter, you can download some sample JMeter test plans for stress testing a Drupal site from the nice guys at Pantheon, or just create your own simple plan and point it at your test server on EC2.

  • Now that we have a basic test plan in place, we should spin up another EC2 instance that we'll use to generate the load on our test server. This provides more reliable results, as it removes our local network bandwidth from the equation; we'll still use our local JMeter to control the remote load generator. We used a prebuilt AMI that comes with Ubuntu and JMeter already installed. JMeter has some good documentation on how to tell your local JMeter to connect to the remote machine; in essence, you need to add the remote machine's IP address to your local jmeter.properties file.
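For example, the relevant line in jmeter.properties would look like this (the IP address is a placeholder for your load generator's elastic IP):

remote_hosts=203.0.113.10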

  • You'll need to open a port on EC2 for JMeter to talk to the remote engine: add TCP port 9090 to the security group that the AMI is running under.

  • We found that JMeter started to hang when we increased the amount of data being transferred in our test plans. Tailing jmeter.log told us that we were running out of memory; increasing the available heap size solved this.
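One way to raise the heap (assuming a JMeter startup script that honours JVM_ARGS; the sizes are just a starting point) is to set it when launching JMeter:

# Give JMeter a 1 GB heap before launching it.
JVM_ARGS="-Xms1024m -Xmx1024m" bin/jmeter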

  • Test, test, and test some more. It's important to repeat your tests to make sure you're getting reliable results. It's also important to know that your tests are representative of average user behaviour: you can set up JMeter as a proxy that captures your browsing activity and replays it as a test plan, and it's also possible to replay Apache logs as test plans.

Resources

Jul 13 2010
Jul 13

One of the biggest tasks when building a Drupal site is selecting, configuring, and integrating contrib modules into your site. For almost everything you need to do "there's a module for that", but whether you should be using that module is an entirely different question.

For every new module I choose for a site, I go through some quick steps and questions (mostly unconsciously now) to determine whether I should risk using it.

1. Most recent release - While this may not be a reliable indicator of whether the module is stable, it tends to be a pretty good guide to how actively maintained it is. The first thing I do, before even reading through the description, is check the latest release. If the project doesn't have a release in over a year, this usually isn't a good sign.

2. Issue queue - The second thing I do is check the issue queue. I expect the average module to have about half a page to a page of open tickets; for larger modules you can probably expect several pages. I'm mostly concerned with whether the maintainer is actually responding to tickets and active with the project. If simple tickets with patches have been sitting open for several months with no response from a maintainer, that's a very bad sign.

3. Usage stats - Every project page on drupal.org has a link to "usage statistics". In general, the more users a module has, the better bug-tested it is.

4. Documentation - Is the project description useful? How about the README? Whether you read the docs or not, knowing they exist is important. In my experience, a project with no README and poor documentation is generally very lacking in other areas too.

5. Post-install troubles - Any trouble figuring out how to use the module after you've enabled it, even after you read the README? That's probably a sign that you'll have problems integrating and configuring the module with your site.

6. Code structure - These days I almost always take a peek at the code of every module I use on a site. If you're not a developer yourself this won't mean as much to you, but taking a quick scan through the code can set off alarm bells pretty quickly.

You'll likely run into modules that fail in every one of these areas. It might be a good idea to take a pass on such a module and try to find similar functionality elsewhere. If it's a module that's important enough to your site, ask to become a co-maintainer. Just please don't create a second, identical module on drupal.org because you thought you could do a better job.

There are also a few other tools that can aid in selecting modules, such as http://drupalmodules.com, which lets you rate and comment on modules you've used.

As a developer, for me choosing a module comes down to three main questions:

  1. How responsive is the maintainer / How active is the development?
  2. How important is it for me to have this functionality?
  3. Do I have time to take the module on and clean it up myself?

Less important functionality simply gets axed if the module isn't up to par. But ultimately a lot of the selection process comes down to experience, as well as asking others which modules have worked for them.

For every site I maintain, I keep a running list of which modules are likely to cause problems. This helps to identify the culprit when "unexplained" things happen, and it also tells you which areas to clean up or replace when you've got a few spare moments to work on the site.

Sep 19 2008
Sep 19

I apologise for my last post on this topic, it probably wasn't very interesting :-)

I've done the Drupal 6 upgrade, and it was relatively painless. Most modules ported smoothly, a few required me to learn how to port modules to Drupal 6, and one I just gave up on.

On the whole, the porting is simple; drupal.org has a pretty good howto on the topic. A few APIs have changed, and that's about it. A great tool to help with this is the Coder module, which knows about the API changes as well as Drupal's coding standards.
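To give a flavour of the kind of change involved: the hook_form_alter() signature was reordered between Drupal 5 and 6, so ports tend to involve mechanical rewrites like this (the module name is hypothetical):

<?php
// In Drupal 5 this hook was declared as:
//   function mymodule_form_alter($form_id, &$form) { ... }
// In Drupal 6 the arguments were reordered and $form_state was added:
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  // Example: relabel the login button.
  if ($form_id == 'user_login') {
    $form['submit']['#value'] = t('Sign in');
  }
}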

I've added the GeSHi module for code syntax highlighting (apologies for the planet spam caused by this), and I've moved from marksmarty to markdown + typogrify (which I had to port to Drupal 6). I'm not too happy with the GeSHi colour scheme and indenting, but it does a good enough job. I should write a "command prompt" mode for it, but that can wait for now...

Akismet is currently totally broken for Drupal 6, even though it's labelled as a beta. I got about half way through porting it before giving up and switching to Mollom, which looks like a pretty good replacement (and it takes care of the sign-up form too).

Finally, the subject of input filters. Drupal lets you define a "default filter", but that filter has to be available to everyone, even for comments, so your default filter has to protect against XSS. I'd much prefer it if commenters used a simple, locked-down input format while I used a nice markdown format.

I'm not the only one to notice this, and it seems like it'll be fixed in Drupal 7. Until then, I'm using remember-filter, which remembers that I use markdown while all the commenters use the default, locked-down filter. (Again, ported.)

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
